Jul 15 11:33:04.853576 kernel: Linux version 5.15.188-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Tue Jul 15 10:04:37 -00 2025
Jul 15 11:33:04.853594 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=3fdbb2e3469f90ee764ea38c6fc4332d45967696e3c4fd4a8c65f8d0125b235b
Jul 15 11:33:04.853602 kernel: BIOS-provided physical RAM map:
Jul 15 11:33:04.853608 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 15 11:33:04.853613 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 15 11:33:04.853618 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 15 11:33:04.853625 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jul 15 11:33:04.853630 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jul 15 11:33:04.853637 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jul 15 11:33:04.853642 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jul 15 11:33:04.853648 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 15 11:33:04.853653 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 15 11:33:04.853658 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 15 11:33:04.853664 kernel: NX (Execute Disable) protection: active
Jul 15 11:33:04.853672 kernel: SMBIOS 2.8 present.
Jul 15 11:33:04.853678 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jul 15 11:33:04.853683 kernel: Hypervisor detected: KVM
Jul 15 11:33:04.853689 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 15 11:33:04.853695 kernel: kvm-clock: cpu 0, msr 1e19b001, primary cpu clock
Jul 15 11:33:04.853701 kernel: kvm-clock: using sched offset of 2406443681 cycles
Jul 15 11:33:04.853707 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 15 11:33:04.853713 kernel: tsc: Detected 2794.750 MHz processor
Jul 15 11:33:04.853720 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 15 11:33:04.853728 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 15 11:33:04.853734 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jul 15 11:33:04.853740 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 15 11:33:04.853746 kernel: Using GB pages for direct mapping
Jul 15 11:33:04.853752 kernel: ACPI: Early table checksum verification disabled
Jul 15 11:33:04.853758 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jul 15 11:33:04.853764 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 11:33:04.853770 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 11:33:04.853776 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 11:33:04.853783 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jul 15 11:33:04.853789 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 11:33:04.853795 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 11:33:04.853801 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 11:33:04.853807 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 11:33:04.853813 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jul 15 11:33:04.853819 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jul 15 11:33:04.853825 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jul 15 11:33:04.853835 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jul 15 11:33:04.853841 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jul 15 11:33:04.853848 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jul 15 11:33:04.853854 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jul 15 11:33:04.853860 kernel: No NUMA configuration found
Jul 15 11:33:04.853867 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jul 15 11:33:04.853874 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Jul 15 11:33:04.853881 kernel: Zone ranges:
Jul 15 11:33:04.853887 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 15 11:33:04.853893 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jul 15 11:33:04.853900 kernel: Normal empty
Jul 15 11:33:04.853906 kernel: Movable zone start for each node
Jul 15 11:33:04.853912 kernel: Early memory node ranges
Jul 15 11:33:04.853919 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 15 11:33:04.853925 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jul 15 11:33:04.853933 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jul 15 11:33:04.853939 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 15 11:33:04.853945 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 15 11:33:04.853952 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jul 15 11:33:04.853958 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 15 11:33:04.853965 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 15 11:33:04.853971 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 15 11:33:04.853977 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 15 11:33:04.853984 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 15 11:33:04.853990 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 15 11:33:04.853998 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 15 11:33:04.854018 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 15 11:33:04.854025 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 15 11:33:04.854031 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 15 11:33:04.854037 kernel: TSC deadline timer available
Jul 15 11:33:04.854043 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jul 15 11:33:04.854050 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 15 11:33:04.854056 kernel: kvm-guest: setup PV sched yield
Jul 15 11:33:04.854062 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jul 15 11:33:04.854071 kernel: Booting paravirtualized kernel on KVM
Jul 15 11:33:04.854077 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 15 11:33:04.854084 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
Jul 15 11:33:04.854090 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288
Jul 15 11:33:04.854097 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152
Jul 15 11:33:04.854103 kernel: pcpu-alloc: [0] 0 1 2 3
Jul 15 11:33:04.854112 kernel: kvm-guest: setup async PF for cpu 0
Jul 15 11:33:04.854126 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0
Jul 15 11:33:04.854145 kernel: kvm-guest: PV spinlocks enabled
Jul 15 11:33:04.854159 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 15 11:33:04.854166 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Jul 15 11:33:04.854172 kernel: Policy zone: DMA32
Jul 15 11:33:04.854179 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=3fdbb2e3469f90ee764ea38c6fc4332d45967696e3c4fd4a8c65f8d0125b235b
Jul 15 11:33:04.854186 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 15 11:33:04.854201 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 15 11:33:04.854208 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 15 11:33:04.854215 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 15 11:33:04.854224 kernel: Memory: 2436696K/2571752K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47476K init, 4104K bss, 134796K reserved, 0K cma-reserved)
Jul 15 11:33:04.854231 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 15 11:33:04.854237 kernel: ftrace: allocating 34607 entries in 136 pages
Jul 15 11:33:04.854244 kernel: ftrace: allocated 136 pages with 2 groups
Jul 15 11:33:04.854250 kernel: rcu: Hierarchical RCU implementation.
Jul 15 11:33:04.854257 kernel: rcu: RCU event tracing is enabled.
Jul 15 11:33:04.854263 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 15 11:33:04.854270 kernel: Rude variant of Tasks RCU enabled.
Jul 15 11:33:04.854276 kernel: Tracing variant of Tasks RCU enabled.
Jul 15 11:33:04.854284 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 15 11:33:04.854291 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 15 11:33:04.854297 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jul 15 11:33:04.854303 kernel: random: crng init done
Jul 15 11:33:04.854310 kernel: Console: colour VGA+ 80x25
Jul 15 11:33:04.854316 kernel: printk: console [ttyS0] enabled
Jul 15 11:33:04.854323 kernel: ACPI: Core revision 20210730
Jul 15 11:33:04.854329 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 15 11:33:04.854336 kernel: APIC: Switch to symmetric I/O mode setup
Jul 15 11:33:04.854347 kernel: x2apic enabled
Jul 15 11:33:04.854353 kernel: Switched APIC routing to physical x2apic.
Jul 15 11:33:04.854360 kernel: kvm-guest: setup PV IPIs
Jul 15 11:33:04.854366 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 15 11:33:04.854373 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jul 15 11:33:04.854379 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Jul 15 11:33:04.854386 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 15 11:33:04.854392 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 15 11:33:04.854399 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 15 11:33:04.854411 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 15 11:33:04.854417 kernel: Spectre V2 : Mitigation: Retpolines
Jul 15 11:33:04.854424 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 15 11:33:04.854432 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jul 15 11:33:04.854439 kernel: RETBleed: Mitigation: untrained return thunk
Jul 15 11:33:04.854446 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 15 11:33:04.854453 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Jul 15 11:33:04.854460 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 15 11:33:04.854467 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 15 11:33:04.854474 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 15 11:33:04.854481 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 15 11:33:04.854488 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jul 15 11:33:04.854495 kernel: Freeing SMP alternatives memory: 32K
Jul 15 11:33:04.854501 kernel: pid_max: default: 32768 minimum: 301
Jul 15 11:33:04.854508 kernel: LSM: Security Framework initializing
Jul 15 11:33:04.854515 kernel: SELinux: Initializing.
Jul 15 11:33:04.854521 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 15 11:33:04.854529 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 15 11:33:04.854536 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jul 15 11:33:04.854543 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 15 11:33:04.854550 kernel: ... version: 0
Jul 15 11:33:04.854556 kernel: ... bit width: 48
Jul 15 11:33:04.854563 kernel: ... generic registers: 6
Jul 15 11:33:04.854570 kernel: ... value mask: 0000ffffffffffff
Jul 15 11:33:04.854576 kernel: ... max period: 00007fffffffffff
Jul 15 11:33:04.854583 kernel: ... fixed-purpose events: 0
Jul 15 11:33:04.854591 kernel: ... event mask: 000000000000003f
Jul 15 11:33:04.854597 kernel: signal: max sigframe size: 1776
Jul 15 11:33:04.854604 kernel: rcu: Hierarchical SRCU implementation.
Jul 15 11:33:04.854611 kernel: smp: Bringing up secondary CPUs ...
Jul 15 11:33:04.854617 kernel: x86: Booting SMP configuration:
Jul 15 11:33:04.854624 kernel: .... node #0, CPUs: #1
Jul 15 11:33:04.854631 kernel: kvm-clock: cpu 1, msr 1e19b041, secondary cpu clock
Jul 15 11:33:04.854637 kernel: kvm-guest: setup async PF for cpu 1
Jul 15 11:33:04.854644 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0
Jul 15 11:33:04.854652 kernel: #2
Jul 15 11:33:04.854659 kernel: kvm-clock: cpu 2, msr 1e19b081, secondary cpu clock
Jul 15 11:33:04.854665 kernel: kvm-guest: setup async PF for cpu 2
Jul 15 11:33:04.854672 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0
Jul 15 11:33:04.854679 kernel: #3
Jul 15 11:33:04.854685 kernel: kvm-clock: cpu 3, msr 1e19b0c1, secondary cpu clock
Jul 15 11:33:04.854692 kernel: kvm-guest: setup async PF for cpu 3
Jul 15 11:33:04.854699 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0
Jul 15 11:33:04.854705 kernel: smp: Brought up 1 node, 4 CPUs
Jul 15 11:33:04.854713 kernel: smpboot: Max logical packages: 1
Jul 15 11:33:04.854720 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Jul 15 11:33:04.854727 kernel: devtmpfs: initialized
Jul 15 11:33:04.854733 kernel: x86/mm: Memory block size: 128MB
Jul 15 11:33:04.854740 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 15 11:33:04.854747 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 15 11:33:04.854754 kernel: pinctrl core: initialized pinctrl subsystem
Jul 15 11:33:04.854761 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 15 11:33:04.854767 kernel: audit: initializing netlink subsys (disabled)
Jul 15 11:33:04.854775 kernel: audit: type=2000 audit(1752579184.520:1): state=initialized audit_enabled=0 res=1
Jul 15 11:33:04.854782 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 15 11:33:04.854788 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 15 11:33:04.854795 kernel: cpuidle: using governor menu
Jul 15 11:33:04.854802 kernel: ACPI: bus type PCI registered
Jul 15 11:33:04.854809 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 15 11:33:04.854815 kernel: dca service started, version 1.12.1
Jul 15 11:33:04.854822 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jul 15 11:33:04.854829 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820
Jul 15 11:33:04.854837 kernel: PCI: Using configuration type 1 for base access
Jul 15 11:33:04.854843 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 15 11:33:04.854850 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Jul 15 11:33:04.854857 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Jul 15 11:33:04.854864 kernel: ACPI: Added _OSI(Module Device)
Jul 15 11:33:04.854870 kernel: ACPI: Added _OSI(Processor Device)
Jul 15 11:33:04.854877 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 15 11:33:04.854884 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Jul 15 11:33:04.854890 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Jul 15 11:33:04.854897 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Jul 15 11:33:04.854905 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 15 11:33:04.854912 kernel: ACPI: Interpreter enabled
Jul 15 11:33:04.854918 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 15 11:33:04.854925 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 15 11:33:04.854932 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 15 11:33:04.854938 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 15 11:33:04.854945 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 15 11:33:04.855074 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 15 11:33:04.855149 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jul 15 11:33:04.855226 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jul 15 11:33:04.855236 kernel: PCI host bridge to bus 0000:00
Jul 15 11:33:04.855308 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 15 11:33:04.855368 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 15 11:33:04.855428 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 15 11:33:04.855499 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jul 15 11:33:04.855569 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 15 11:33:04.855630 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jul 15 11:33:04.855694 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 15 11:33:04.855773 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jul 15 11:33:04.855853 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jul 15 11:33:04.855923 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Jul 15 11:33:04.855993 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Jul 15 11:33:04.856082 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Jul 15 11:33:04.856150 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 15 11:33:04.856255 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jul 15 11:33:04.856327 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Jul 15 11:33:04.856405 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Jul 15 11:33:04.856475 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Jul 15 11:33:04.856553 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jul 15 11:33:04.856621 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Jul 15 11:33:04.856688 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Jul 15 11:33:04.856754 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Jul 15 11:33:04.856851 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jul 15 11:33:04.856921 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Jul 15 11:33:04.856988 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Jul 15 11:33:04.857072 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Jul 15 11:33:04.857139 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Jul 15 11:33:04.857220 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jul 15 11:33:04.857291 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 15 11:33:04.857365 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jul 15 11:33:04.857433 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Jul 15 11:33:04.857502 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Jul 15 11:33:04.857579 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jul 15 11:33:04.857647 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jul 15 11:33:04.857656 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 15 11:33:04.857663 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 15 11:33:04.857670 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 15 11:33:04.857678 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 15 11:33:04.857686 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 15 11:33:04.857698 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 15 11:33:04.857706 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 15 11:33:04.857713 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 15 11:33:04.857720 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 15 11:33:04.857726 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 15 11:33:04.857733 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 15 11:33:04.857740 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 15 11:33:04.857747 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 15 11:33:04.857753 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 15 11:33:04.857761 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 15 11:33:04.857768 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 15 11:33:04.857775 kernel: iommu: Default domain type: Translated
Jul 15 11:33:04.857782 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 15 11:33:04.857852 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 15 11:33:04.857919 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 15 11:33:04.857985 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 15 11:33:04.857994 kernel: vgaarb: loaded
Jul 15 11:33:04.858025 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 15 11:33:04.858035 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 15 11:33:04.858042 kernel: PTP clock support registered
Jul 15 11:33:04.858048 kernel: PCI: Using ACPI for IRQ routing
Jul 15 11:33:04.858056 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 15 11:33:04.858064 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 15 11:33:04.858073 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jul 15 11:33:04.858082 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 15 11:33:04.858091 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 15 11:33:04.858099 kernel: clocksource: Switched to clocksource kvm-clock
Jul 15 11:33:04.858107 kernel: VFS: Disk quotas dquot_6.6.0
Jul 15 11:33:04.858114 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 15 11:33:04.858121 kernel: pnp: PnP ACPI init
Jul 15 11:33:04.858208 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jul 15 11:33:04.858218 kernel: pnp: PnP ACPI: found 6 devices
Jul 15 11:33:04.858225 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 15 11:33:04.858233 kernel: NET: Registered PF_INET protocol family
Jul 15 11:33:04.858240 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 15 11:33:04.858249 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 15 11:33:04.858256 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 15 11:33:04.858263 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 15 11:33:04.858270 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Jul 15 11:33:04.858277 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 15 11:33:04.858283 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 15 11:33:04.858290 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 15 11:33:04.858297 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 15 11:33:04.858304 kernel: NET: Registered PF_XDP protocol family
Jul 15 11:33:04.858368 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 15 11:33:04.858427 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 15 11:33:04.858487 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 15 11:33:04.858546 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jul 15 11:33:04.858606 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jul 15 11:33:04.858665 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jul 15 11:33:04.858674 kernel: PCI: CLS 0 bytes, default 64
Jul 15 11:33:04.858681 kernel: Initialise system trusted keyrings
Jul 15 11:33:04.858691 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 15 11:33:04.858698 kernel: Key type asymmetric registered
Jul 15 11:33:04.858704 kernel: Asymmetric key parser 'x509' registered
Jul 15 11:33:04.858711 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 15 11:33:04.858718 kernel: io scheduler mq-deadline registered
Jul 15 11:33:04.858725 kernel: io scheduler kyber registered
Jul 15 11:33:04.858732 kernel: io scheduler bfq registered
Jul 15 11:33:04.858738 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 15 11:33:04.858745 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 15 11:33:04.858754 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jul 15 11:33:04.858760 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jul 15 11:33:04.858769 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 15 11:33:04.858778 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 15 11:33:04.858787 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 15 11:33:04.858795 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 15 11:33:04.858801 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 15 11:33:04.858880 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 15 11:33:04.858891 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 15 11:33:04.858956 kernel: rtc_cmos 00:04: registered as rtc0
Jul 15 11:33:04.859053 kernel: rtc_cmos 00:04: setting system clock to 2025-07-15T11:33:04 UTC (1752579184)
Jul 15 11:33:04.859116 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jul 15 11:33:04.859125 kernel: NET: Registered PF_INET6 protocol family
Jul 15 11:33:04.859132 kernel: Segment Routing with IPv6
Jul 15 11:33:04.859139 kernel: In-situ OAM (IOAM) with IPv6
Jul 15 11:33:04.859146 kernel: NET: Registered PF_PACKET protocol family
Jul 15 11:33:04.859153 kernel: Key type dns_resolver registered
Jul 15 11:33:04.859174 kernel: IPI shorthand broadcast: enabled
Jul 15 11:33:04.859182 kernel: sched_clock: Marking stable (388003070, 97623434)->(533247899, -47621395)
Jul 15 11:33:04.859189 kernel: registered taskstats version 1
Jul 15 11:33:04.859203 kernel: Loading compiled-in X.509 certificates
Jul 15 11:33:04.859210 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.188-flatcar: c4b3a19d3bd6de5654dc12075428550cf6251289'
Jul 15 11:33:04.859216 kernel: Key type .fscrypt registered
Jul 15 11:33:04.859223 kernel: Key type fscrypt-provisioning registered
Jul 15 11:33:04.859231 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 15 11:33:04.859237 kernel: ima: Allocated hash algorithm: sha1
Jul 15 11:33:04.859257 kernel: ima: No architecture policies found
Jul 15 11:33:04.859264 kernel: clk: Disabling unused clocks
Jul 15 11:33:04.859271 kernel: Freeing unused kernel image (initmem) memory: 47476K
Jul 15 11:33:04.859278 kernel: Write protecting the kernel read-only data: 28672k
Jul 15 11:33:04.859285 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Jul 15 11:33:04.859292 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K
Jul 15 11:33:04.859298 kernel: Run /init as init process
Jul 15 11:33:04.859313 kernel: with arguments:
Jul 15 11:33:04.859323 kernel: /init
Jul 15 11:33:04.859332 kernel: with environment:
Jul 15 11:33:04.859338 kernel: HOME=/
Jul 15 11:33:04.859345 kernel: TERM=linux
Jul 15 11:33:04.859351 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 15 11:33:04.859361 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jul 15 11:33:04.859370 systemd[1]: Detected virtualization kvm.
Jul 15 11:33:04.859392 systemd[1]: Detected architecture x86-64.
Jul 15 11:33:04.859404 systemd[1]: Running in initrd.
Jul 15 11:33:04.859412 systemd[1]: No hostname configured, using default hostname.
Jul 15 11:33:04.859419 systemd[1]: Hostname set to .
Jul 15 11:33:04.859427 systemd[1]: Initializing machine ID from VM UUID.
Jul 15 11:33:04.859434 systemd[1]: Queued start job for default target initrd.target.
Jul 15 11:33:04.859441 systemd[1]: Started systemd-ask-password-console.path.
Jul 15 11:33:04.859459 systemd[1]: Reached target cryptsetup.target.
Jul 15 11:33:04.859467 systemd[1]: Reached target paths.target.
Jul 15 11:33:04.859474 systemd[1]: Reached target slices.target.
Jul 15 11:33:04.859483 systemd[1]: Reached target swap.target.
Jul 15 11:33:04.859496 systemd[1]: Reached target timers.target.
Jul 15 11:33:04.859516 systemd[1]: Listening on iscsid.socket.
Jul 15 11:33:04.859524 systemd[1]: Listening on iscsiuio.socket.
Jul 15 11:33:04.859532 systemd[1]: Listening on systemd-journald-audit.socket.
Jul 15 11:33:04.859541 systemd[1]: Listening on systemd-journald-dev-log.socket.
Jul 15 11:33:04.859548 systemd[1]: Listening on systemd-journald.socket.
Jul 15 11:33:04.859556 systemd[1]: Listening on systemd-networkd.socket.
Jul 15 11:33:04.859563 systemd[1]: Listening on systemd-udevd-control.socket.
Jul 15 11:33:04.859582 systemd[1]: Listening on systemd-udevd-kernel.socket.
Jul 15 11:33:04.859590 systemd[1]: Reached target sockets.target.
Jul 15 11:33:04.859597 systemd[1]: Starting kmod-static-nodes.service...
Jul 15 11:33:04.859605 systemd[1]: Finished network-cleanup.service.
Jul 15 11:33:04.859612 systemd[1]: Starting systemd-fsck-usr.service...
Jul 15 11:33:04.859622 systemd[1]: Starting systemd-journald.service...
Jul 15 11:33:04.859629 systemd[1]: Starting systemd-modules-load.service...
Jul 15 11:33:04.859637 systemd[1]: Starting systemd-resolved.service...
Jul 15 11:33:04.859644 systemd[1]: Starting systemd-vconsole-setup.service...
Jul 15 11:33:04.859651 systemd[1]: Finished kmod-static-nodes.service.
Jul 15 11:33:04.859659 kernel: audit: type=1130 audit(1752579184.853:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:04.859667 systemd[1]: Finished systemd-fsck-usr.service.
Jul 15 11:33:04.859677 systemd-journald[198]: Journal started
Jul 15 11:33:04.859726 systemd-journald[198]: Runtime Journal (/run/log/journal/992d811e58d84463b11295c1ebf4a1ad) is 6.0M, max 48.5M, 42.5M free.
Jul 15 11:33:04.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:04.851982 systemd-modules-load[199]: Inserted module 'overlay'
Jul 15 11:33:04.893903 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 15 11:33:04.893933 kernel: Bridge firewalling registered
Jul 15 11:33:04.893944 kernel: audit: type=1130 audit(1752579184.890:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:04.893958 systemd[1]: Started systemd-journald.service.
Jul 15 11:33:04.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:04.869137 systemd-resolved[200]: Positive Trust Anchors:
Jul 15 11:33:04.869147 systemd-resolved[200]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 15 11:33:04.900107 kernel: audit: type=1130 audit(1752579184.894:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:04.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:04.869173 systemd-resolved[200]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Jul 15 11:33:04.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:04.871283 systemd-resolved[200]: Defaulting to hostname 'linux'.
Jul 15 11:33:04.913774 kernel: audit: type=1130 audit(1752579184.900:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:04.913796 kernel: audit: type=1130 audit(1752579184.909:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:04.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:04.889457 systemd-modules-load[199]: Inserted module 'br_netfilter'
Jul 15 11:33:04.916739 kernel: SCSI subsystem initialized
Jul 15 11:33:04.895148 systemd[1]: Started systemd-resolved.service.
Jul 15 11:33:04.901082 systemd[1]: Finished systemd-vconsole-setup.service.
Jul 15 11:33:04.909609 systemd[1]: Reached target nss-lookup.target.
Jul 15 11:33:04.914552 systemd[1]: Starting dracut-cmdline-ask.service...
Jul 15 11:33:04.915805 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Jul 15 11:33:04.925552 kernel: audit: type=1130 audit(1752579184.921:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:04.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:04.920669 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Jul 15 11:33:04.930091 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 15 11:33:04.930105 kernel: device-mapper: uevent: version 1.0.3
Jul 15 11:33:04.930113 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Jul 15 11:33:04.932332 systemd[1]: Finished dracut-cmdline-ask.service.
Jul 15 11:33:04.937270 kernel: audit: type=1130 audit(1752579184.932:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:04.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:04.933222 systemd[1]: Starting dracut-cmdline.service...
Jul 15 11:33:04.937183 systemd-modules-load[199]: Inserted module 'dm_multipath'
Jul 15 11:33:04.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:04.943260 kernel: audit: type=1130 audit(1752579184.938:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:04.937960 systemd[1]: Finished systemd-modules-load.service.
Jul 15 11:33:04.939415 systemd[1]: Starting systemd-sysctl.service...
Jul 15 11:33:04.945033 dracut-cmdline[216]: dracut-dracut-053
Jul 15 11:33:04.946355 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=3fdbb2e3469f90ee764ea38c6fc4332d45967696e3c4fd4a8c65f8d0125b235b
Jul 15 11:33:04.951794 systemd[1]: Finished systemd-sysctl.service.
Jul 15 11:33:04.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:04.957030 kernel: audit: type=1130 audit(1752579184.953:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:05.005029 kernel: Loading iSCSI transport class v2.0-870.
Jul 15 11:33:05.021034 kernel: iscsi: registered transport (tcp)
Jul 15 11:33:05.042040 kernel: iscsi: registered transport (qla4xxx)
Jul 15 11:33:05.042057 kernel: QLogic iSCSI HBA Driver
Jul 15 11:33:05.070154 systemd[1]: Finished dracut-cmdline.service.
Jul 15 11:33:05.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:05.071921 systemd[1]: Starting dracut-pre-udev.service...
Jul 15 11:33:05.121036 kernel: raid6: avx2x4 gen() 27008 MB/s
Jul 15 11:33:05.138026 kernel: raid6: avx2x4 xor() 7308 MB/s
Jul 15 11:33:05.155029 kernel: raid6: avx2x2 gen() 32128 MB/s
Jul 15 11:33:05.172048 kernel: raid6: avx2x2 xor() 19033 MB/s
Jul 15 11:33:05.189035 kernel: raid6: avx2x1 gen() 26614 MB/s
Jul 15 11:33:05.206040 kernel: raid6: avx2x1 xor() 15343 MB/s
Jul 15 11:33:05.223032 kernel: raid6: sse2x4 gen() 14780 MB/s
Jul 15 11:33:05.240039 kernel: raid6: sse2x4 xor() 6884 MB/s
Jul 15 11:33:05.257044 kernel: raid6: sse2x2 gen() 15459 MB/s
Jul 15 11:33:05.274053 kernel: raid6: sse2x2 xor() 9708 MB/s
Jul 15 11:33:05.291061 kernel: raid6: sse2x1 gen() 11970 MB/s
Jul 15 11:33:05.308440 kernel: raid6: sse2x1 xor() 7697 MB/s
Jul 15 11:33:05.308512 kernel: raid6: using algorithm avx2x2 gen() 32128 MB/s
Jul 15 11:33:05.308535 kernel: raid6: .... xor() 19033 MB/s, rmw enabled
Jul 15 11:33:05.309108 kernel: raid6: using avx2x2 recovery algorithm
Jul 15 11:33:05.321038 kernel: xor: automatically using best checksumming function avx
Jul 15 11:33:05.409045 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Jul 15 11:33:05.417916 systemd[1]: Finished dracut-pre-udev.service.
Jul 15 11:33:05.417000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:05.419000 audit: BPF prog-id=7 op=LOAD
Jul 15 11:33:05.419000 audit: BPF prog-id=8 op=LOAD
Jul 15 11:33:05.419837 systemd[1]: Starting systemd-udevd.service...
Jul 15 11:33:05.431885 systemd-udevd[400]: Using default interface naming scheme 'v252'.
Jul 15 11:33:05.435815 systemd[1]: Started systemd-udevd.service.
Jul 15 11:33:05.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:05.436488 systemd[1]: Starting dracut-pre-trigger.service...
Jul 15 11:33:05.446322 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation
Jul 15 11:33:05.470857 systemd[1]: Finished dracut-pre-trigger.service.
Jul 15 11:33:05.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:05.473286 systemd[1]: Starting systemd-udev-trigger.service...
Jul 15 11:33:05.504488 systemd[1]: Finished systemd-udev-trigger.service.
Jul 15 11:33:05.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:05.540490 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 15 11:33:05.554171 kernel: cryptd: max_cpu_qlen set to 1000
Jul 15 11:33:05.554202 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 15 11:33:05.554214 kernel: GPT:9289727 != 19775487
Jul 15 11:33:05.554225 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 15 11:33:05.554236 kernel: GPT:9289727 != 19775487
Jul 15 11:33:05.554246 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 15 11:33:05.554257 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 15 11:33:05.555030 kernel: libata version 3.00 loaded.
Jul 15 11:33:05.573506 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Jul 15 11:33:05.613011 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (442)
Jul 15 11:33:05.613035 kernel: ahci 0000:00:1f.2: version 3.0
Jul 15 11:33:05.613162 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jul 15 11:33:05.613182 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jul 15 11:33:05.613261 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jul 15 11:33:05.613339 kernel: AVX2 version of gcm_enc/dec engaged.
Jul 15 11:33:05.613348 kernel: AES CTR mode by8 optimization enabled
Jul 15 11:33:05.613357 kernel: scsi host0: ahci
Jul 15 11:33:05.613447 kernel: scsi host1: ahci
Jul 15 11:33:05.613528 kernel: scsi host2: ahci
Jul 15 11:33:05.613612 kernel: scsi host3: ahci
Jul 15 11:33:05.613692 kernel: scsi host4: ahci
Jul 15 11:33:05.613774 kernel: scsi host5: ahci
Jul 15 11:33:05.613856 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Jul 15 11:33:05.613866 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Jul 15 11:33:05.613875 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Jul 15 11:33:05.613884 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Jul 15 11:33:05.613892 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Jul 15 11:33:05.613901 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Jul 15 11:33:05.617194 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Jul 15 11:33:05.619604 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Jul 15 11:33:05.619661 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Jul 15 11:33:05.625247 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Jul 15 11:33:05.626949 systemd[1]: Starting disk-uuid.service...
Jul 15 11:33:05.635686 disk-uuid[524]: Primary Header is updated.
Jul 15 11:33:05.635686 disk-uuid[524]: Secondary Entries is updated.
Jul 15 11:33:05.635686 disk-uuid[524]: Secondary Header is updated.
Jul 15 11:33:05.639022 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 15 11:33:05.642026 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 15 11:33:05.646025 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 15 11:33:05.896335 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jul 15 11:33:05.896400 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jul 15 11:33:05.896410 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jul 15 11:33:05.898132 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jul 15 11:33:05.898209 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jul 15 11:33:05.899024 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jul 15 11:33:05.900363 kernel: ata3.00: applying bridge limits
Jul 15 11:33:05.901023 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jul 15 11:33:05.902023 kernel: ata3.00: configured for UDMA/100
Jul 15 11:33:05.905312 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jul 15 11:33:05.934033 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jul 15 11:33:05.950516 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 15 11:33:05.950531 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jul 15 11:33:06.646079 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 15 11:33:06.646135 disk-uuid[525]: The operation has completed successfully.
Jul 15 11:33:06.665824 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 15 11:33:06.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:06.666000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:06.665903 systemd[1]: Finished disk-uuid.service.
Jul 15 11:33:06.672550 systemd[1]: Starting verity-setup.service...
Jul 15 11:33:06.685036 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jul 15 11:33:06.702823 systemd[1]: Found device dev-mapper-usr.device.
Jul 15 11:33:06.704724 systemd[1]: Mounting sysusr-usr.mount...
Jul 15 11:33:06.706675 systemd[1]: Finished verity-setup.service.
Jul 15 11:33:06.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:06.762040 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Jul 15 11:33:06.762433 systemd[1]: Mounted sysusr-usr.mount.
Jul 15 11:33:06.763240 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Jul 15 11:33:06.763856 systemd[1]: Starting ignition-setup.service...
Jul 15 11:33:06.766211 systemd[1]: Starting parse-ip-for-networkd.service...
Jul 15 11:33:06.774755 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 15 11:33:06.774807 kernel: BTRFS info (device vda6): using free space tree
Jul 15 11:33:06.774816 kernel: BTRFS info (device vda6): has skinny extents
Jul 15 11:33:06.782595 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 15 11:33:06.789787 systemd[1]: Finished ignition-setup.service.
Jul 15 11:33:06.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:06.791224 systemd[1]: Starting ignition-fetch-offline.service...
Jul 15 11:33:06.823194 ignition[646]: Ignition 2.14.0
Jul 15 11:33:06.823203 ignition[646]: Stage: fetch-offline
Jul 15 11:33:06.823245 ignition[646]: no configs at "/usr/lib/ignition/base.d"
Jul 15 11:33:06.823252 ignition[646]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 15 11:33:06.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:06.827000 audit: BPF prog-id=9 op=LOAD
Jul 15 11:33:06.825681 systemd[1]: Finished parse-ip-for-networkd.service.
Jul 15 11:33:06.823332 ignition[646]: parsed url from cmdline: ""
Jul 15 11:33:06.828481 systemd[1]: Starting systemd-networkd.service...
Jul 15 11:33:06.823335 ignition[646]: no config URL provided
Jul 15 11:33:06.823339 ignition[646]: reading system config file "/usr/lib/ignition/user.ign"
Jul 15 11:33:06.823345 ignition[646]: no config at "/usr/lib/ignition/user.ign"
Jul 15 11:33:06.823360 ignition[646]: op(1): [started] loading QEMU firmware config module
Jul 15 11:33:06.823364 ignition[646]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 15 11:33:06.829637 ignition[646]: op(1): [finished] loading QEMU firmware config module
Jul 15 11:33:06.829656 ignition[646]: QEMU firmware config was not found. Ignoring...
Jul 15 11:33:06.872072 ignition[646]: parsing config with SHA512: f089f20249109a86d485959cd0f33474c5cd711008537923ae183bea3cc6967e5baee310b2f865dd86907691094e90fa99d51772ac74b89acb9ea52c536f3430
Jul 15 11:33:06.877909 unknown[646]: fetched base config from "system"
Jul 15 11:33:06.877918 unknown[646]: fetched user config from "qemu"
Jul 15 11:33:06.878354 ignition[646]: fetch-offline: fetch-offline passed
Jul 15 11:33:06.878397 ignition[646]: Ignition finished successfully
Jul 15 11:33:06.882275 systemd[1]: Finished ignition-fetch-offline.service.
Jul 15 11:33:06.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:06.891626 systemd-networkd[720]: lo: Link UP
Jul 15 11:33:06.891635 systemd-networkd[720]: lo: Gained carrier
Jul 15 11:33:06.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:06.892014 systemd-networkd[720]: Enumeration completed
Jul 15 11:33:06.892097 systemd[1]: Started systemd-networkd.service.
Jul 15 11:33:06.892205 systemd-networkd[720]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 15 11:33:06.893770 systemd[1]: Reached target network.target.
Jul 15 11:33:06.894366 systemd-networkd[720]: eth0: Link UP
Jul 15 11:33:06.894369 systemd-networkd[720]: eth0: Gained carrier
Jul 15 11:33:06.895329 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 15 11:33:06.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:06.904253 ignition[722]: Ignition 2.14.0
Jul 15 11:33:06.896017 systemd[1]: Starting ignition-kargs.service...
Jul 15 11:33:06.904258 ignition[722]: Stage: kargs
Jul 15 11:33:06.897375 systemd[1]: Starting iscsiuio.service...
Jul 15 11:33:06.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:06.904334 ignition[722]: no configs at "/usr/lib/ignition/base.d"
Jul 15 11:33:06.901623 systemd[1]: Started iscsiuio.service.
Jul 15 11:33:06.904342 ignition[722]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 15 11:33:06.903187 systemd[1]: Starting iscsid.service...
Jul 15 11:33:06.905341 ignition[722]: kargs: kargs passed
Jul 15 11:33:06.907606 systemd[1]: Finished ignition-kargs.service.
Jul 15 11:33:06.905373 ignition[722]: Ignition finished successfully
Jul 15 11:33:06.909401 systemd[1]: Starting ignition-disks.service...
Jul 15 11:33:06.916707 iscsid[730]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Jul 15 11:33:06.916707 iscsid[730]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Jul 15 11:33:06.916707 iscsid[730]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Jul 15 11:33:06.916707 iscsid[730]: If using hardware iscsi like qla4xxx this message can be ignored.
Jul 15 11:33:06.916707 iscsid[730]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Jul 15 11:33:06.916707 iscsid[730]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Jul 15 11:33:06.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:06.912080 systemd-networkd[720]: eth0: DHCPv4 address 10.0.0.91/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 15 11:33:06.916463 ignition[731]: Ignition 2.14.0
Jul 15 11:33:06.918058 systemd[1]: Finished ignition-disks.service.
Jul 15 11:33:06.916469 ignition[731]: Stage: disks
Jul 15 11:33:06.918204 systemd[1]: Reached target initrd-root-device.target.
Jul 15 11:33:06.916547 ignition[731]: no configs at "/usr/lib/ignition/base.d"
Jul 15 11:33:06.923539 systemd[1]: Reached target local-fs-pre.target.
Jul 15 11:33:06.916556 ignition[731]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 15 11:33:06.926909 systemd[1]: Reached target local-fs.target.
Jul 15 11:33:06.917429 ignition[731]: disks: disks passed
Jul 15 11:33:06.928797 systemd[1]: Reached target sysinit.target.
Jul 15 11:33:06.917462 ignition[731]: Ignition finished successfully
Jul 15 11:33:06.930641 systemd[1]: Reached target basic.target.
Jul 15 11:33:06.940955 systemd[1]: Started iscsid.service.
Jul 15 11:33:06.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:06.942774 systemd[1]: Starting dracut-initqueue.service...
Jul 15 11:33:06.951831 systemd[1]: Finished dracut-initqueue.service.
Jul 15 11:33:06.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:06.952726 systemd[1]: Reached target remote-fs-pre.target.
Jul 15 11:33:06.954185 systemd[1]: Reached target remote-cryptsetup.target.
Jul 15 11:33:06.955047 systemd[1]: Reached target remote-fs.target.
Jul 15 11:33:06.957130 systemd[1]: Starting dracut-pre-mount.service...
Jul 15 11:33:06.963735 systemd[1]: Finished dracut-pre-mount.service.
Jul 15 11:33:06.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:06.964403 systemd[1]: Starting systemd-fsck-root.service...
Jul 15 11:33:06.974061 systemd-fsck[752]: ROOT: clean, 619/553520 files, 56023/553472 blocks
Jul 15 11:33:06.978998 systemd[1]: Finished systemd-fsck-root.service.
Jul 15 11:33:06.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:06.981804 systemd[1]: Mounting sysroot.mount...
Jul 15 11:33:06.988022 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Jul 15 11:33:06.988458 systemd[1]: Mounted sysroot.mount.
Jul 15 11:33:06.988585 systemd[1]: Reached target initrd-root-fs.target.
Jul 15 11:33:06.989614 systemd[1]: Mounting sysroot-usr.mount...
Jul 15 11:33:06.990204 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Jul 15 11:33:06.990234 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 15 11:33:06.990252 systemd[1]: Reached target ignition-diskful.target.
Jul 15 11:33:06.998971 initrd-setup-root[762]: cut: /sysroot/etc/passwd: No such file or directory
Jul 15 11:33:06.991992 systemd[1]: Mounted sysroot-usr.mount.
Jul 15 11:33:06.993944 systemd[1]: Starting initrd-setup-root.service...
Jul 15 11:33:07.001971 initrd-setup-root[770]: cut: /sysroot/etc/group: No such file or directory
Jul 15 11:33:07.004805 initrd-setup-root[778]: cut: /sysroot/etc/shadow: No such file or directory
Jul 15 11:33:07.007790 initrd-setup-root[786]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 15 11:33:07.030452 systemd[1]: Finished initrd-setup-root.service.
Jul 15 11:33:07.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:07.031905 systemd[1]: Starting ignition-mount.service...
Jul 15 11:33:07.033163 systemd[1]: Starting sysroot-boot.service...
Jul 15 11:33:07.036731 bash[803]: umount: /sysroot/usr/share/oem: not mounted.
Jul 15 11:33:07.044188 ignition[804]: INFO : Ignition 2.14.0
Jul 15 11:33:07.044188 ignition[804]: INFO : Stage: mount
Jul 15 11:33:07.045811 ignition[804]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 15 11:33:07.045811 ignition[804]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 15 11:33:07.045811 ignition[804]: INFO : mount: mount passed
Jul 15 11:33:07.045811 ignition[804]: INFO : Ignition finished successfully
Jul 15 11:33:07.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:07.045952 systemd[1]: Finished ignition-mount.service.
Jul 15 11:33:07.051738 systemd[1]: Finished sysroot-boot.service.
Jul 15 11:33:07.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:07.713994 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Jul 15 11:33:07.721031 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (813)
Jul 15 11:33:07.723710 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 15 11:33:07.723723 kernel: BTRFS info (device vda6): using free space tree
Jul 15 11:33:07.723732 kernel: BTRFS info (device vda6): has skinny extents
Jul 15 11:33:07.726804 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Jul 15 11:33:07.729071 systemd[1]: Starting ignition-files.service...
Jul 15 11:33:07.743181 ignition[833]: INFO : Ignition 2.14.0
Jul 15 11:33:07.743181 ignition[833]: INFO : Stage: files
Jul 15 11:33:07.744942 ignition[833]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 15 11:33:07.744942 ignition[833]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 15 11:33:07.744942 ignition[833]: DEBUG : files: compiled without relabeling support, skipping
Jul 15 11:33:07.748663 ignition[833]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 15 11:33:07.748663 ignition[833]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 15 11:33:07.748663 ignition[833]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 15 11:33:07.748663 ignition[833]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 15 11:33:07.748663 ignition[833]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 15 11:33:07.748288 unknown[833]: wrote ssh authorized keys file for user: core
Jul 15 11:33:07.756583 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jul 15 11:33:07.756583 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Jul 15 11:33:07.798373 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 15 11:33:07.954856 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jul 15 11:33:07.956983 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 15 11:33:07.956983 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jul 15 11:33:08.441299 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 15 11:33:08.532544 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 15 11:33:08.532544 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 15 11:33:08.536223 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 15 11:33:08.536223 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 15 11:33:08.536223 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 15 11:33:08.536223 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 15 11:33:08.536223 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 15 11:33:08.536223 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 15 11:33:08.536223 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 15 11:33:08.536223 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 15 11:33:08.536223 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 15 11:33:08.536223 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 15 11:33:08.536223 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 15 11:33:08.536223 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 15 11:33:08.536223 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Jul 15 11:33:08.537134 systemd-networkd[720]: eth0: Gained IPv6LL
Jul 15 11:33:09.036476 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 15 11:33:09.735218 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 15 11:33:09.735218 ignition[833]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 15 11:33:09.740132 ignition[833]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 15 11:33:09.740132 ignition[833]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 15 11:33:09.740132 ignition[833]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 15 11:33:09.740132 ignition[833]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jul 15 11:33:09.740132 ignition[833]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 15 11:33:09.740132 ignition[833]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 15 11:33:09.740132 ignition[833]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jul 15 11:33:09.740132 ignition[833]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Jul 15 11:33:09.740132 ignition[833]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Jul 15 11:33:09.740132 ignition[833]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service"
Jul 15 11:33:09.740132 ignition[833]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 15 11:33:09.764046 ignition[833]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 15 11:33:09.765676 ignition[833]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 15 11:33:09.765676 ignition[833]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 15 11:33:09.765676 ignition[833]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 15 11:33:09.765676 ignition[833]: INFO : files: files passed
Jul 15 11:33:09.765676 ignition[833]: INFO : Ignition finished successfully
Jul 15 11:33:09.788854 kernel: kauditd_printk_skb: 25 callbacks suppressed
Jul 15 11:33:09.788875 kernel: audit: type=1130 audit(1752579189.766:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:09.788886 kernel: audit: type=1130 audit(1752579189.777:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:09.788896 kernel: audit: type=1130 audit(1752579189.781:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:09.788908 kernel: audit: type=1131 audit(1752579189.781:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:09.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:09.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:09.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:09.781000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:09.765572 systemd[1]: Finished ignition-files.service.
Jul 15 11:33:09.767333 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Jul 15 11:33:09.772284 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Jul 15 11:33:09.793473 initrd-setup-root-after-ignition[857]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Jul 15 11:33:09.772990 systemd[1]: Starting ignition-quench.service... Jul 15 11:33:09.795811 initrd-setup-root-after-ignition[859]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 15 11:33:09.774750 systemd[1]: Finished initrd-setup-root-after-ignition.service. Jul 15 11:33:09.777499 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 15 11:33:09.777579 systemd[1]: Finished ignition-quench.service. Jul 15 11:33:09.781886 systemd[1]: Reached target ignition-complete.target. Jul 15 11:33:09.789490 systemd[1]: Starting initrd-parse-etc.service... Jul 15 11:33:09.801946 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 15 11:33:09.802030 systemd[1]: Finished initrd-parse-etc.service. Jul 15 11:33:09.810868 kernel: audit: type=1130 audit(1752579189.803:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:09.810882 kernel: audit: type=1131 audit(1752579189.803:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:09.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:09.803000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:33:09.803753 systemd[1]: Reached target initrd-fs.target. Jul 15 11:33:09.810875 systemd[1]: Reached target initrd.target. Jul 15 11:33:09.811623 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Jul 15 11:33:09.812218 systemd[1]: Starting dracut-pre-pivot.service... Jul 15 11:33:09.821557 systemd[1]: Finished dracut-pre-pivot.service. Jul 15 11:33:09.826525 kernel: audit: type=1130 audit(1752579189.822:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:09.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:09.822894 systemd[1]: Starting initrd-cleanup.service... Jul 15 11:33:09.830872 systemd[1]: Stopped target nss-lookup.target. Jul 15 11:33:09.831758 systemd[1]: Stopped target remote-cryptsetup.target. Jul 15 11:33:09.833374 systemd[1]: Stopped target timers.target. Jul 15 11:33:09.834906 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 15 11:33:09.840941 kernel: audit: type=1131 audit(1752579189.836:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:09.836000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:09.834989 systemd[1]: Stopped dracut-pre-pivot.service. Jul 15 11:33:09.836453 systemd[1]: Stopped target initrd.target. Jul 15 11:33:09.841038 systemd[1]: Stopped target basic.target. Jul 15 11:33:09.842520 systemd[1]: Stopped target ignition-complete.target. 
Jul 15 11:33:09.844051 systemd[1]: Stopped target ignition-diskful.target. Jul 15 11:33:09.845543 systemd[1]: Stopped target initrd-root-device.target. Jul 15 11:33:09.847221 systemd[1]: Stopped target remote-fs.target. Jul 15 11:33:09.848750 systemd[1]: Stopped target remote-fs-pre.target. Jul 15 11:33:09.850379 systemd[1]: Stopped target sysinit.target. Jul 15 11:33:09.851821 systemd[1]: Stopped target local-fs.target. Jul 15 11:33:09.853329 systemd[1]: Stopped target local-fs-pre.target. Jul 15 11:33:09.854813 systemd[1]: Stopped target swap.target. Jul 15 11:33:09.862076 kernel: audit: type=1131 audit(1752579189.857:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:09.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:09.856193 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 15 11:33:09.856277 systemd[1]: Stopped dracut-pre-mount.service. Jul 15 11:33:09.868302 kernel: audit: type=1131 audit(1752579189.863:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:09.863000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:09.857772 systemd[1]: Stopped target cryptsetup.target. Jul 15 11:33:09.868000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:33:09.862110 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 15 11:33:09.862194 systemd[1]: Stopped dracut-initqueue.service. Jul 15 11:33:09.863883 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 15 11:33:09.863967 systemd[1]: Stopped ignition-fetch-offline.service. Jul 15 11:33:09.868411 systemd[1]: Stopped target paths.target. Jul 15 11:33:09.869821 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 15 11:33:09.874068 systemd[1]: Stopped systemd-ask-password-console.path. Jul 15 11:33:09.875022 systemd[1]: Stopped target slices.target. Jul 15 11:33:09.876717 systemd[1]: Stopped target sockets.target. Jul 15 11:33:09.879000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:09.878281 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 15 11:33:09.881000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:09.878366 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Jul 15 11:33:09.884649 iscsid[730]: iscsid shutting down. Jul 15 11:33:09.879915 systemd[1]: ignition-files.service: Deactivated successfully. Jul 15 11:33:09.879995 systemd[1]: Stopped ignition-files.service. Jul 15 11:33:09.886000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:09.881919 systemd[1]: Stopping ignition-mount.service... 
Jul 15 11:33:09.889000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:09.891000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:09.892296 ignition[874]: INFO : Ignition 2.14.0 Jul 15 11:33:09.892296 ignition[874]: INFO : Stage: umount Jul 15 11:33:09.892296 ignition[874]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 11:33:09.892296 ignition[874]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 11:33:09.892296 ignition[874]: INFO : umount: umount passed Jul 15 11:33:09.892296 ignition[874]: INFO : Ignition finished successfully Jul 15 11:33:09.893000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:09.883259 systemd[1]: Stopping iscsid.service... Jul 15 11:33:09.884605 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 15 11:33:09.884832 systemd[1]: Stopped kmod-static-nodes.service. Jul 15 11:33:09.886858 systemd[1]: Stopping sysroot-boot.service... Jul 15 11:33:09.887726 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 15 11:33:09.887870 systemd[1]: Stopped systemd-udev-trigger.service. Jul 15 11:33:09.889351 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 15 11:33:09.889434 systemd[1]: Stopped dracut-pre-trigger.service. Jul 15 11:33:09.892278 systemd[1]: iscsid.service: Deactivated successfully. Jul 15 11:33:09.892354 systemd[1]: Stopped iscsid.service. Jul 15 11:33:09.907310 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Jul 15 11:33:09.908559 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 15 11:33:09.909483 systemd[1]: Stopped ignition-mount.service. Jul 15 11:33:09.911000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:09.911358 systemd[1]: iscsid.socket: Deactivated successfully. Jul 15 11:33:09.911425 systemd[1]: Closed iscsid.socket. Jul 15 11:33:09.913545 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 15 11:33:09.913581 systemd[1]: Stopped ignition-disks.service. Jul 15 11:33:09.916000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:09.916176 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 15 11:33:09.916214 systemd[1]: Stopped ignition-kargs.service. Jul 15 11:33:09.918637 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 15 11:33:09.918000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:09.918674 systemd[1]: Stopped ignition-setup.service. Jul 15 11:33:09.921000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:09.921153 systemd[1]: Stopping iscsiuio.service... Jul 15 11:33:09.922736 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 15 11:33:09.923721 systemd[1]: Finished initrd-cleanup.service. Jul 15 11:33:09.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jul 15 11:33:09.925000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:09.925481 systemd[1]: iscsiuio.service: Deactivated successfully. Jul 15 11:33:09.926400 systemd[1]: Stopped iscsiuio.service. Jul 15 11:33:09.927000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:09.928483 systemd[1]: Stopped target network.target. Jul 15 11:33:09.929977 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 15 11:33:09.930951 systemd[1]: Closed iscsiuio.socket. Jul 15 11:33:09.932393 systemd[1]: Stopping systemd-networkd.service... Jul 15 11:33:09.934062 systemd[1]: Stopping systemd-resolved.service... Jul 15 11:33:09.939043 systemd-networkd[720]: eth0: DHCPv6 lease lost Jul 15 11:33:09.940179 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 15 11:33:09.941235 systemd[1]: Stopped systemd-networkd.service. Jul 15 11:33:09.942000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:09.943260 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 15 11:33:09.943290 systemd[1]: Closed systemd-networkd.socket. Jul 15 11:33:09.946000 audit: BPF prog-id=9 op=UNLOAD Jul 15 11:33:09.946458 systemd[1]: Stopping network-cleanup.service... Jul 15 11:33:09.947860 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 15 11:33:09.947917 systemd[1]: Stopped parse-ip-for-networkd.service. 
Jul 15 11:33:09.949000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:09.949812 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 15 11:33:09.951000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:09.949848 systemd[1]: Stopped systemd-sysctl.service. Jul 15 11:33:09.952342 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 15 11:33:09.952400 systemd[1]: Stopped systemd-modules-load.service. Jul 15 11:33:09.954000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:09.954885 systemd[1]: Stopping systemd-udevd.service... Jul 15 11:33:09.959046 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 15 11:33:09.959446 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 15 11:33:09.960000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:09.959520 systemd[1]: Stopped systemd-resolved.service. Jul 15 11:33:09.963484 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 15 11:33:09.964429 systemd[1]: Stopped systemd-udevd.service. Jul 15 11:33:09.964000 audit: BPF prog-id=6 op=UNLOAD Jul 15 11:33:09.966000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:33:09.966900 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 15 11:33:09.966942 systemd[1]: Closed systemd-udevd-control.socket. Jul 15 11:33:09.969629 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 15 11:33:09.969658 systemd[1]: Closed systemd-udevd-kernel.socket. Jul 15 11:33:09.972149 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 15 11:33:09.973081 systemd[1]: Stopped dracut-pre-udev.service. Jul 15 11:33:09.974000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:09.974638 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 15 11:33:09.974668 systemd[1]: Stopped dracut-cmdline.service. Jul 15 11:33:09.976000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:09.977018 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 15 11:33:09.977060 systemd[1]: Stopped dracut-cmdline-ask.service. Jul 15 11:33:09.979000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:09.980084 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Jul 15 11:33:09.981838 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 15 11:33:09.981881 systemd[1]: Stopped systemd-vconsole-setup.service. Jul 15 11:33:09.984000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:33:09.984927 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 15 11:33:09.985893 systemd[1]: Stopped network-cleanup.service. Jul 15 11:33:09.987000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:09.987616 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 15 11:33:09.988659 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Jul 15 11:33:09.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:09.990000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:09.990550 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 15 11:33:09.991488 systemd[1]: Stopped sysroot-boot.service. Jul 15 11:33:09.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:09.993115 systemd[1]: Reached target initrd-switch-root.target. Jul 15 11:33:09.994807 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 15 11:33:09.994848 systemd[1]: Stopped initrd-setup-root.service. Jul 15 11:33:09.997000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:09.997975 systemd[1]: Starting initrd-switch-root.service... Jul 15 11:33:10.014090 systemd[1]: Switching root. 
Jul 15 11:33:10.031646 systemd-journald[198]: Journal stopped Jul 15 11:33:12.715078 systemd-journald[198]: Received SIGTERM from PID 1 (systemd). Jul 15 11:33:12.715136 kernel: SELinux: Class mctp_socket not defined in policy. Jul 15 11:33:12.715150 kernel: SELinux: Class anon_inode not defined in policy. Jul 15 11:33:12.715160 kernel: SELinux: the above unknown classes and permissions will be allowed Jul 15 11:33:12.715170 kernel: SELinux: policy capability network_peer_controls=1 Jul 15 11:33:12.715179 kernel: SELinux: policy capability open_perms=1 Jul 15 11:33:12.715194 kernel: SELinux: policy capability extended_socket_class=1 Jul 15 11:33:12.715203 kernel: SELinux: policy capability always_check_network=0 Jul 15 11:33:12.715213 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 15 11:33:12.715222 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 15 11:33:12.715233 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 15 11:33:12.715243 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 15 11:33:12.715255 systemd[1]: Successfully loaded SELinux policy in 36.640ms. Jul 15 11:33:12.715270 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.745ms. Jul 15 11:33:12.715282 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 15 11:33:12.715293 systemd[1]: Detected virtualization kvm. Jul 15 11:33:12.715303 systemd[1]: Detected architecture x86-64. Jul 15 11:33:12.715313 systemd[1]: Detected first boot. Jul 15 11:33:12.715323 systemd[1]: Initializing machine ID from VM UUID. Jul 15 11:33:12.715335 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). 
Jul 15 11:33:12.715344 systemd[1]: Populated /etc with preset unit settings. Jul 15 11:33:12.715355 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 15 11:33:12.715368 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 15 11:33:12.715379 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 11:33:12.715390 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 15 11:33:12.715400 systemd[1]: Stopped initrd-switch-root.service. Jul 15 11:33:12.715410 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 15 11:33:12.715422 systemd[1]: Created slice system-addon\x2dconfig.slice. Jul 15 11:33:12.715432 systemd[1]: Created slice system-addon\x2drun.slice. Jul 15 11:33:12.715442 systemd[1]: Created slice system-getty.slice. Jul 15 11:33:12.715452 systemd[1]: Created slice system-modprobe.slice. Jul 15 11:33:12.715462 systemd[1]: Created slice system-serial\x2dgetty.slice. Jul 15 11:33:12.715472 systemd[1]: Created slice system-system\x2dcloudinit.slice. Jul 15 11:33:12.715484 systemd[1]: Created slice system-systemd\x2dfsck.slice. Jul 15 11:33:12.715497 systemd[1]: Created slice user.slice. Jul 15 11:33:12.715507 systemd[1]: Started systemd-ask-password-console.path. Jul 15 11:33:12.715518 systemd[1]: Started systemd-ask-password-wall.path. Jul 15 11:33:12.715528 systemd[1]: Set up automount boot.automount. Jul 15 11:33:12.715538 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Jul 15 11:33:12.715548 systemd[1]: Stopped target initrd-switch-root.target. Jul 15 11:33:12.715557 systemd[1]: Stopped target initrd-fs.target. 
Jul 15 11:33:12.715567 systemd[1]: Stopped target initrd-root-fs.target. Jul 15 11:33:12.715577 systemd[1]: Reached target integritysetup.target. Jul 15 11:33:12.715587 systemd[1]: Reached target remote-cryptsetup.target. Jul 15 11:33:12.715598 systemd[1]: Reached target remote-fs.target. Jul 15 11:33:12.715608 systemd[1]: Reached target slices.target. Jul 15 11:33:12.715618 systemd[1]: Reached target swap.target. Jul 15 11:33:12.715628 systemd[1]: Reached target torcx.target. Jul 15 11:33:12.715638 systemd[1]: Reached target veritysetup.target. Jul 15 11:33:12.715647 systemd[1]: Listening on systemd-coredump.socket. Jul 15 11:33:12.715657 systemd[1]: Listening on systemd-initctl.socket. Jul 15 11:33:12.715667 systemd[1]: Listening on systemd-networkd.socket. Jul 15 11:33:12.715677 systemd[1]: Listening on systemd-udevd-control.socket. Jul 15 11:33:12.715688 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 15 11:33:12.715699 systemd[1]: Listening on systemd-userdbd.socket. Jul 15 11:33:12.715709 systemd[1]: Mounting dev-hugepages.mount... Jul 15 11:33:12.715721 systemd[1]: Mounting dev-mqueue.mount... Jul 15 11:33:12.715732 systemd[1]: Mounting media.mount... Jul 15 11:33:12.715742 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 11:33:12.715752 systemd[1]: Mounting sys-kernel-debug.mount... Jul 15 11:33:12.715762 systemd[1]: Mounting sys-kernel-tracing.mount... Jul 15 11:33:12.715772 systemd[1]: Mounting tmp.mount... Jul 15 11:33:12.715783 systemd[1]: Starting flatcar-tmpfiles.service... Jul 15 11:33:12.715793 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 15 11:33:12.715803 systemd[1]: Starting kmod-static-nodes.service... Jul 15 11:33:12.715813 systemd[1]: Starting modprobe@configfs.service... Jul 15 11:33:12.715823 systemd[1]: Starting modprobe@dm_mod.service... Jul 15 11:33:12.715833 systemd[1]: Starting modprobe@drm.service... 
Jul 15 11:33:12.715843 systemd[1]: Starting modprobe@efi_pstore.service... Jul 15 11:33:12.715853 systemd[1]: Starting modprobe@fuse.service... Jul 15 11:33:12.715863 systemd[1]: Starting modprobe@loop.service... Jul 15 11:33:12.715874 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 15 11:33:12.715884 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 15 11:33:12.715894 systemd[1]: Stopped systemd-fsck-root.service. Jul 15 11:33:12.715903 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 15 11:33:12.715915 kernel: loop: module loaded Jul 15 11:33:12.715924 systemd[1]: Stopped systemd-fsck-usr.service. Jul 15 11:33:12.715935 systemd[1]: Stopped systemd-journald.service. Jul 15 11:33:12.715945 kernel: fuse: init (API version 7.34) Jul 15 11:33:12.715954 systemd[1]: Starting systemd-journald.service... Jul 15 11:33:12.715966 systemd[1]: Starting systemd-modules-load.service... Jul 15 11:33:12.715984 systemd[1]: Starting systemd-network-generator.service... Jul 15 11:33:12.715994 systemd[1]: Starting systemd-remount-fs.service... Jul 15 11:33:12.716014 systemd[1]: Starting systemd-udev-trigger.service... Jul 15 11:33:12.716025 systemd[1]: verity-setup.service: Deactivated successfully. Jul 15 11:33:12.716035 systemd[1]: Stopped verity-setup.service. Jul 15 11:33:12.716046 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 11:33:12.716059 systemd-journald[989]: Journal started Jul 15 11:33:12.716093 systemd-journald[989]: Runtime Journal (/run/log/journal/992d811e58d84463b11295c1ebf4a1ad) is 6.0M, max 48.5M, 42.5M free. 
Jul 15 11:33:10.087000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 15 11:33:10.526000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 15 11:33:10.526000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 15 11:33:10.526000 audit: BPF prog-id=10 op=LOAD Jul 15 11:33:10.526000 audit: BPF prog-id=10 op=UNLOAD Jul 15 11:33:10.526000 audit: BPF prog-id=11 op=LOAD Jul 15 11:33:10.526000 audit: BPF prog-id=11 op=UNLOAD Jul 15 11:33:10.556000 audit[908]: AVC avc: denied { associate } for pid=908 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Jul 15 11:33:10.556000 audit[908]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001878e4 a1=c00002ae40 a2=c000029080 a3=32 items=0 ppid=891 pid=908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:33:10.556000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 15 11:33:10.558000 audit[908]: AVC avc: denied { associate } for pid=908 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Jul 15 11:33:10.558000 audit[908]: SYSCALL 
arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001879b9 a2=1ed a3=0 items=2 ppid=891 pid=908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:33:10.558000 audit: CWD cwd="/" Jul 15 11:33:10.558000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:10.558000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:10.558000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 15 11:33:12.593000 audit: BPF prog-id=12 op=LOAD Jul 15 11:33:12.593000 audit: BPF prog-id=3 op=UNLOAD Jul 15 11:33:12.593000 audit: BPF prog-id=13 op=LOAD Jul 15 11:33:12.594000 audit: BPF prog-id=14 op=LOAD Jul 15 11:33:12.594000 audit: BPF prog-id=4 op=UNLOAD Jul 15 11:33:12.594000 audit: BPF prog-id=5 op=UNLOAD Jul 15 11:33:12.594000 audit: BPF prog-id=15 op=LOAD Jul 15 11:33:12.594000 audit: BPF prog-id=12 op=UNLOAD Jul 15 11:33:12.594000 audit: BPF prog-id=16 op=LOAD Jul 15 11:33:12.595000 audit: BPF prog-id=17 op=LOAD Jul 15 11:33:12.595000 audit: BPF prog-id=13 op=UNLOAD Jul 15 11:33:12.595000 audit: BPF prog-id=14 op=UNLOAD Jul 15 11:33:12.595000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:33:12.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:12.598000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:12.613000 audit: BPF prog-id=15 op=UNLOAD Jul 15 11:33:12.692000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:12.695000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:12.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:12.697000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:12.697000 audit: BPF prog-id=18 op=LOAD Jul 15 11:33:12.697000 audit: BPF prog-id=19 op=LOAD Jul 15 11:33:12.698000 audit: BPF prog-id=20 op=LOAD Jul 15 11:33:12.698000 audit: BPF prog-id=16 op=UNLOAD Jul 15 11:33:12.698000 audit: BPF prog-id=17 op=UNLOAD Jul 15 11:33:12.714000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:33:12.714000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 15 11:33:12.714000 audit[989]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffd6f0e2820 a2=4000 a3=7ffd6f0e28bc items=0 ppid=1 pid=989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:33:12.714000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 15 11:33:12.592586 systemd[1]: Queued start job for default target multi-user.target. Jul 15 11:33:10.554850 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-07-15T11:33:10Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.100 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.100 /var/lib/torcx/store]" Jul 15 11:33:12.592596 systemd[1]: Unnecessary job was removed for dev-vda6.device. Jul 15 11:33:10.555068 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-07-15T11:33:10Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 15 11:33:12.595516 systemd[1]: systemd-journald.service: Deactivated successfully. 
Jul 15 11:33:10.555085 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-07-15T11:33:10Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 15 11:33:10.555110 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-07-15T11:33:10Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Jul 15 11:33:10.555120 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-07-15T11:33:10Z" level=debug msg="skipped missing lower profile" missing profile=oem Jul 15 11:33:10.555148 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-07-15T11:33:10Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Jul 15 11:33:10.555159 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-07-15T11:33:10Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Jul 15 11:33:10.555339 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-07-15T11:33:10Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Jul 15 11:33:10.555371 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-07-15T11:33:10Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 15 11:33:10.555382 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-07-15T11:33:10Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 15 11:33:10.556100 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-07-15T11:33:10Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Jul 15 11:33:10.556142 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-07-15T11:33:10Z" level=debug msg="new archive/reference added to cache" format=tgz 
name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Jul 15 11:33:10.556170 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-07-15T11:33:10Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.100: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.100 Jul 15 11:33:10.556184 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-07-15T11:33:10Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Jul 15 11:33:10.556204 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-07-15T11:33:10Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.100: no such file or directory" path=/var/lib/torcx/store/3510.3.100 Jul 15 11:33:10.556218 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-07-15T11:33:10Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Jul 15 11:33:12.343878 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-07-15T11:33:12Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 15 11:33:12.344143 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-07-15T11:33:12Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 15 11:33:12.344234 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-07-15T11:33:12Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl 
Jul 15 11:33:12.344379 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-07-15T11:33:12Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 15 11:33:12.344428 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-07-15T11:33:12Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Jul 15 11:33:12.344480 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-07-15T11:33:12Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Jul 15 11:33:12.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:12.719016 systemd[1]: Started systemd-journald.service. Jul 15 11:33:12.719350 systemd[1]: Mounted dev-hugepages.mount. Jul 15 11:33:12.720224 systemd[1]: Mounted dev-mqueue.mount. Jul 15 11:33:12.721040 systemd[1]: Mounted media.mount. Jul 15 11:33:12.721791 systemd[1]: Mounted sys-kernel-debug.mount. Jul 15 11:33:12.722660 systemd[1]: Mounted sys-kernel-tracing.mount. Jul 15 11:33:12.723573 systemd[1]: Mounted tmp.mount. Jul 15 11:33:12.724533 systemd[1]: Finished flatcar-tmpfiles.service. Jul 15 11:33:12.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:33:12.725621 systemd[1]: Finished kmod-static-nodes.service. Jul 15 11:33:12.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:12.726684 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 15 11:33:12.726826 systemd[1]: Finished modprobe@configfs.service. Jul 15 11:33:12.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:12.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:12.727875 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 11:33:12.728065 systemd[1]: Finished modprobe@dm_mod.service. Jul 15 11:33:12.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:12.728000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:12.729208 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 15 11:33:12.729408 systemd[1]: Finished modprobe@drm.service. Jul 15 11:33:12.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:33:12.730000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:12.730625 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 11:33:12.730857 systemd[1]: Finished modprobe@efi_pstore.service. Jul 15 11:33:12.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:12.731000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:12.732207 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 15 11:33:12.732399 systemd[1]: Finished modprobe@fuse.service. Jul 15 11:33:12.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:12.733000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:12.733544 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 11:33:12.733678 systemd[1]: Finished modprobe@loop.service. Jul 15 11:33:12.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:33:12.734000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:12.734754 systemd[1]: Finished systemd-modules-load.service. Jul 15 11:33:12.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:12.735891 systemd[1]: Finished systemd-network-generator.service. Jul 15 11:33:12.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:12.737104 systemd[1]: Finished systemd-remount-fs.service. Jul 15 11:33:12.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:12.738406 systemd[1]: Reached target network-pre.target. Jul 15 11:33:12.740295 systemd[1]: Mounting sys-fs-fuse-connections.mount... Jul 15 11:33:12.742059 systemd[1]: Mounting sys-kernel-config.mount... Jul 15 11:33:12.743176 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 15 11:33:12.744289 systemd[1]: Starting systemd-hwdb-update.service... Jul 15 11:33:12.746073 systemd[1]: Starting systemd-journal-flush.service... Jul 15 11:33:12.747230 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 11:33:12.748101 systemd[1]: Starting systemd-random-seed.service... 
Jul 15 11:33:12.749227 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 15 11:33:12.750136 systemd[1]: Starting systemd-sysctl.service... Jul 15 11:33:12.751202 systemd-journald[989]: Time spent on flushing to /var/log/journal/992d811e58d84463b11295c1ebf4a1ad is 12.880ms for 1102 entries. Jul 15 11:33:12.751202 systemd-journald[989]: System Journal (/var/log/journal/992d811e58d84463b11295c1ebf4a1ad) is 8.0M, max 195.6M, 187.6M free. Jul 15 11:33:12.804457 systemd-journald[989]: Received client request to flush runtime journal. Jul 15 11:33:12.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:12.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:12.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:12.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:12.751921 systemd[1]: Starting systemd-sysusers.service... Jul 15 11:33:12.755040 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 15 11:33:12.804881 udevadm[1011]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 15 11:33:12.756281 systemd[1]: Mounted sys-kernel-config.mount. Jul 15 11:33:12.757313 systemd[1]: Finished systemd-udev-trigger.service. 
Jul 15 11:33:12.759157 systemd[1]: Starting systemd-udev-settle.service... Jul 15 11:33:12.769384 systemd[1]: Finished systemd-sysctl.service. Jul 15 11:33:12.771546 systemd[1]: Finished systemd-sysusers.service. Jul 15 11:33:12.790481 systemd[1]: Finished systemd-random-seed.service. Jul 15 11:33:12.791479 systemd[1]: Reached target first-boot-complete.target. Jul 15 11:33:12.805332 systemd[1]: Finished systemd-journal-flush.service. Jul 15 11:33:12.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:13.184353 systemd[1]: Finished systemd-hwdb-update.service. Jul 15 11:33:13.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:13.185000 audit: BPF prog-id=21 op=LOAD Jul 15 11:33:13.185000 audit: BPF prog-id=22 op=LOAD Jul 15 11:33:13.185000 audit: BPF prog-id=7 op=UNLOAD Jul 15 11:33:13.185000 audit: BPF prog-id=8 op=UNLOAD Jul 15 11:33:13.186570 systemd[1]: Starting systemd-udevd.service... Jul 15 11:33:13.201402 systemd-udevd[1014]: Using default interface naming scheme 'v252'. Jul 15 11:33:13.213538 systemd[1]: Started systemd-udevd.service. Jul 15 11:33:13.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:13.217000 audit: BPF prog-id=23 op=LOAD Jul 15 11:33:13.218637 systemd[1]: Starting systemd-networkd.service... 
Jul 15 11:33:13.222000 audit: BPF prog-id=24 op=LOAD Jul 15 11:33:13.223000 audit: BPF prog-id=25 op=LOAD Jul 15 11:33:13.223000 audit: BPF prog-id=26 op=LOAD Jul 15 11:33:13.223650 systemd[1]: Starting systemd-userdbd.service... Jul 15 11:33:13.236996 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Jul 15 11:33:13.248407 systemd[1]: Started systemd-userdbd.service. Jul 15 11:33:13.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:13.263957 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 15 11:33:13.278040 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jul 15 11:33:13.284040 kernel: ACPI: button: Power Button [PWRF] Jul 15 11:33:13.289567 systemd-networkd[1033]: lo: Link UP Jul 15 11:33:13.289808 systemd-networkd[1033]: lo: Gained carrier Jul 15 11:33:13.290264 systemd-networkd[1033]: Enumeration completed Jul 15 11:33:13.290422 systemd[1]: Started systemd-networkd.service. Jul 15 11:33:13.290649 systemd-networkd[1033]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 15 11:33:13.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:33:13.291832 systemd-networkd[1033]: eth0: Link UP Jul 15 11:33:13.291904 systemd-networkd[1033]: eth0: Gained carrier Jul 15 11:33:13.296000 audit[1028]: AVC avc: denied { confidentiality } for pid=1028 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Jul 15 11:33:13.296000 audit[1028]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55afbfa480e0 a1=338ac a2=7f80f0d5abc5 a3=5 items=110 ppid=1014 pid=1028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:33:13.296000 audit: CWD cwd="/" Jul 15 11:33:13.296000 audit: PATH item=0 name=(null) inode=44 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=1 name=(null) inode=13297 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=2 name=(null) inode=13297 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=3 name=(null) inode=13298 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=4 name=(null) inode=13297 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=5 name=(null) inode=13299 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=6 name=(null) inode=13297 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=7 name=(null) inode=13300 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=8 name=(null) inode=13300 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=9 name=(null) inode=13301 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=10 name=(null) inode=13300 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=11 name=(null) inode=13302 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=12 name=(null) inode=13300 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=13 name=(null) inode=13303 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=14 name=(null) inode=13300 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=15 name=(null) inode=13304 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=16 name=(null) inode=13300 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=17 name=(null) inode=13305 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=18 name=(null) inode=13297 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=19 name=(null) inode=13306 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=20 name=(null) inode=13306 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=21 name=(null) inode=13307 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=22 name=(null) inode=13306 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=23 name=(null) inode=13308 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=24 name=(null) inode=13306 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=25 name=(null) inode=13309 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=26 name=(null) inode=13306 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=27 name=(null) inode=13310 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=28 name=(null) inode=13306 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=29 name=(null) inode=13311 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=30 name=(null) inode=13297 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=31 name=(null) inode=13312 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=32 name=(null) inode=13312 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 
11:33:13.296000 audit: PATH item=33 name=(null) inode=15361 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=34 name=(null) inode=13312 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=35 name=(null) inode=15362 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=36 name=(null) inode=13312 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=37 name=(null) inode=15363 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=38 name=(null) inode=13312 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=39 name=(null) inode=15364 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=40 name=(null) inode=13312 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=41 name=(null) inode=15365 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=42 
name=(null) inode=13297 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=43 name=(null) inode=15366 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=44 name=(null) inode=15366 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=45 name=(null) inode=15367 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=46 name=(null) inode=15366 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=47 name=(null) inode=15368 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=48 name=(null) inode=15366 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=49 name=(null) inode=15369 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=50 name=(null) inode=15366 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=51 name=(null) inode=15370 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=52 name=(null) inode=15366 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=53 name=(null) inode=15371 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=54 name=(null) inode=44 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=55 name=(null) inode=15372 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=56 name=(null) inode=15372 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=57 name=(null) inode=15373 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=58 name=(null) inode=15372 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=59 name=(null) inode=15374 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=60 name=(null) inode=15372 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=61 name=(null) inode=15375 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=62 name=(null) inode=15375 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=63 name=(null) inode=15376 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=64 name=(null) inode=15375 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=65 name=(null) inode=15377 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=66 name=(null) inode=15375 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=67 name=(null) inode=15378 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=68 name=(null) inode=15375 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=69 name=(null) inode=15379 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=70 name=(null) inode=15375 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=71 name=(null) inode=15380 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=72 name=(null) inode=15372 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=73 name=(null) inode=15381 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=74 name=(null) inode=15381 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=75 name=(null) inode=15382 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=76 name=(null) inode=15381 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=77 name=(null) inode=15383 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=78 name=(null) inode=15381 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=79 name=(null) inode=15384 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=80 name=(null) inode=15381 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=81 name=(null) inode=15385 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=82 name=(null) inode=15381 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=83 name=(null) inode=15386 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=84 name=(null) inode=15372 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=85 name=(null) inode=15387 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=86 name=(null) inode=15387 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=87 name=(null) inode=15388 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 
11:33:13.296000 audit: PATH item=88 name=(null) inode=15387 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=89 name=(null) inode=15389 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=90 name=(null) inode=15387 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=91 name=(null) inode=15390 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=92 name=(null) inode=15387 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=93 name=(null) inode=15391 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=94 name=(null) inode=15387 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=95 name=(null) inode=15392 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=96 name=(null) inode=15372 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=97 
name=(null) inode=15393 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=98 name=(null) inode=15393 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=99 name=(null) inode=15394 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=100 name=(null) inode=15393 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=101 name=(null) inode=15395 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=102 name=(null) inode=15393 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=103 name=(null) inode=15396 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=104 name=(null) inode=15393 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=105 name=(null) inode=15397 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=106 name=(null) inode=15393 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=107 name=(null) inode=15398 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PATH item=109 name=(null) inode=15399 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:13.296000 audit: PROCTITLE proctitle="(udev-worker)" Jul 15 11:33:13.304120 systemd-networkd[1033]: eth0: DHCPv4 address 10.0.0.91/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 15 11:33:13.322053 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jul 15 11:33:13.322332 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jul 15 11:33:13.322442 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jul 15 11:33:13.324034 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jul 15 11:33:13.329039 kernel: mousedev: PS/2 mouse device common for all mice Jul 15 11:33:13.387149 kernel: kvm: Nested Virtualization enabled Jul 15 11:33:13.387232 kernel: SVM: kvm: Nested Paging enabled Jul 15 11:33:13.387246 kernel: SVM: Virtual VMLOAD VMSAVE supported Jul 15 11:33:13.388300 kernel: SVM: Virtual GIF supported Jul 15 11:33:13.405033 kernel: EDAC MC: Ver: 3.0.0 Jul 15 11:33:13.431402 systemd[1]: Finished systemd-udev-settle.service. 
Jul 15 11:33:13.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:13.433390 systemd[1]: Starting lvm2-activation-early.service... Jul 15 11:33:13.440789 lvm[1050]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 15 11:33:13.473631 systemd[1]: Finished lvm2-activation-early.service. Jul 15 11:33:13.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:13.474598 systemd[1]: Reached target cryptsetup.target. Jul 15 11:33:13.476170 systemd[1]: Starting lvm2-activation.service... Jul 15 11:33:13.479512 lvm[1051]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 15 11:33:13.503181 systemd[1]: Finished lvm2-activation.service. Jul 15 11:33:13.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:13.504167 systemd[1]: Reached target local-fs-pre.target. Jul 15 11:33:13.505045 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 15 11:33:13.505070 systemd[1]: Reached target local-fs.target. Jul 15 11:33:13.505870 systemd[1]: Reached target machines.target. Jul 15 11:33:13.507664 systemd[1]: Starting ldconfig.service... Jul 15 11:33:13.508643 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Jul 15 11:33:13.508677 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 15 11:33:13.509492 systemd[1]: Starting systemd-boot-update.service... Jul 15 11:33:13.511214 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Jul 15 11:33:13.513158 systemd[1]: Starting systemd-machine-id-commit.service... Jul 15 11:33:13.515217 systemd[1]: Starting systemd-sysext.service... Jul 15 11:33:13.517063 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1053 (bootctl) Jul 15 11:33:13.518077 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Jul 15 11:33:13.521613 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Jul 15 11:33:13.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:13.526714 systemd[1]: Unmounting usr-share-oem.mount... Jul 15 11:33:13.531648 systemd[1]: usr-share-oem.mount: Deactivated successfully. Jul 15 11:33:13.531851 systemd[1]: Unmounted usr-share-oem.mount. Jul 15 11:33:13.542029 kernel: loop0: detected capacity change from 0 to 229808 Jul 15 11:33:13.828644 systemd-fsck[1061]: fsck.fat 4.2 (2021-01-31) Jul 15 11:33:13.828644 systemd-fsck[1061]: /dev/vda1: 790 files, 120725/258078 clusters Jul 15 11:33:13.829742 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Jul 15 11:33:13.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:33:13.833414 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 15 11:33:13.832596 systemd[1]: Mounting boot.mount... Jul 15 11:33:13.839474 systemd[1]: Mounted boot.mount. Jul 15 11:33:13.846877 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 15 11:33:13.847464 systemd[1]: Finished systemd-machine-id-commit.service. Jul 15 11:33:13.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:13.850166 kernel: loop1: detected capacity change from 0 to 229808 Jul 15 11:33:13.853393 systemd[1]: Finished systemd-boot-update.service. Jul 15 11:33:13.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:13.855324 (sd-sysext)[1066]: Using extensions 'kubernetes'. Jul 15 11:33:13.855693 (sd-sysext)[1066]: Merged extensions into '/usr'. Jul 15 11:33:13.871618 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 11:33:13.873111 systemd[1]: Mounting usr-share-oem.mount... Jul 15 11:33:13.873967 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 15 11:33:13.875200 systemd[1]: Starting modprobe@dm_mod.service... Jul 15 11:33:13.877111 systemd[1]: Starting modprobe@efi_pstore.service... Jul 15 11:33:13.878778 systemd[1]: Starting modprobe@loop.service... Jul 15 11:33:13.879626 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Jul 15 11:33:13.879720 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 15 11:33:13.879811 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 11:33:13.881999 systemd[1]: Mounted usr-share-oem.mount. Jul 15 11:33:13.883220 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 11:33:13.883314 systemd[1]: Finished modprobe@dm_mod.service. Jul 15 11:33:13.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:13.884000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:13.884370 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 11:33:13.884457 systemd[1]: Finished modprobe@efi_pstore.service. Jul 15 11:33:13.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:13.885000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:13.885574 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 11:33:13.885663 systemd[1]: Finished modprobe@loop.service. 
Jul 15 11:33:13.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:13.886000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:13.886727 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 11:33:13.886811 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 15 11:33:13.887632 systemd[1]: Finished systemd-sysext.service. Jul 15 11:33:13.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:13.889436 systemd[1]: Starting ensure-sysext.service... Jul 15 11:33:13.891173 systemd[1]: Starting systemd-tmpfiles-setup.service... Jul 15 11:33:13.896033 systemd[1]: Reloading. Jul 15 11:33:13.904466 systemd-tmpfiles[1073]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 15 11:33:13.906311 systemd-tmpfiles[1073]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 15 11:33:13.907366 ldconfig[1052]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 15 11:33:13.909487 systemd-tmpfiles[1073]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Jul 15 11:33:13.943923 /usr/lib/systemd/system-generators/torcx-generator[1093]: time="2025-07-15T11:33:13Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.100 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.100 /var/lib/torcx/store]" Jul 15 11:33:13.943958 /usr/lib/systemd/system-generators/torcx-generator[1093]: time="2025-07-15T11:33:13Z" level=info msg="torcx already run" Jul 15 11:33:14.004148 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 15 11:33:14.004164 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 15 11:33:14.020776 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jul 15 11:33:14.071000 audit: BPF prog-id=27 op=LOAD Jul 15 11:33:14.071000 audit: BPF prog-id=18 op=UNLOAD Jul 15 11:33:14.071000 audit: BPF prog-id=28 op=LOAD Jul 15 11:33:14.071000 audit: BPF prog-id=29 op=LOAD Jul 15 11:33:14.071000 audit: BPF prog-id=19 op=UNLOAD Jul 15 11:33:14.071000 audit: BPF prog-id=20 op=UNLOAD Jul 15 11:33:14.072000 audit: BPF prog-id=30 op=LOAD Jul 15 11:33:14.072000 audit: BPF prog-id=31 op=LOAD Jul 15 11:33:14.072000 audit: BPF prog-id=21 op=UNLOAD Jul 15 11:33:14.072000 audit: BPF prog-id=22 op=UNLOAD Jul 15 11:33:14.073000 audit: BPF prog-id=32 op=LOAD Jul 15 11:33:14.073000 audit: BPF prog-id=23 op=UNLOAD Jul 15 11:33:14.074000 audit: BPF prog-id=33 op=LOAD Jul 15 11:33:14.074000 audit: BPF prog-id=24 op=UNLOAD Jul 15 11:33:14.074000 audit: BPF prog-id=34 op=LOAD Jul 15 11:33:14.074000 audit: BPF prog-id=35 op=LOAD Jul 15 11:33:14.074000 audit: BPF prog-id=25 op=UNLOAD Jul 15 11:33:14.074000 audit: BPF prog-id=26 op=UNLOAD Jul 15 11:33:14.076823 systemd[1]: Finished ldconfig.service. Jul 15 11:33:14.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:14.078648 systemd[1]: Finished systemd-tmpfiles-setup.service. Jul 15 11:33:14.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:14.082308 systemd[1]: Starting audit-rules.service... Jul 15 11:33:14.083896 systemd[1]: Starting clean-ca-certificates.service... Jul 15 11:33:14.085644 systemd[1]: Starting systemd-journal-catalog-update.service... Jul 15 11:33:14.087000 audit: BPF prog-id=36 op=LOAD Jul 15 11:33:14.088053 systemd[1]: Starting systemd-resolved.service... 
Jul 15 11:33:14.089000 audit: BPF prog-id=37 op=LOAD Jul 15 11:33:14.089994 systemd[1]: Starting systemd-timesyncd.service... Jul 15 11:33:14.091534 systemd[1]: Starting systemd-update-utmp.service... Jul 15 11:33:14.094543 systemd[1]: Finished clean-ca-certificates.service. Jul 15 11:33:14.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:14.095000 audit[1145]: SYSTEM_BOOT pid=1145 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jul 15 11:33:14.098663 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 15 11:33:14.099794 systemd[1]: Finished systemd-update-utmp.service. Jul 15 11:33:14.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:14.103379 systemd[1]: Finished systemd-journal-catalog-update.service. Jul 15 11:33:14.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:14.104612 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 11:33:14.104800 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 15 11:33:14.105899 systemd[1]: Starting modprobe@dm_mod.service... 
Jul 15 11:33:14.107548 systemd[1]: Starting modprobe@efi_pstore.service... Jul 15 11:33:14.109071 systemd[1]: Starting modprobe@loop.service... Jul 15 11:33:14.109785 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 15 11:33:14.109884 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 15 11:33:14.110814 systemd[1]: Starting systemd-update-done.service... Jul 15 11:33:14.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:14.113000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:14.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:14.114000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:14.111681 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 15 11:33:14.111764 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 11:33:14.112616 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 11:33:14.112718 systemd[1]: Finished modprobe@dm_mod.service. 
Jul 15 11:33:14.113825 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 11:33:14.113918 systemd[1]: Finished modprobe@efi_pstore.service. Jul 15 11:33:14.115059 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 11:33:14.115143 systemd[1]: Finished modprobe@loop.service. Jul 15 11:33:14.115000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:14.115000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:14.116220 systemd[1]: Finished systemd-update-done.service. Jul 15 11:33:14.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:14.117372 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 11:33:14.117470 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. 
Jul 15 11:33:14.118000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jul 15 11:33:14.118000 audit[1160]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe9d314c30 a2=420 a3=0 items=0 ppid=1135 pid=1160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:33:14.118413 augenrules[1160]: No rules Jul 15 11:33:14.118000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jul 15 11:33:14.118848 systemd[1]: Finished audit-rules.service. Jul 15 11:33:14.119800 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 11:33:14.119976 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 15 11:33:14.120931 systemd[1]: Starting modprobe@dm_mod.service... Jul 15 11:33:14.122674 systemd[1]: Starting modprobe@efi_pstore.service... Jul 15 11:33:14.124538 systemd[1]: Starting modprobe@loop.service... Jul 15 11:33:14.125291 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 15 11:33:14.125428 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 15 11:33:14.125558 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 15 11:33:14.125669 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 11:33:14.126886 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jul 15 11:33:14.127054 systemd[1]: Finished modprobe@dm_mod.service. Jul 15 11:33:14.128369 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 11:33:14.128506 systemd[1]: Finished modprobe@efi_pstore.service. Jul 15 11:33:14.129762 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 11:33:14.129898 systemd[1]: Finished modprobe@loop.service. Jul 15 11:33:14.131072 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 11:33:14.131191 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 15 11:33:14.134838 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 11:33:14.135395 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 15 11:33:14.136761 systemd[1]: Starting modprobe@dm_mod.service... Jul 15 11:33:14.138549 systemd[1]: Starting modprobe@drm.service... Jul 15 11:33:14.140356 systemd[1]: Starting modprobe@efi_pstore.service... Jul 15 11:33:14.142347 systemd[1]: Starting modprobe@loop.service... Jul 15 11:33:14.143178 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 15 11:33:14.143331 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 15 11:33:14.144707 systemd[1]: Starting systemd-networkd-wait-online.service... Jul 15 11:33:14.145726 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 15 11:33:14.145865 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jul 15 11:33:14.147630 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 11:33:14.147770 systemd[1]: Finished modprobe@dm_mod.service. Jul 15 11:33:14.148998 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 15 11:33:14.149111 systemd-resolved[1141]: Positive Trust Anchors: Jul 15 11:33:14.149124 systemd-resolved[1141]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 15 11:33:14.149150 systemd-resolved[1141]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 15 11:33:14.149172 systemd[1]: Finished modprobe@drm.service. Jul 15 11:33:14.150486 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 11:33:14.150618 systemd[1]: Finished modprobe@efi_pstore.service. Jul 15 11:33:14.151844 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 11:33:14.151946 systemd[1]: Finished modprobe@loop.service. Jul 15 11:33:14.153224 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 11:33:14.153307 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 15 11:33:14.154608 systemd[1]: Finished ensure-sysext.service. Jul 15 11:33:14.157221 systemd-resolved[1141]: Defaulting to hostname 'linux'. Jul 15 11:33:14.158753 systemd[1]: Started systemd-resolved.service. Jul 15 11:33:14.159616 systemd[1]: Reached target network.target. Jul 15 11:33:14.160369 systemd[1]: Reached target nss-lookup.target. 
Jul 15 11:33:14.169291 systemd[1]: Started systemd-timesyncd.service. Jul 15 11:33:14.170172 systemd[1]: Reached target sysinit.target. Jul 15 11:33:14.170997 systemd[1]: Started motdgen.path. Jul 15 11:33:14.171699 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Jul 15 11:33:14.629726 systemd[1]: Started systemd-tmpfiles-clean.timer. Jul 15 11:33:14.629765 systemd-resolved[1141]: Clock change detected. Flushing caches. Jul 15 11:33:14.630541 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 15 11:33:14.630563 systemd[1]: Reached target paths.target. Jul 15 11:33:14.630565 systemd-timesyncd[1142]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 15 11:33:14.631282 systemd[1]: Reached target time-set.target. Jul 15 11:33:14.631302 systemd-timesyncd[1142]: Initial clock synchronization to Tue 2025-07-15 11:33:14.629721 UTC. Jul 15 11:33:14.632176 systemd[1]: Started logrotate.timer. Jul 15 11:33:14.632966 systemd[1]: Started mdadm.timer. Jul 15 11:33:14.633589 systemd[1]: Reached target timers.target. Jul 15 11:33:14.634569 systemd[1]: Listening on dbus.socket. Jul 15 11:33:14.636164 systemd[1]: Starting docker.socket... Jul 15 11:33:14.638797 systemd[1]: Listening on sshd.socket. Jul 15 11:33:14.639691 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 15 11:33:14.640089 systemd[1]: Listening on docker.socket. Jul 15 11:33:14.641037 systemd[1]: Reached target sockets.target. Jul 15 11:33:14.641830 systemd[1]: Reached target basic.target. Jul 15 11:33:14.642639 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 15 11:33:14.642663 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. 
Jul 15 11:33:14.643441 systemd[1]: Starting containerd.service... Jul 15 11:33:14.645003 systemd[1]: Starting dbus.service... Jul 15 11:33:14.646501 systemd[1]: Starting enable-oem-cloudinit.service... Jul 15 11:33:14.648264 systemd[1]: Starting extend-filesystems.service... Jul 15 11:33:14.649302 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Jul 15 11:33:14.650160 systemd[1]: Starting motdgen.service... Jul 15 11:33:14.650530 jq[1178]: false Jul 15 11:33:14.651685 systemd[1]: Starting prepare-helm.service... Jul 15 11:33:14.653331 systemd[1]: Starting ssh-key-proc-cmdline.service... Jul 15 11:33:14.655030 systemd[1]: Starting sshd-keygen.service... Jul 15 11:33:14.658119 systemd[1]: Starting systemd-logind.service... Jul 15 11:33:14.658897 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 15 11:33:14.658942 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 15 11:33:14.660907 extend-filesystems[1179]: Found loop1 Jul 15 11:33:14.660907 extend-filesystems[1179]: Found sr0 Jul 15 11:33:14.660907 extend-filesystems[1179]: Found vda Jul 15 11:33:14.660072 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Jul 15 11:33:14.665016 extend-filesystems[1179]: Found vda1 Jul 15 11:33:14.665016 extend-filesystems[1179]: Found vda2 Jul 15 11:33:14.665016 extend-filesystems[1179]: Found vda3 Jul 15 11:33:14.665016 extend-filesystems[1179]: Found usr Jul 15 11:33:14.665016 extend-filesystems[1179]: Found vda4 Jul 15 11:33:14.665016 extend-filesystems[1179]: Found vda6 Jul 15 11:33:14.665016 extend-filesystems[1179]: Found vda7 Jul 15 11:33:14.665016 extend-filesystems[1179]: Found vda9 Jul 15 11:33:14.665016 extend-filesystems[1179]: Checking size of /dev/vda9 Jul 15 11:33:14.660592 systemd[1]: Starting update-engine.service... Jul 15 11:33:14.672731 dbus-daemon[1177]: [system] SELinux support is enabled Jul 15 11:33:14.664532 systemd[1]: Starting update-ssh-keys-after-ignition.service... Jul 15 11:33:14.669611 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 15 11:33:14.669753 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Jul 15 11:33:14.671434 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 15 11:33:14.671553 systemd[1]: Finished ssh-key-proc-cmdline.service. Jul 15 11:33:14.676595 systemd[1]: Started dbus.service. Jul 15 11:33:14.679296 systemd[1]: motdgen.service: Deactivated successfully. Jul 15 11:33:14.679493 systemd[1]: Finished motdgen.service. Jul 15 11:33:14.682921 jq[1198]: true Jul 15 11:33:14.685802 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 15 11:33:14.686350 jq[1203]: true Jul 15 11:33:14.686077 systemd[1]: Reached target system-config.target. Jul 15 11:33:14.687038 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 15 11:33:14.687059 systemd[1]: Reached target user-config.target. 
Jul 15 11:33:14.691202 tar[1201]: linux-amd64/LICENSE Jul 15 11:33:14.691363 tar[1201]: linux-amd64/helm Jul 15 11:33:14.701665 extend-filesystems[1179]: Resized partition /dev/vda9 Jul 15 11:33:14.703989 extend-filesystems[1222]: resize2fs 1.46.5 (30-Dec-2021) Jul 15 11:33:14.709398 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 15 11:33:14.720199 update_engine[1192]: I0715 11:33:14.720067 1192 main.cc:92] Flatcar Update Engine starting Jul 15 11:33:14.721635 systemd[1]: Started update-engine.service. Jul 15 11:33:14.722833 update_engine[1192]: I0715 11:33:14.721689 1192 update_check_scheduler.cc:74] Next update check in 7m54s Jul 15 11:33:14.727417 systemd[1]: Started locksmithd.service. Jul 15 11:33:14.729677 env[1204]: time="2025-07-15T11:33:14.729580664Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Jul 15 11:33:14.732413 systemd-logind[1189]: Watching system buttons on /dev/input/event1 (Power Button) Jul 15 11:33:14.732437 systemd-logind[1189]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 15 11:33:14.734258 systemd-logind[1189]: New seat seat0. Jul 15 11:33:14.737904 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 15 11:33:14.739875 systemd[1]: Started systemd-logind.service. Jul 15 11:33:14.750045 env[1204]: time="2025-07-15T11:33:14.750013936Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 15 11:33:14.759261 env[1204]: time="2025-07-15T11:33:14.759230087Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Jul 15 11:33:14.762190 extend-filesystems[1222]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 15 11:33:14.762190 extend-filesystems[1222]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 15 11:33:14.762190 extend-filesystems[1222]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 15 11:33:14.760460 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 15 11:33:14.769266 env[1204]: time="2025-07-15T11:33:14.768337693Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.188-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 15 11:33:14.769266 env[1204]: time="2025-07-15T11:33:14.768371697Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 15 11:33:14.769266 env[1204]: time="2025-07-15T11:33:14.768580148Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 15 11:33:14.769266 env[1204]: time="2025-07-15T11:33:14.768596298Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 15 11:33:14.769266 env[1204]: time="2025-07-15T11:33:14.768609132Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 15 11:33:14.769266 env[1204]: time="2025-07-15T11:33:14.768618600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Jul 15 11:33:14.769266 env[1204]: time="2025-07-15T11:33:14.768677931Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 15 11:33:14.769266 env[1204]: time="2025-07-15T11:33:14.768870693Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 15 11:33:14.769266 env[1204]: time="2025-07-15T11:33:14.768992150Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 15 11:33:14.769266 env[1204]: time="2025-07-15T11:33:14.769006337Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 15 11:33:14.769449 extend-filesystems[1179]: Resized filesystem in /dev/vda9 Jul 15 11:33:14.770583 bash[1230]: Updated "/home/core/.ssh/authorized_keys" Jul 15 11:33:14.760606 systemd[1]: Finished extend-filesystems.service. Jul 15 11:33:14.770849 env[1204]: time="2025-07-15T11:33:14.769047835Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 15 11:33:14.770849 env[1204]: time="2025-07-15T11:33:14.769058956Z" level=info msg="metadata content store policy set" policy=shared Jul 15 11:33:14.763724 systemd[1]: Finished update-ssh-keys-after-ignition.service. Jul 15 11:33:14.776397 env[1204]: time="2025-07-15T11:33:14.774941315Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 15 11:33:14.776397 env[1204]: time="2025-07-15T11:33:14.774963406Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Jul 15 11:33:14.776397 env[1204]: time="2025-07-15T11:33:14.774976441Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 15 11:33:14.776397 env[1204]: time="2025-07-15T11:33:14.775002249Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 15 11:33:14.776397 env[1204]: time="2025-07-15T11:33:14.775017758Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 15 11:33:14.776397 env[1204]: time="2025-07-15T11:33:14.775030272Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 15 11:33:14.776397 env[1204]: time="2025-07-15T11:33:14.775041413Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 15 11:33:14.776397 env[1204]: time="2025-07-15T11:33:14.775053956Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 15 11:33:14.776397 env[1204]: time="2025-07-15T11:33:14.775065418Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Jul 15 11:33:14.776397 env[1204]: time="2025-07-15T11:33:14.775077280Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 15 11:33:14.776397 env[1204]: time="2025-07-15T11:33:14.775090916Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 15 11:33:14.776397 env[1204]: time="2025-07-15T11:33:14.775101866Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 15 11:33:14.776397 env[1204]: time="2025-07-15T11:33:14.775170284Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Jul 15 11:33:14.776397 env[1204]: time="2025-07-15T11:33:14.775228494Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 15 11:33:14.775903 locksmithd[1232]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 15 11:33:14.776817 env[1204]: time="2025-07-15T11:33:14.775425844Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 15 11:33:14.776817 env[1204]: time="2025-07-15T11:33:14.775445380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 15 11:33:14.776817 env[1204]: time="2025-07-15T11:33:14.775457122Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 15 11:33:14.776817 env[1204]: time="2025-07-15T11:33:14.775493661Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 15 11:33:14.776817 env[1204]: time="2025-07-15T11:33:14.775504902Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 15 11:33:14.776817 env[1204]: time="2025-07-15T11:33:14.775516283Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 15 11:33:14.776817 env[1204]: time="2025-07-15T11:33:14.775525771Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 15 11:33:14.776817 env[1204]: time="2025-07-15T11:33:14.775538054Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 15 11:33:14.776817 env[1204]: time="2025-07-15T11:33:14.775548934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Jul 15 11:33:14.776817 env[1204]: time="2025-07-15T11:33:14.775558933Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 15 11:33:14.776817 env[1204]: time="2025-07-15T11:33:14.775568521Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 15 11:33:14.776817 env[1204]: time="2025-07-15T11:33:14.775581135Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 15 11:33:14.776817 env[1204]: time="2025-07-15T11:33:14.775669500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 15 11:33:14.776817 env[1204]: time="2025-07-15T11:33:14.775683176Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 15 11:33:14.776817 env[1204]: time="2025-07-15T11:33:14.775695259Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 15 11:33:14.777119 env[1204]: time="2025-07-15T11:33:14.775705889Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 15 11:33:14.777119 env[1204]: time="2025-07-15T11:33:14.775717270Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jul 15 11:33:14.777119 env[1204]: time="2025-07-15T11:33:14.775727108Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 15 11:33:14.777119 env[1204]: time="2025-07-15T11:33:14.775742577Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Jul 15 11:33:14.777119 env[1204]: time="2025-07-15T11:33:14.775774297Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 15 11:33:14.777215 env[1204]: time="2025-07-15T11:33:14.775971597Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 15 11:33:14.777215 env[1204]: time="2025-07-15T11:33:14.776019507Z" level=info msg="Connect containerd service" Jul 15 11:33:14.777215 env[1204]: time="2025-07-15T11:33:14.776049272Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 15 11:33:14.777763 env[1204]: time="2025-07-15T11:33:14.777435772Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 15 11:33:14.778015 env[1204]: time="2025-07-15T11:33:14.778000251Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 15 11:33:14.778111 env[1204]: time="2025-07-15T11:33:14.778095259Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 15 11:33:14.778227 systemd[1]: Started containerd.service. 
Jul 15 11:33:14.779269 env[1204]: time="2025-07-15T11:33:14.779252499Z" level=info msg="containerd successfully booted in 0.056975s" Jul 15 11:33:14.780231 env[1204]: time="2025-07-15T11:33:14.779968661Z" level=info msg="Start subscribing containerd event" Jul 15 11:33:14.780375 env[1204]: time="2025-07-15T11:33:14.780351259Z" level=info msg="Start recovering state" Jul 15 11:33:14.780454 env[1204]: time="2025-07-15T11:33:14.780432802Z" level=info msg="Start event monitor" Jul 15 11:33:14.780495 env[1204]: time="2025-07-15T11:33:14.780462788Z" level=info msg="Start snapshots syncer" Jul 15 11:33:14.780495 env[1204]: time="2025-07-15T11:33:14.780477075Z" level=info msg="Start cni network conf syncer for default" Jul 15 11:33:14.780580 env[1204]: time="2025-07-15T11:33:14.780560792Z" level=info msg="Start streaming server" Jul 15 11:33:15.098767 tar[1201]: linux-amd64/README.md Jul 15 11:33:15.102475 systemd[1]: Finished prepare-helm.service. Jul 15 11:33:15.429637 sshd_keygen[1197]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 15 11:33:15.447416 systemd[1]: Finished sshd-keygen.service. Jul 15 11:33:15.449522 systemd[1]: Starting issuegen.service... Jul 15 11:33:15.454310 systemd[1]: issuegen.service: Deactivated successfully. Jul 15 11:33:15.454434 systemd[1]: Finished issuegen.service. Jul 15 11:33:15.456268 systemd[1]: Starting systemd-user-sessions.service... Jul 15 11:33:15.461174 systemd[1]: Finished systemd-user-sessions.service. Jul 15 11:33:15.463084 systemd[1]: Started getty@tty1.service. Jul 15 11:33:15.464726 systemd[1]: Started serial-getty@ttyS0.service. Jul 15 11:33:15.465678 systemd[1]: Reached target getty.target. Jul 15 11:33:15.585995 systemd-networkd[1033]: eth0: Gained IPv6LL Jul 15 11:33:15.587472 systemd[1]: Finished systemd-networkd-wait-online.service. Jul 15 11:33:15.588631 systemd[1]: Reached target network-online.target. Jul 15 11:33:15.590483 systemd[1]: Starting kubelet.service... 
Jul 15 11:33:16.212463 systemd[1]: Started kubelet.service. Jul 15 11:33:16.213593 systemd[1]: Reached target multi-user.target. Jul 15 11:33:16.215304 systemd[1]: Starting systemd-update-utmp-runlevel.service... Jul 15 11:33:16.221401 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jul 15 11:33:16.221513 systemd[1]: Finished systemd-update-utmp-runlevel.service. Jul 15 11:33:16.222550 systemd[1]: Startup finished in 602ms (kernel) + 5.328s (initrd) + 5.717s (userspace) = 11.647s. Jul 15 11:33:16.599898 kubelet[1258]: E0715 11:33:16.599834 1258 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 11:33:16.601451 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 11:33:16.601565 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 11:33:18.172824 systemd[1]: Created slice system-sshd.slice. Jul 15 11:33:18.173638 systemd[1]: Started sshd@0-10.0.0.91:22-10.0.0.1:41894.service. Jul 15 11:33:18.203930 sshd[1267]: Accepted publickey for core from 10.0.0.1 port 41894 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:33:18.205091 sshd[1267]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:33:18.212790 systemd-logind[1189]: New session 1 of user core. Jul 15 11:33:18.213531 systemd[1]: Created slice user-500.slice. Jul 15 11:33:18.214561 systemd[1]: Starting user-runtime-dir@500.service... Jul 15 11:33:18.221172 systemd[1]: Finished user-runtime-dir@500.service. Jul 15 11:33:18.222140 systemd[1]: Starting user@500.service... 
Jul 15 11:33:18.224535 (systemd)[1270]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:33:18.292365 systemd[1270]: Queued start job for default target default.target. Jul 15 11:33:18.292749 systemd[1270]: Reached target paths.target. Jul 15 11:33:18.292768 systemd[1270]: Reached target sockets.target. Jul 15 11:33:18.292779 systemd[1270]: Reached target timers.target. Jul 15 11:33:18.292789 systemd[1270]: Reached target basic.target. Jul 15 11:33:18.292822 systemd[1270]: Reached target default.target. Jul 15 11:33:18.292843 systemd[1270]: Startup finished in 63ms. Jul 15 11:33:18.292925 systemd[1]: Started user@500.service. Jul 15 11:33:18.293926 systemd[1]: Started session-1.scope. Jul 15 11:33:18.344182 systemd[1]: Started sshd@1-10.0.0.91:22-10.0.0.1:41906.service. Jul 15 11:33:18.373062 sshd[1279]: Accepted publickey for core from 10.0.0.1 port 41906 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:33:18.374176 sshd[1279]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:33:18.377806 systemd-logind[1189]: New session 2 of user core. Jul 15 11:33:18.378631 systemd[1]: Started session-2.scope. Jul 15 11:33:18.431064 sshd[1279]: pam_unix(sshd:session): session closed for user core Jul 15 11:33:18.433698 systemd[1]: sshd@1-10.0.0.91:22-10.0.0.1:41906.service: Deactivated successfully. Jul 15 11:33:18.434313 systemd[1]: session-2.scope: Deactivated successfully. Jul 15 11:33:18.434904 systemd-logind[1189]: Session 2 logged out. Waiting for processes to exit. Jul 15 11:33:18.436126 systemd[1]: Started sshd@2-10.0.0.91:22-10.0.0.1:41920.service. Jul 15 11:33:18.436830 systemd-logind[1189]: Removed session 2. 
Jul 15 11:33:18.464130 sshd[1285]: Accepted publickey for core from 10.0.0.1 port 41920 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:33:18.465150 sshd[1285]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:33:18.468314 systemd-logind[1189]: New session 3 of user core. Jul 15 11:33:18.469269 systemd[1]: Started session-3.scope. Jul 15 11:33:18.518919 sshd[1285]: pam_unix(sshd:session): session closed for user core Jul 15 11:33:18.521937 systemd[1]: sshd@2-10.0.0.91:22-10.0.0.1:41920.service: Deactivated successfully. Jul 15 11:33:18.522458 systemd[1]: session-3.scope: Deactivated successfully. Jul 15 11:33:18.522917 systemd-logind[1189]: Session 3 logged out. Waiting for processes to exit. Jul 15 11:33:18.523976 systemd[1]: Started sshd@3-10.0.0.91:22-10.0.0.1:41936.service. Jul 15 11:33:18.524594 systemd-logind[1189]: Removed session 3. Jul 15 11:33:18.552248 sshd[1291]: Accepted publickey for core from 10.0.0.1 port 41936 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:33:18.553208 sshd[1291]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:33:18.556562 systemd-logind[1189]: New session 4 of user core. Jul 15 11:33:18.557248 systemd[1]: Started session-4.scope. Jul 15 11:33:18.609283 sshd[1291]: pam_unix(sshd:session): session closed for user core Jul 15 11:33:18.611578 systemd[1]: sshd@3-10.0.0.91:22-10.0.0.1:41936.service: Deactivated successfully. Jul 15 11:33:18.612041 systemd[1]: session-4.scope: Deactivated successfully. Jul 15 11:33:18.612428 systemd-logind[1189]: Session 4 logged out. Waiting for processes to exit. Jul 15 11:33:18.613257 systemd[1]: Started sshd@4-10.0.0.91:22-10.0.0.1:41944.service. Jul 15 11:33:18.614067 systemd-logind[1189]: Removed session 4. 
Jul 15 11:33:18.640897 sshd[1297]: Accepted publickey for core from 10.0.0.1 port 41944 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo
Jul 15 11:33:18.641848 sshd[1297]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:33:18.644828 systemd-logind[1189]: New session 5 of user core.
Jul 15 11:33:18.645543 systemd[1]: Started session-5.scope.
Jul 15 11:33:18.699308 sudo[1300]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 15 11:33:18.699494 sudo[1300]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 15 11:33:18.718579 systemd[1]: Starting docker.service...
Jul 15 11:33:18.747072 env[1312]: time="2025-07-15T11:33:18.747022987Z" level=info msg="Starting up"
Jul 15 11:33:18.748173 env[1312]: time="2025-07-15T11:33:18.748141725Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Jul 15 11:33:18.748173 env[1312]: time="2025-07-15T11:33:18.748156472Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Jul 15 11:33:18.748250 env[1312]: time="2025-07-15T11:33:18.748178774Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Jul 15 11:33:18.748250 env[1312]: time="2025-07-15T11:33:18.748197369Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jul 15 11:33:18.749627 env[1312]: time="2025-07-15T11:33:18.749607593Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Jul 15 11:33:18.749627 env[1312]: time="2025-07-15T11:33:18.749624675Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Jul 15 11:33:18.749690 env[1312]: time="2025-07-15T11:33:18.749639333Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Jul 15 11:33:18.749690 env[1312]: time="2025-07-15T11:33:18.749649181Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jul 15 11:33:19.529512 env[1312]: time="2025-07-15T11:33:19.529467300Z" level=info msg="Loading containers: start."
Jul 15 11:33:19.630908 kernel: Initializing XFRM netlink socket
Jul 15 11:33:19.656608 env[1312]: time="2025-07-15T11:33:19.656573214Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Jul 15 11:33:19.700861 systemd-networkd[1033]: docker0: Link UP
Jul 15 11:33:19.717004 env[1312]: time="2025-07-15T11:33:19.716971180Z" level=info msg="Loading containers: done."
Jul 15 11:33:19.726991 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2788398720-merged.mount: Deactivated successfully.
Jul 15 11:33:19.727341 env[1312]: time="2025-07-15T11:33:19.727299145Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 15 11:33:19.727499 env[1312]: time="2025-07-15T11:33:19.727479102Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Jul 15 11:33:19.727609 env[1312]: time="2025-07-15T11:33:19.727574741Z" level=info msg="Daemon has completed initialization"
Jul 15 11:33:19.742651 systemd[1]: Started docker.service.
Jul 15 11:33:19.748878 env[1312]: time="2025-07-15T11:33:19.748832089Z" level=info msg="API listen on /run/docker.sock"
Jul 15 11:33:20.244753 env[1204]: time="2025-07-15T11:33:20.244707866Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\""
Jul 15 11:33:20.825493 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount329318257.mount: Deactivated successfully.
Jul 15 11:33:22.242731 env[1204]: time="2025-07-15T11:33:22.242672419Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.33.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:33:22.244316 env[1204]: time="2025-07-15T11:33:22.244288108Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:33:22.245857 env[1204]: time="2025-07-15T11:33:22.245811715Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.33.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:33:22.247311 env[1204]: time="2025-07-15T11:33:22.247287352Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:33:22.247852 env[1204]: time="2025-07-15T11:33:22.247819359Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\""
Jul 15 11:33:22.248362 env[1204]: time="2025-07-15T11:33:22.248338874Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\""
Jul 15 11:33:24.751465 env[1204]: time="2025-07-15T11:33:24.751410622Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.33.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:33:24.753486 env[1204]: time="2025-07-15T11:33:24.753436361Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:33:24.755349 env[1204]: time="2025-07-15T11:33:24.755302630Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.33.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:33:24.756946 env[1204]: time="2025-07-15T11:33:24.756904854Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:33:24.757602 env[1204]: time="2025-07-15T11:33:24.757561295Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\""
Jul 15 11:33:24.758074 env[1204]: time="2025-07-15T11:33:24.758045403Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\""
Jul 15 11:33:26.178858 env[1204]: time="2025-07-15T11:33:26.178775722Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.33.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:33:26.180455 env[1204]: time="2025-07-15T11:33:26.180420586Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:33:26.182480 env[1204]: time="2025-07-15T11:33:26.182443949Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.33.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:33:26.184329 env[1204]: time="2025-07-15T11:33:26.184278099Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:33:26.185099 env[1204]: time="2025-07-15T11:33:26.185066286Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\""
Jul 15 11:33:26.185628 env[1204]: time="2025-07-15T11:33:26.185589057Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\""
Jul 15 11:33:26.734259 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 15 11:33:26.734432 systemd[1]: Stopped kubelet.service.
Jul 15 11:33:26.735688 systemd[1]: Starting kubelet.service...
Jul 15 11:33:26.824984 systemd[1]: Started kubelet.service.
Jul 15 11:33:26.860124 kubelet[1448]: E0715 11:33:26.860069 1448 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 15 11:33:26.863039 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 15 11:33:26.863152 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 15 11:33:27.774857 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4132532322.mount: Deactivated successfully.
Jul 15 11:33:28.716657 env[1204]: time="2025-07-15T11:33:28.716592177Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.33.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:33:28.719088 env[1204]: time="2025-07-15T11:33:28.719028134Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:33:28.720747 env[1204]: time="2025-07-15T11:33:28.720693156Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.33.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:33:28.722034 env[1204]: time="2025-07-15T11:33:28.721996840Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:33:28.722396 env[1204]: time="2025-07-15T11:33:28.722362887Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\""
Jul 15 11:33:28.722908 env[1204]: time="2025-07-15T11:33:28.722874256Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Jul 15 11:33:29.257183 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3536181391.mount: Deactivated successfully.
Jul 15 11:33:30.809063 env[1204]: time="2025-07-15T11:33:30.809005020Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.12.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:33:30.810864 env[1204]: time="2025-07-15T11:33:30.810821125Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:33:30.812538 env[1204]: time="2025-07-15T11:33:30.812501546Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.12.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:33:30.814131 env[1204]: time="2025-07-15T11:33:30.814100104Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:33:30.814754 env[1204]: time="2025-07-15T11:33:30.814720667Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Jul 15 11:33:30.815244 env[1204]: time="2025-07-15T11:33:30.815221546Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 15 11:33:31.323373 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3147870844.mount: Deactivated successfully.
Jul 15 11:33:31.328409 env[1204]: time="2025-07-15T11:33:31.328371663Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:33:31.330195 env[1204]: time="2025-07-15T11:33:31.330169203Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:33:31.331640 env[1204]: time="2025-07-15T11:33:31.331601338Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:33:31.333098 env[1204]: time="2025-07-15T11:33:31.333067167Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:33:31.333476 env[1204]: time="2025-07-15T11:33:31.333439705Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jul 15 11:33:31.333925 env[1204]: time="2025-07-15T11:33:31.333903835Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Jul 15 11:33:32.714649 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount135494193.mount: Deactivated successfully.
Jul 15 11:33:35.538770 env[1204]: time="2025-07-15T11:33:35.538703193Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.21-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:33:35.540550 env[1204]: time="2025-07-15T11:33:35.540497397Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:33:35.542267 env[1204]: time="2025-07-15T11:33:35.542217733Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.21-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:33:35.543926 env[1204]: time="2025-07-15T11:33:35.543901109Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:33:35.544724 env[1204]: time="2025-07-15T11:33:35.544690579Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Jul 15 11:33:36.984491 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 15 11:33:36.984696 systemd[1]: Stopped kubelet.service.
Jul 15 11:33:36.985951 systemd[1]: Starting kubelet.service...
Jul 15 11:33:37.065227 systemd[1]: Started kubelet.service.
Jul 15 11:33:37.098485 kubelet[1482]: E0715 11:33:37.098430 1482 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 15 11:33:37.100196 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 15 11:33:37.100306 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 15 11:33:37.460294 systemd[1]: Stopped kubelet.service.
Jul 15 11:33:37.461999 systemd[1]: Starting kubelet.service...
Jul 15 11:33:37.484892 systemd[1]: Reloading.
Jul 15 11:33:37.553834 /usr/lib/systemd/system-generators/torcx-generator[1515]: time="2025-07-15T11:33:37Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.100 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.100 /var/lib/torcx/store]"
Jul 15 11:33:37.553870 /usr/lib/systemd/system-generators/torcx-generator[1515]: time="2025-07-15T11:33:37Z" level=info msg="torcx already run"
Jul 15 11:33:38.456906 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Jul 15 11:33:38.456925 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Jul 15 11:33:38.474431 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 15 11:33:38.550538 systemd[1]: Started kubelet.service.
Jul 15 11:33:38.551677 systemd[1]: Stopping kubelet.service...
Jul 15 11:33:38.551901 systemd[1]: kubelet.service: Deactivated successfully.
Jul 15 11:33:38.552034 systemd[1]: Stopped kubelet.service.
Jul 15 11:33:38.553249 systemd[1]: Starting kubelet.service...
Jul 15 11:33:38.635529 systemd[1]: Started kubelet.service.
Jul 15 11:33:38.665268 kubelet[1564]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 15 11:33:38.665268 kubelet[1564]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 15 11:33:38.665268 kubelet[1564]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 15 11:33:38.665620 kubelet[1564]: I0715 11:33:38.665283 1564 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 15 11:33:39.340430 kubelet[1564]: I0715 11:33:39.340380 1564 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jul 15 11:33:39.340430 kubelet[1564]: I0715 11:33:39.340410 1564 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 15 11:33:39.340681 kubelet[1564]: I0715 11:33:39.340621 1564 server.go:956] "Client rotation is on, will bootstrap in background"
Jul 15 11:33:39.362575 kubelet[1564]: E0715 11:33:39.362536 1564 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.91:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.91:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jul 15 11:33:39.362742 kubelet[1564]: I0715 11:33:39.362691 1564 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 15 11:33:39.369332 kubelet[1564]: E0715 11:33:39.369289 1564 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jul 15 11:33:39.369332 kubelet[1564]: I0715 11:33:39.369321 1564 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jul 15 11:33:39.373151 kubelet[1564]: I0715 11:33:39.373121 1564 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 15 11:33:39.373359 kubelet[1564]: I0715 11:33:39.373326 1564 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 15 11:33:39.373512 kubelet[1564]: I0715 11:33:39.373351 1564 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 15 11:33:39.373512 kubelet[1564]: I0715 11:33:39.373510 1564 topology_manager.go:138] "Creating topology manager with none policy"
Jul 15 11:33:39.373619 kubelet[1564]: I0715 11:33:39.373519 1564 container_manager_linux.go:303] "Creating device plugin manager"
Jul 15 11:33:39.374219 kubelet[1564]: I0715 11:33:39.374199 1564 state_mem.go:36] "Initialized new in-memory state store"
Jul 15 11:33:39.375677 kubelet[1564]: I0715 11:33:39.375642 1564 kubelet.go:480] "Attempting to sync node with API server"
Jul 15 11:33:39.375677 kubelet[1564]: I0715 11:33:39.375658 1564 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 15 11:33:39.375677 kubelet[1564]: I0715 11:33:39.375676 1564 kubelet.go:386] "Adding apiserver pod source"
Jul 15 11:33:39.376967 kubelet[1564]: I0715 11:33:39.376946 1564 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 15 11:33:39.385140 kubelet[1564]: E0715 11:33:39.385101 1564 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.91:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.91:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jul 15 11:33:39.389402 kubelet[1564]: E0715 11:33:39.389355 1564 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.91:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.91:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jul 15 11:33:39.398580 kubelet[1564]: I0715 11:33:39.398556 1564 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Jul 15 11:33:39.399013 kubelet[1564]: I0715 11:33:39.398979 1564 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jul 15 11:33:39.399497 kubelet[1564]: W0715 11:33:39.399471 1564 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 15 11:33:39.401196 kubelet[1564]: I0715 11:33:39.401173 1564 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 15 11:33:39.401250 kubelet[1564]: I0715 11:33:39.401213 1564 server.go:1289] "Started kubelet"
Jul 15 11:33:39.401307 kubelet[1564]: I0715 11:33:39.401264 1564 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jul 15 11:33:39.402013 kubelet[1564]: I0715 11:33:39.401992 1564 server.go:317] "Adding debug handlers to kubelet server"
Jul 15 11:33:39.403723 kubelet[1564]: I0715 11:33:39.401260 1564 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 15 11:33:39.403723 kubelet[1564]: I0715 11:33:39.403308 1564 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 15 11:33:39.404613 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Jul 15 11:33:39.404709 kubelet[1564]: I0715 11:33:39.404686 1564 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 15 11:33:39.404860 kubelet[1564]: E0715 11:33:39.403725 1564 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.91:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.91:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18526983b523aa9d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-15 11:33:39.401190045 +0000 UTC m=+0.762272281,LastTimestamp:2025-07-15 11:33:39.401190045 +0000 UTC m=+0.762272281,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jul 15 11:33:39.404860 kubelet[1564]: I0715 11:33:39.404830 1564 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 15 11:33:39.405333 kubelet[1564]: E0715 11:33:39.405316 1564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 11:33:39.406030 kubelet[1564]: I0715 11:33:39.406003 1564 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 15 11:33:39.406354 kubelet[1564]: I0715 11:33:39.406339 1564 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 15 11:33:39.406396 kubelet[1564]: I0715 11:33:39.406384 1564 reconciler.go:26] "Reconciler: start to sync state"
Jul 15 11:33:39.406773 kubelet[1564]: E0715 11:33:39.406737 1564 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.91:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.91:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jul 15 11:33:39.406839 kubelet[1564]: E0715 11:33:39.406807 1564 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.91:6443: connect: connection refused" interval="200ms"
Jul 15 11:33:39.407100 kubelet[1564]: I0715 11:33:39.407075 1564 factory.go:223] Registration of the systemd container factory successfully
Jul 15 11:33:39.407169 kubelet[1564]: I0715 11:33:39.407151 1564 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 15 11:33:39.407337 kubelet[1564]: E0715 11:33:39.407319 1564 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 15 11:33:39.409010 kubelet[1564]: I0715 11:33:39.408988 1564 factory.go:223] Registration of the containerd container factory successfully
Jul 15 11:33:39.419303 kubelet[1564]: I0715 11:33:39.419279 1564 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 15 11:33:39.419303 kubelet[1564]: I0715 11:33:39.419294 1564 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 15 11:33:39.419303 kubelet[1564]: I0715 11:33:39.419308 1564 state_mem.go:36] "Initialized new in-memory state store"
Jul 15 11:33:39.422844 kubelet[1564]: I0715 11:33:39.422802 1564 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Jul 15 11:33:39.423793 kubelet[1564]: I0715 11:33:39.423776 1564 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Jul 15 11:33:39.423793 kubelet[1564]: I0715 11:33:39.423793 1564 status_manager.go:230] "Starting to sync pod status with apiserver"
Jul 15 11:33:39.423872 kubelet[1564]: I0715 11:33:39.423809 1564 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 15 11:33:39.423872 kubelet[1564]: I0715 11:33:39.423819 1564 kubelet.go:2436] "Starting kubelet main sync loop"
Jul 15 11:33:39.423936 kubelet[1564]: E0715 11:33:39.423856 1564 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 15 11:33:39.424392 kubelet[1564]: E0715 11:33:39.424366 1564 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.91:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.91:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jul 15 11:33:39.507085 kubelet[1564]: E0715 11:33:39.507053 1564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 11:33:39.524339 kubelet[1564]: E0715 11:33:39.524310 1564 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jul 15 11:33:39.607841 kubelet[1564]: E0715 11:33:39.607677 1564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 11:33:39.608033 kubelet[1564]: E0715 11:33:39.607994 1564 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.91:6443: connect: connection refused" interval="400ms"
Jul 15 11:33:39.708465 kubelet[1564]: E0715 11:33:39.708418 1564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 11:33:39.724650 kubelet[1564]: E0715 11:33:39.724594 1564 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jul 15 11:33:39.809056 kubelet[1564]: E0715 11:33:39.809013 1564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 11:33:39.825900 kubelet[1564]: I0715 11:33:39.825862 1564 policy_none.go:49] "None policy: Start"
Jul 15 11:33:39.825963 kubelet[1564]: I0715 11:33:39.825915 1564 memory_manager.go:186] "Starting memorymanager" policy="None"
Jul 15 11:33:39.825963 kubelet[1564]: I0715 11:33:39.825929 1564 state_mem.go:35] "Initializing new in-memory state store"
Jul 15 11:33:39.835475 systemd[1]: Created slice kubepods.slice.
Jul 15 11:33:39.838924 systemd[1]: Created slice kubepods-burstable.slice.
Jul 15 11:33:39.841088 systemd[1]: Created slice kubepods-besteffort.slice.
Jul 15 11:33:39.847723 kubelet[1564]: E0715 11:33:39.847690 1564 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jul 15 11:33:39.847838 kubelet[1564]: I0715 11:33:39.847823 1564 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 15 11:33:39.847977 kubelet[1564]: I0715 11:33:39.847837 1564 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 15 11:33:39.848599 kubelet[1564]: I0715 11:33:39.848055 1564 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 15 11:33:39.848916 kubelet[1564]: E0715 11:33:39.848652 1564 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jul 15 11:33:39.848916 kubelet[1564]: E0715 11:33:39.848681 1564 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jul 15 11:33:39.949301 kubelet[1564]: I0715 11:33:39.949184 1564 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 15 11:33:39.949573 kubelet[1564]: E0715 11:33:39.949525 1564 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.91:6443/api/v1/nodes\": dial tcp 10.0.0.91:6443: connect: connection refused" node="localhost"
Jul 15 11:33:40.009485 kubelet[1564]: E0715 11:33:40.009437 1564 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.91:6443: connect: connection refused" interval="800ms"
Jul 15 11:33:40.151345 kubelet[1564]: I0715 11:33:40.151305 1564 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 15 11:33:40.151685 kubelet[1564]: E0715 11:33:40.151634 1564 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.91:6443/api/v1/nodes\": dial tcp 10.0.0.91:6443: connect: connection refused" node="localhost"
Jul 15 11:33:40.210257 kubelet[1564]: I0715 11:33:40.210146 1564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 15 11:33:40.210257 kubelet[1564]: I0715 11:33:40.210190 1564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 15 11:33:40.210257 kubelet[1564]: I0715 11:33:40.210212 1564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 15 11:33:40.210257 kubelet[1564]: I0715 11:33:40.210233 1564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 15 11:33:40.210416 kubelet[1564]: I0715 11:33:40.210292 1564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 15 11:33:40.341649 systemd[1]: Created slice kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice.
Jul 15 11:33:40.350505 kubelet[1564]: E0715 11:33:40.350472 1564 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 11:33:40.350767 kubelet[1564]: E0715 11:33:40.350742 1564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:33:40.351333 env[1204]: time="2025-07-15T11:33:40.351284281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,}" Jul 15 11:33:40.376434 systemd[1]: Created slice kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice. Jul 15 11:33:40.377772 kubelet[1564]: E0715 11:33:40.377750 1564 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 11:33:40.387550 systemd[1]: Created slice kubepods-burstable-pod1ea08a3d6427ab1f414f9221c6261446.slice. 
Jul 15 11:33:40.389124 kubelet[1564]: E0715 11:33:40.389093 1564 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 15 11:33:40.411036 kubelet[1564]: I0715 11:33:40.411014 1564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1ea08a3d6427ab1f414f9221c6261446-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1ea08a3d6427ab1f414f9221c6261446\") " pod="kube-system/kube-apiserver-localhost"
Jul 15 11:33:40.411122 kubelet[1564]: I0715 11:33:40.411069 1564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1ea08a3d6427ab1f414f9221c6261446-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1ea08a3d6427ab1f414f9221c6261446\") " pod="kube-system/kube-apiserver-localhost"
Jul 15 11:33:40.411122 kubelet[1564]: I0715 11:33:40.411087 1564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1ea08a3d6427ab1f414f9221c6261446-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1ea08a3d6427ab1f414f9221c6261446\") " pod="kube-system/kube-apiserver-localhost"
Jul 15 11:33:40.411122 kubelet[1564]: I0715 11:33:40.411112 1564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost"
Jul 15 11:33:40.544132 kubelet[1564]: E0715 11:33:40.544077 1564 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.91:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.91:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jul 15 11:33:40.553330 kubelet[1564]: I0715 11:33:40.553313 1564 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 15 11:33:40.553621 kubelet[1564]: E0715 11:33:40.553581 1564 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.91:6443/api/v1/nodes\": dial tcp 10.0.0.91:6443: connect: connection refused" node="localhost"
Jul 15 11:33:40.561053 kubelet[1564]: E0715 11:33:40.561030 1564 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.91:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.91:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jul 15 11:33:40.679023 kubelet[1564]: E0715 11:33:40.678983 1564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:33:40.679461 env[1204]: time="2025-07-15T11:33:40.679424103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,}"
Jul 15 11:33:40.689670 kubelet[1564]: E0715 11:33:40.689641 1564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:33:40.690020 env[1204]: time="2025-07-15T11:33:40.689976720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1ea08a3d6427ab1f414f9221c6261446,Namespace:kube-system,Attempt:0,}"
Jul 15 11:33:40.810279 kubelet[1564]: E0715 11:33:40.810172 1564 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.91:6443: connect: connection refused" interval="1.6s"
Jul 15 11:33:40.858172 kubelet[1564]: E0715 11:33:40.858136 1564 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.91:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.91:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jul 15 11:33:40.869878 kubelet[1564]: E0715 11:33:40.869838 1564 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.91:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.91:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jul 15 11:33:40.933198 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1231863261.mount: Deactivated successfully.
Jul 15 11:33:40.942770 env[1204]: time="2025-07-15T11:33:40.942713140Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:33:40.946901 env[1204]: time="2025-07-15T11:33:40.946846941Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:33:40.947858 env[1204]: time="2025-07-15T11:33:40.947805529Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:33:40.949787 env[1204]: time="2025-07-15T11:33:40.949733624Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:33:40.951374 env[1204]: time="2025-07-15T11:33:40.951332221Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:33:40.952536 env[1204]: time="2025-07-15T11:33:40.952500101Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:33:40.954002 env[1204]: time="2025-07-15T11:33:40.953975718Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:33:40.956211 env[1204]: time="2025-07-15T11:33:40.956175041Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:33:40.958216 env[1204]: time="2025-07-15T11:33:40.958186553Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:33:40.958933 env[1204]: time="2025-07-15T11:33:40.958900221Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:33:40.959929 env[1204]: time="2025-07-15T11:33:40.959901559Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:33:40.961346 env[1204]: time="2025-07-15T11:33:40.961309378Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:33:40.972800 env[1204]: time="2025-07-15T11:33:40.972708582Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 15 11:33:40.972800 env[1204]: time="2025-07-15T11:33:40.972759237Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 15 11:33:40.972800 env[1204]: time="2025-07-15T11:33:40.972769266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 15 11:33:40.972993 env[1204]: time="2025-07-15T11:33:40.972947991Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4097660bbb6c06147b16c0954a342092446d8d0bb27cbec3d4dd1eb78e1fc073 pid=1609 runtime=io.containerd.runc.v2
Jul 15 11:33:40.986865 systemd[1]: Started cri-containerd-4097660bbb6c06147b16c0954a342092446d8d0bb27cbec3d4dd1eb78e1fc073.scope.
Jul 15 11:33:40.999288 env[1204]: time="2025-07-15T11:33:40.999163875Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 15 11:33:40.999288 env[1204]: time="2025-07-15T11:33:40.999209651Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 15 11:33:40.999288 env[1204]: time="2025-07-15T11:33:40.999222105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 15 11:33:40.999511 env[1204]: time="2025-07-15T11:33:40.999452076Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3aa297b123b1de1a9c7935e6e8d422523ee21add51bf17112799b7e19e7cbdff pid=1656 runtime=io.containerd.runc.v2
Jul 15 11:33:40.999921 env[1204]: time="2025-07-15T11:33:40.999853769Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 15 11:33:40.999999 env[1204]: time="2025-07-15T11:33:40.999930483Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 15 11:33:40.999999 env[1204]: time="2025-07-15T11:33:40.999982050Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 15 11:33:41.000250 env[1204]: time="2025-07-15T11:33:41.000205128Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/991133e19fe0cc9278cc08390408446181358d182a8c0584e0aaa060c6a0292f pid=1655 runtime=io.containerd.runc.v2
Jul 15 11:33:41.011095 systemd[1]: Started cri-containerd-991133e19fe0cc9278cc08390408446181358d182a8c0584e0aaa060c6a0292f.scope.
Jul 15 11:33:41.021462 systemd[1]: Started cri-containerd-3aa297b123b1de1a9c7935e6e8d422523ee21add51bf17112799b7e19e7cbdff.scope.
Jul 15 11:33:41.023408 env[1204]: time="2025-07-15T11:33:41.023348201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,} returns sandbox id \"4097660bbb6c06147b16c0954a342092446d8d0bb27cbec3d4dd1eb78e1fc073\""
Jul 15 11:33:41.024602 kubelet[1564]: E0715 11:33:41.024564 1564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:33:41.030500 env[1204]: time="2025-07-15T11:33:41.030459014Z" level=info msg="CreateContainer within sandbox \"4097660bbb6c06147b16c0954a342092446d8d0bb27cbec3d4dd1eb78e1fc073\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jul 15 11:33:41.046949 env[1204]: time="2025-07-15T11:33:41.046668176Z" level=info msg="CreateContainer within sandbox \"4097660bbb6c06147b16c0954a342092446d8d0bb27cbec3d4dd1eb78e1fc073\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"acda161b18c1204d0e6f9e66781b0258c5b0ff47f0c17a993bc174485d09462a\""
Jul 15 11:33:41.048107 env[1204]: time="2025-07-15T11:33:41.048070105Z" level=info msg="StartContainer for \"acda161b18c1204d0e6f9e66781b0258c5b0ff47f0c17a993bc174485d09462a\""
Jul 15 11:33:41.056308 env[1204]: time="2025-07-15T11:33:41.056261273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"991133e19fe0cc9278cc08390408446181358d182a8c0584e0aaa060c6a0292f\""
Jul 15 11:33:41.057335 kubelet[1564]: E0715 11:33:41.057141 1564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:33:41.062596 env[1204]: time="2025-07-15T11:33:41.062520199Z" level=info msg="CreateContainer within sandbox \"991133e19fe0cc9278cc08390408446181358d182a8c0584e0aaa060c6a0292f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jul 15 11:33:41.069736 env[1204]: time="2025-07-15T11:33:41.069698638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1ea08a3d6427ab1f414f9221c6261446,Namespace:kube-system,Attempt:0,} returns sandbox id \"3aa297b123b1de1a9c7935e6e8d422523ee21add51bf17112799b7e19e7cbdff\""
Jul 15 11:33:41.070474 kubelet[1564]: E0715 11:33:41.070445 1564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:33:41.074278 env[1204]: time="2025-07-15T11:33:41.074256264Z" level=info msg="CreateContainer within sandbox \"3aa297b123b1de1a9c7935e6e8d422523ee21add51bf17112799b7e19e7cbdff\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jul 15 11:33:41.078060 systemd[1]: Started cri-containerd-acda161b18c1204d0e6f9e66781b0258c5b0ff47f0c17a993bc174485d09462a.scope.
Jul 15 11:33:41.082759 env[1204]: time="2025-07-15T11:33:41.082702130Z" level=info msg="CreateContainer within sandbox \"991133e19fe0cc9278cc08390408446181358d182a8c0584e0aaa060c6a0292f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ea48a33a7b2443f508bd153b4fcf4347e40b654594dfdc1a251ef757cfcdbc16\""
Jul 15 11:33:41.083158 env[1204]: time="2025-07-15T11:33:41.083127607Z" level=info msg="StartContainer for \"ea48a33a7b2443f508bd153b4fcf4347e40b654594dfdc1a251ef757cfcdbc16\""
Jul 15 11:33:41.095573 env[1204]: time="2025-07-15T11:33:41.095529611Z" level=info msg="CreateContainer within sandbox \"3aa297b123b1de1a9c7935e6e8d422523ee21add51bf17112799b7e19e7cbdff\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"07746e9a8da2d9680a8b02d911192066d12f01590ad4be2bf4846c3768ba83b4\""
Jul 15 11:33:41.096070 env[1204]: time="2025-07-15T11:33:41.096040980Z" level=info msg="StartContainer for \"07746e9a8da2d9680a8b02d911192066d12f01590ad4be2bf4846c3768ba83b4\""
Jul 15 11:33:41.100590 systemd[1]: Started cri-containerd-ea48a33a7b2443f508bd153b4fcf4347e40b654594dfdc1a251ef757cfcdbc16.scope.
Jul 15 11:33:41.116386 systemd[1]: Started cri-containerd-07746e9a8da2d9680a8b02d911192066d12f01590ad4be2bf4846c3768ba83b4.scope.
Jul 15 11:33:41.130908 env[1204]: time="2025-07-15T11:33:41.130847833Z" level=info msg="StartContainer for \"acda161b18c1204d0e6f9e66781b0258c5b0ff47f0c17a993bc174485d09462a\" returns successfully"
Jul 15 11:33:41.143219 env[1204]: time="2025-07-15T11:33:41.143177651Z" level=info msg="StartContainer for \"ea48a33a7b2443f508bd153b4fcf4347e40b654594dfdc1a251ef757cfcdbc16\" returns successfully"
Jul 15 11:33:41.154327 env[1204]: time="2025-07-15T11:33:41.153922317Z" level=info msg="StartContainer for \"07746e9a8da2d9680a8b02d911192066d12f01590ad4be2bf4846c3768ba83b4\" returns successfully"
Jul 15 11:33:41.355519 kubelet[1564]: I0715 11:33:41.354909 1564 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 15 11:33:41.429302 kubelet[1564]: E0715 11:33:41.429273 1564 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 15 11:33:41.429393 kubelet[1564]: E0715 11:33:41.429370 1564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:33:41.430677 kubelet[1564]: E0715 11:33:41.430655 1564 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 15 11:33:41.430746 kubelet[1564]: E0715 11:33:41.430725 1564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:33:41.431946 kubelet[1564]: E0715 11:33:41.431926 1564 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 15 11:33:41.432021 kubelet[1564]: E0715 11:33:41.432000 1564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:33:42.425973 kubelet[1564]: E0715 11:33:42.425931 1564 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Jul 15 11:33:42.435137 kubelet[1564]: E0715 11:33:42.435104 1564 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 15 11:33:42.435357 kubelet[1564]: E0715 11:33:42.435330 1564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:33:42.435514 kubelet[1564]: E0715 11:33:42.435492 1564 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 15 11:33:42.435607 kubelet[1564]: E0715 11:33:42.435581 1564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:33:42.489600 kubelet[1564]: I0715 11:33:42.489562 1564 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Jul 15 11:33:42.489669 kubelet[1564]: E0715 11:33:42.489602 1564 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Jul 15 11:33:42.507903 kubelet[1564]: E0715 11:33:42.507862 1564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 11:33:42.608567 kubelet[1564]: E0715 11:33:42.608529 1564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 11:33:42.709603 kubelet[1564]: E0715 11:33:42.709494 1564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 11:33:42.810219 kubelet[1564]: E0715 11:33:42.810179 1564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 11:33:42.910566 kubelet[1564]: E0715 11:33:42.910537 1564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 11:33:43.011500 kubelet[1564]: E0715 11:33:43.011469 1564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 11:33:43.112241 kubelet[1564]: E0715 11:33:43.112199 1564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 11:33:43.207689 kubelet[1564]: I0715 11:33:43.207654 1564 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jul 15 11:33:43.212781 kubelet[1564]: E0715 11:33:43.212747 1564 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Jul 15 11:33:43.212781 kubelet[1564]: I0715 11:33:43.212779 1564 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jul 15 11:33:43.214168 kubelet[1564]: E0715 11:33:43.214137 1564 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Jul 15 11:33:43.214168 kubelet[1564]: I0715 11:33:43.214159 1564 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jul 15 11:33:43.215247 kubelet[1564]: E0715 11:33:43.215226 1564 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Jul 15 11:33:43.387229 kubelet[1564]: I0715 11:33:43.387123 1564 apiserver.go:52] "Watching apiserver"
Jul 15 11:33:43.406509 kubelet[1564]: I0715 11:33:43.406477 1564 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jul 15 11:33:43.434959 kubelet[1564]: I0715 11:33:43.434870 1564 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jul 15 11:33:43.438590 kubelet[1564]: E0715 11:33:43.438564 1564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:33:44.435742 kubelet[1564]: E0715 11:33:44.435698 1564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:33:44.573453 systemd[1]: Reloading.
Jul 15 11:33:44.638243 /usr/lib/systemd/system-generators/torcx-generator[1876]: time="2025-07-15T11:33:44Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.100 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.100 /var/lib/torcx/store]"
Jul 15 11:33:44.638272 /usr/lib/systemd/system-generators/torcx-generator[1876]: time="2025-07-15T11:33:44Z" level=info msg="torcx already run"
Jul 15 11:33:44.700855 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Jul 15 11:33:44.700871 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Jul 15 11:33:44.717864 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 15 11:33:44.806354 systemd[1]: Stopping kubelet.service...
Jul 15 11:33:44.829264 systemd[1]: kubelet.service: Deactivated successfully.
Jul 15 11:33:44.829434 systemd[1]: Stopped kubelet.service.
Jul 15 11:33:44.829473 systemd[1]: kubelet.service: Consumed 1.109s CPU time.
Jul 15 11:33:44.830958 systemd[1]: Starting kubelet.service...
Jul 15 11:33:44.922481 systemd[1]: Started kubelet.service.
Jul 15 11:33:44.954597 kubelet[1922]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 15 11:33:44.954597 kubelet[1922]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 15 11:33:44.954597 kubelet[1922]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 15 11:33:44.954597 kubelet[1922]: I0715 11:33:44.954553 1922 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 15 11:33:44.962763 kubelet[1922]: I0715 11:33:44.962726 1922 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jul 15 11:33:44.962763 kubelet[1922]: I0715 11:33:44.962752 1922 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 15 11:33:44.963209 kubelet[1922]: I0715 11:33:44.963190 1922 server.go:956] "Client rotation is on, will bootstrap in background"
Jul 15 11:33:44.965007 kubelet[1922]: I0715 11:33:44.964922 1922 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Jul 15 11:33:44.966935 kubelet[1922]: I0715 11:33:44.966912 1922 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 15 11:33:44.970480 kubelet[1922]: E0715 11:33:44.970453 1922 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jul 15 11:33:44.970480 kubelet[1922]: I0715 11:33:44.970479 1922 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jul 15 11:33:44.973672 kubelet[1922]: I0715 11:33:44.973656 1922 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 15 11:33:44.973913 kubelet[1922]: I0715 11:33:44.973834 1922 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 15 11:33:44.974034 kubelet[1922]: I0715 11:33:44.973860 1922 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 15 11:33:44.974119 kubelet[1922]: I0715 11:33:44.974039 1922 topology_manager.go:138] "Creating topology manager with none policy"
Jul 15 11:33:44.974119 kubelet[1922]: I0715 11:33:44.974048 1922 container_manager_linux.go:303] "Creating device plugin manager"
Jul 15 11:33:44.974119 kubelet[1922]: I0715 11:33:44.974101 1922 state_mem.go:36] "Initialized new in-memory state store"
Jul 15 11:33:44.975197 kubelet[1922]: I0715 11:33:44.975174 1922 kubelet.go:480] "Attempting to sync node with API server"
Jul 15 11:33:44.975197 kubelet[1922]: I0715 11:33:44.975190 1922 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 15 11:33:44.975197 kubelet[1922]: I0715 11:33:44.975207 1922 kubelet.go:386] "Adding apiserver pod source"
Jul 15 11:33:44.975197 kubelet[1922]: I0715 11:33:44.975218 1922 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 15 11:33:44.976387 kubelet[1922]: I0715 11:33:44.976360 1922 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Jul 15 11:33:44.980969 kubelet[1922]: I0715 11:33:44.976842 1922 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jul 15 11:33:44.980969 kubelet[1922]: I0715 11:33:44.979072 1922 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 15 11:33:44.980969 kubelet[1922]: I0715 11:33:44.979106 1922 server.go:1289] "Started kubelet"
Jul 15 11:33:44.980969 kubelet[1922]: I0715 11:33:44.980946 1922 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 15 11:33:44.981282 kubelet[1922]: I0715 11:33:44.981265 1922 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 15 11:33:44.981357 kubelet[1922]: I0715 11:33:44.981313 1922 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jul 15 11:33:44.982210 kubelet[1922]: I0715 11:33:44.982195 1922 server.go:317] "Adding debug handlers to kubelet server"
Jul 15 11:33:44.984465 kubelet[1922]: I0715 11:33:44.984437 1922 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 15 11:33:44.984783 kubelet[1922]: I0715 11:33:44.984761 1922 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 15 11:33:44.990314 kubelet[1922]: E0715 11:33:44.990285 1922 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 11:33:44.990407 kubelet[1922]: I0715 11:33:44.990387 1922 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 15 11:33:44.990499 kubelet[1922]: I0715 11:33:44.990471 1922 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 15 11:33:44.990742 kubelet[1922]: I0715 11:33:44.990715 1922 reconciler.go:26] "Reconciler: start to sync state"
Jul 15 11:33:44.992961 kubelet[1922]: I0715 11:33:44.992940 1922 factory.go:223] Registration of the systemd container factory successfully
Jul 15 11:33:44.993020 kubelet[1922]: I0715 11:33:44.993010 1922 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 15 11:33:44.994038 kubelet[1922]: I0715 11:33:44.994022 1922 factory.go:223] Registration of the containerd container factory successfully
Jul 15 11:33:44.994463 kubelet[1922]: E0715 11:33:44.994382 1922 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 15 11:33:45.001488 kubelet[1922]: I0715 11:33:45.001443 1922 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Jul 15 11:33:45.002791 kubelet[1922]: I0715 11:33:45.002765 1922 kubelet_network_linux.go:49] "Initialized iptables rules."
protocol="IPv6" Jul 15 11:33:45.002791 kubelet[1922]: I0715 11:33:45.002783 1922 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 15 11:33:45.002864 kubelet[1922]: I0715 11:33:45.002800 1922 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 15 11:33:45.002864 kubelet[1922]: I0715 11:33:45.002810 1922 kubelet.go:2436] "Starting kubelet main sync loop" Jul 15 11:33:45.002864 kubelet[1922]: E0715 11:33:45.002847 1922 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 15 11:33:45.018004 kubelet[1922]: I0715 11:33:45.017987 1922 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 15 11:33:45.018004 kubelet[1922]: I0715 11:33:45.018000 1922 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 15 11:33:45.018085 kubelet[1922]: I0715 11:33:45.018017 1922 state_mem.go:36] "Initialized new in-memory state store" Jul 15 11:33:45.018134 kubelet[1922]: I0715 11:33:45.018116 1922 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 15 11:33:45.018134 kubelet[1922]: I0715 11:33:45.018128 1922 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 15 11:33:45.018203 kubelet[1922]: I0715 11:33:45.018142 1922 policy_none.go:49] "None policy: Start" Jul 15 11:33:45.018203 kubelet[1922]: I0715 11:33:45.018150 1922 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 15 11:33:45.018203 kubelet[1922]: I0715 11:33:45.018158 1922 state_mem.go:35] "Initializing new in-memory state store" Jul 15 11:33:45.018267 kubelet[1922]: I0715 11:33:45.018234 1922 state_mem.go:75] "Updated machine memory state" Jul 15 11:33:45.022112 kubelet[1922]: E0715 11:33:45.022086 1922 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 15 11:33:45.022260 kubelet[1922]: I0715 
11:33:45.022240 1922 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 11:33:45.022308 kubelet[1922]: I0715 11:33:45.022256 1922 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 11:33:45.022765 kubelet[1922]: I0715 11:33:45.022745 1922 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 11:33:45.024284 kubelet[1922]: E0715 11:33:45.024260 1922 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 15 11:33:45.103901 kubelet[1922]: I0715 11:33:45.103861 1922 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 15 11:33:45.104056 kubelet[1922]: I0715 11:33:45.104028 1922 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 15 11:33:45.104255 kubelet[1922]: I0715 11:33:45.104220 1922 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 15 11:33:45.110741 kubelet[1922]: E0715 11:33:45.110716 1922 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 15 11:33:45.125640 kubelet[1922]: I0715 11:33:45.125617 1922 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 15 11:33:45.131788 kubelet[1922]: I0715 11:33:45.131765 1922 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 15 11:33:45.131856 kubelet[1922]: I0715 11:33:45.131839 1922 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 15 11:33:45.192314 kubelet[1922]: I0715 11:33:45.192287 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") 
pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:33:45.192397 kubelet[1922]: I0715 11:33:45.192323 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 15 11:33:45.192397 kubelet[1922]: I0715 11:33:45.192346 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1ea08a3d6427ab1f414f9221c6261446-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1ea08a3d6427ab1f414f9221c6261446\") " pod="kube-system/kube-apiserver-localhost" Jul 15 11:33:45.192397 kubelet[1922]: I0715 11:33:45.192366 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:33:45.192397 kubelet[1922]: I0715 11:33:45.192378 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:33:45.192397 kubelet[1922]: I0715 11:33:45.192392 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:33:45.192509 kubelet[1922]: I0715 11:33:45.192409 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1ea08a3d6427ab1f414f9221c6261446-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1ea08a3d6427ab1f414f9221c6261446\") " pod="kube-system/kube-apiserver-localhost" Jul 15 11:33:45.192509 kubelet[1922]: I0715 11:33:45.192426 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1ea08a3d6427ab1f414f9221c6261446-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1ea08a3d6427ab1f414f9221c6261446\") " pod="kube-system/kube-apiserver-localhost" Jul 15 11:33:45.192509 kubelet[1922]: I0715 11:33:45.192479 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:33:45.410605 kubelet[1922]: E0715 11:33:45.410551 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:33:45.410605 kubelet[1922]: E0715 11:33:45.410595 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:33:45.411907 kubelet[1922]: E0715 11:33:45.411853 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:33:45.572337 sudo[1960]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 15 11:33:45.572582 sudo[1960]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jul 15 11:33:45.976514 kubelet[1922]: I0715 11:33:45.976482 1922 apiserver.go:52] "Watching apiserver" Jul 15 11:33:45.991559 kubelet[1922]: I0715 11:33:45.991534 1922 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 15 11:33:46.011928 kubelet[1922]: I0715 11:33:46.011905 1922 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 15 11:33:46.012279 kubelet[1922]: I0715 11:33:46.012258 1922 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 15 11:33:46.012374 kubelet[1922]: E0715 11:33:46.012355 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:33:46.016288 kubelet[1922]: E0715 11:33:46.016269 1922 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 15 11:33:46.016494 kubelet[1922]: E0715 11:33:46.016480 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:33:46.017403 kubelet[1922]: E0715 11:33:46.017326 1922 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 15 11:33:46.017561 kubelet[1922]: E0715 11:33:46.017462 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:33:46.025479 sudo[1960]: pam_unix(sudo:session): session closed for user root Jul 15 11:33:46.029906 kubelet[1922]: I0715 11:33:46.029847 1922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.029838699 podStartE2EDuration="3.029838699s" podCreationTimestamp="2025-07-15 11:33:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 11:33:46.029621141 +0000 UTC m=+1.103902939" watchObservedRunningTime="2025-07-15 11:33:46.029838699 +0000 UTC m=+1.104120496" Jul 15 11:33:46.045600 kubelet[1922]: I0715 11:33:46.045547 1922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.04552991 podStartE2EDuration="1.04552991s" podCreationTimestamp="2025-07-15 11:33:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 11:33:46.036957487 +0000 UTC m=+1.111239284" watchObservedRunningTime="2025-07-15 11:33:46.04552991 +0000 UTC m=+1.119811707" Jul 15 11:33:46.054903 kubelet[1922]: I0715 11:33:46.054835 1922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.054817775 podStartE2EDuration="1.054817775s" podCreationTimestamp="2025-07-15 11:33:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 11:33:46.046270158 +0000 UTC m=+1.120551955" watchObservedRunningTime="2025-07-15 11:33:46.054817775 +0000 UTC m=+1.129099572" Jul 15 11:33:47.013535 kubelet[1922]: E0715 11:33:47.013503 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:33:47.013945 kubelet[1922]: E0715 11:33:47.013928 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:33:47.217350 sudo[1300]: pam_unix(sudo:session): session closed for user root Jul 15 11:33:47.218773 sshd[1297]: pam_unix(sshd:session): session closed for user core Jul 15 11:33:47.221372 systemd[1]: sshd@4-10.0.0.91:22-10.0.0.1:41944.service: Deactivated successfully. Jul 15 11:33:47.222094 systemd[1]: session-5.scope: Deactivated successfully. Jul 15 11:33:47.222225 systemd[1]: session-5.scope: Consumed 3.639s CPU time. Jul 15 11:33:47.222786 systemd-logind[1189]: Session 5 logged out. Waiting for processes to exit. Jul 15 11:33:47.223579 systemd-logind[1189]: Removed session 5. Jul 15 11:33:48.015009 kubelet[1922]: E0715 11:33:48.014972 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:33:50.229270 kubelet[1922]: I0715 11:33:50.229237 1922 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 15 11:33:50.229640 env[1204]: time="2025-07-15T11:33:50.229605307Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 15 11:33:50.229824 kubelet[1922]: I0715 11:33:50.229810 1922 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 15 11:33:50.556708 kubelet[1922]: E0715 11:33:50.556655 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:33:51.266969 kubelet[1922]: E0715 11:33:51.266919 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:33:51.651986 systemd[1]: Created slice kubepods-besteffort-pod2e7317e8_21f0_443c_8d20_7186c93f35d0.slice. Jul 15 11:33:51.667542 systemd[1]: Created slice kubepods-burstable-pod8c3f01ce_9388_41a4_9a1d_b6b5740073e9.slice. Jul 15 11:33:51.707325 systemd[1]: Created slice kubepods-besteffort-pode12f1a0d_794c_4eee_bb98_18645a855876.slice. Jul 15 11:33:51.735802 kubelet[1922]: I0715 11:33:51.735727 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8c3f01ce-9388-41a4-9a1d-b6b5740073e9-lib-modules\") pod \"cilium-86vm8\" (UID: \"8c3f01ce-9388-41a4-9a1d-b6b5740073e9\") " pod="kube-system/cilium-86vm8" Jul 15 11:33:51.735802 kubelet[1922]: I0715 11:33:51.735783 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8c3f01ce-9388-41a4-9a1d-b6b5740073e9-cilium-config-path\") pod \"cilium-86vm8\" (UID: \"8c3f01ce-9388-41a4-9a1d-b6b5740073e9\") " pod="kube-system/cilium-86vm8" Jul 15 11:33:51.735802 kubelet[1922]: I0715 11:33:51.735799 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/8c3f01ce-9388-41a4-9a1d-b6b5740073e9-host-proc-sys-net\") pod \"cilium-86vm8\" (UID: \"8c3f01ce-9388-41a4-9a1d-b6b5740073e9\") " pod="kube-system/cilium-86vm8" Jul 15 11:33:51.735802 kubelet[1922]: I0715 11:33:51.735812 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8c3f01ce-9388-41a4-9a1d-b6b5740073e9-hubble-tls\") pod \"cilium-86vm8\" (UID: \"8c3f01ce-9388-41a4-9a1d-b6b5740073e9\") " pod="kube-system/cilium-86vm8" Jul 15 11:33:51.736050 kubelet[1922]: I0715 11:33:51.735928 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkpps\" (UniqueName: \"kubernetes.io/projected/8c3f01ce-9388-41a4-9a1d-b6b5740073e9-kube-api-access-xkpps\") pod \"cilium-86vm8\" (UID: \"8c3f01ce-9388-41a4-9a1d-b6b5740073e9\") " pod="kube-system/cilium-86vm8" Jul 15 11:33:51.736050 kubelet[1922]: I0715 11:33:51.735991 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8c3f01ce-9388-41a4-9a1d-b6b5740073e9-etc-cni-netd\") pod \"cilium-86vm8\" (UID: \"8c3f01ce-9388-41a4-9a1d-b6b5740073e9\") " pod="kube-system/cilium-86vm8" Jul 15 11:33:51.736050 kubelet[1922]: I0715 11:33:51.736009 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e7317e8-21f0-443c-8d20-7186c93f35d0-lib-modules\") pod \"kube-proxy-sbcls\" (UID: \"2e7317e8-21f0-443c-8d20-7186c93f35d0\") " pod="kube-system/kube-proxy-sbcls" Jul 15 11:33:51.736126 kubelet[1922]: I0715 11:33:51.736024 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-db84q\" (UniqueName: \"kubernetes.io/projected/e12f1a0d-794c-4eee-bb98-18645a855876-kube-api-access-db84q\") pod 
\"cilium-operator-6c4d7847fc-jk879\" (UID: \"e12f1a0d-794c-4eee-bb98-18645a855876\") " pod="kube-system/cilium-operator-6c4d7847fc-jk879" Jul 15 11:33:51.736126 kubelet[1922]: I0715 11:33:51.736072 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2e7317e8-21f0-443c-8d20-7186c93f35d0-xtables-lock\") pod \"kube-proxy-sbcls\" (UID: \"2e7317e8-21f0-443c-8d20-7186c93f35d0\") " pod="kube-system/kube-proxy-sbcls" Jul 15 11:33:51.736126 kubelet[1922]: I0715 11:33:51.736086 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8c3f01ce-9388-41a4-9a1d-b6b5740073e9-xtables-lock\") pod \"cilium-86vm8\" (UID: \"8c3f01ce-9388-41a4-9a1d-b6b5740073e9\") " pod="kube-system/cilium-86vm8" Jul 15 11:33:51.736126 kubelet[1922]: I0715 11:33:51.736102 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8c3f01ce-9388-41a4-9a1d-b6b5740073e9-clustermesh-secrets\") pod \"cilium-86vm8\" (UID: \"8c3f01ce-9388-41a4-9a1d-b6b5740073e9\") " pod="kube-system/cilium-86vm8" Jul 15 11:33:51.736214 kubelet[1922]: I0715 11:33:51.736147 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2e7317e8-21f0-443c-8d20-7186c93f35d0-kube-proxy\") pod \"kube-proxy-sbcls\" (UID: \"2e7317e8-21f0-443c-8d20-7186c93f35d0\") " pod="kube-system/kube-proxy-sbcls" Jul 15 11:33:51.736214 kubelet[1922]: I0715 11:33:51.736169 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8c3f01ce-9388-41a4-9a1d-b6b5740073e9-cilium-run\") pod \"cilium-86vm8\" (UID: \"8c3f01ce-9388-41a4-9a1d-b6b5740073e9\") " 
pod="kube-system/cilium-86vm8" Jul 15 11:33:51.736214 kubelet[1922]: I0715 11:33:51.736183 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8c3f01ce-9388-41a4-9a1d-b6b5740073e9-bpf-maps\") pod \"cilium-86vm8\" (UID: \"8c3f01ce-9388-41a4-9a1d-b6b5740073e9\") " pod="kube-system/cilium-86vm8" Jul 15 11:33:51.736214 kubelet[1922]: I0715 11:33:51.736196 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8c3f01ce-9388-41a4-9a1d-b6b5740073e9-hostproc\") pod \"cilium-86vm8\" (UID: \"8c3f01ce-9388-41a4-9a1d-b6b5740073e9\") " pod="kube-system/cilium-86vm8" Jul 15 11:33:51.736214 kubelet[1922]: I0715 11:33:51.736211 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8c3f01ce-9388-41a4-9a1d-b6b5740073e9-cni-path\") pod \"cilium-86vm8\" (UID: \"8c3f01ce-9388-41a4-9a1d-b6b5740073e9\") " pod="kube-system/cilium-86vm8" Jul 15 11:33:51.736347 kubelet[1922]: I0715 11:33:51.736223 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8c3f01ce-9388-41a4-9a1d-b6b5740073e9-host-proc-sys-kernel\") pod \"cilium-86vm8\" (UID: \"8c3f01ce-9388-41a4-9a1d-b6b5740073e9\") " pod="kube-system/cilium-86vm8" Jul 15 11:33:51.736347 kubelet[1922]: I0715 11:33:51.736240 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4wqk\" (UniqueName: \"kubernetes.io/projected/2e7317e8-21f0-443c-8d20-7186c93f35d0-kube-api-access-p4wqk\") pod \"kube-proxy-sbcls\" (UID: \"2e7317e8-21f0-443c-8d20-7186c93f35d0\") " pod="kube-system/kube-proxy-sbcls" Jul 15 11:33:51.736347 kubelet[1922]: I0715 11:33:51.736258 1922 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8c3f01ce-9388-41a4-9a1d-b6b5740073e9-cilium-cgroup\") pod \"cilium-86vm8\" (UID: \"8c3f01ce-9388-41a4-9a1d-b6b5740073e9\") " pod="kube-system/cilium-86vm8" Jul 15 11:33:51.736347 kubelet[1922]: I0715 11:33:51.736285 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e12f1a0d-794c-4eee-bb98-18645a855876-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-jk879\" (UID: \"e12f1a0d-794c-4eee-bb98-18645a855876\") " pod="kube-system/cilium-operator-6c4d7847fc-jk879" Jul 15 11:33:51.837449 kubelet[1922]: I0715 11:33:51.837369 1922 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jul 15 11:33:51.964302 kubelet[1922]: E0715 11:33:51.964206 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:33:51.965509 env[1204]: time="2025-07-15T11:33:51.965272366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sbcls,Uid:2e7317e8-21f0-443c-8d20-7186c93f35d0,Namespace:kube-system,Attempt:0,}" Jul 15 11:33:51.972158 kubelet[1922]: E0715 11:33:51.972131 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:33:51.972622 env[1204]: time="2025-07-15T11:33:51.972545452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-86vm8,Uid:8c3f01ce-9388-41a4-9a1d-b6b5740073e9,Namespace:kube-system,Attempt:0,}" Jul 15 11:33:51.980582 env[1204]: time="2025-07-15T11:33:51.980509529Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:33:51.980582 env[1204]: time="2025-07-15T11:33:51.980551439Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:33:51.980582 env[1204]: time="2025-07-15T11:33:51.980565305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:33:51.980762 env[1204]: time="2025-07-15T11:33:51.980697245Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0390bcee19f126091c6fb19831f1035f03df060de5a4d83165770f6f37b5f0b6 pid=2020 runtime=io.containerd.runc.v2 Jul 15 11:33:51.989614 env[1204]: time="2025-07-15T11:33:51.989551882Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:33:51.989736 env[1204]: time="2025-07-15T11:33:51.989625331Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:33:51.989736 env[1204]: time="2025-07-15T11:33:51.989649147Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:33:51.989816 env[1204]: time="2025-07-15T11:33:51.989758624Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/464a6a87ed53425ee276e9028e81cd2368fa425194c8bd6cfb1fe0623f73cc07 pid=2040 runtime=io.containerd.runc.v2 Jul 15 11:33:51.992576 systemd[1]: Started cri-containerd-0390bcee19f126091c6fb19831f1035f03df060de5a4d83165770f6f37b5f0b6.scope. Jul 15 11:33:52.003510 systemd[1]: Started cri-containerd-464a6a87ed53425ee276e9028e81cd2368fa425194c8bd6cfb1fe0623f73cc07.scope. 
Jul 15 11:33:52.010223 kubelet[1922]: E0715 11:33:52.010187 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:33:52.012115 env[1204]: time="2025-07-15T11:33:52.010855088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-jk879,Uid:e12f1a0d-794c-4eee-bb98-18645a855876,Namespace:kube-system,Attempt:0,}" Jul 15 11:33:52.019715 kubelet[1922]: E0715 11:33:52.019686 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:33:52.027503 env[1204]: time="2025-07-15T11:33:52.027444994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-86vm8,Uid:8c3f01ce-9388-41a4-9a1d-b6b5740073e9,Namespace:kube-system,Attempt:0,} returns sandbox id \"464a6a87ed53425ee276e9028e81cd2368fa425194c8bd6cfb1fe0623f73cc07\"" Jul 15 11:33:52.029804 env[1204]: time="2025-07-15T11:33:52.029781313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sbcls,Uid:2e7317e8-21f0-443c-8d20-7186c93f35d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"0390bcee19f126091c6fb19831f1035f03df060de5a4d83165770f6f37b5f0b6\"" Jul 15 11:33:52.033504 kubelet[1922]: E0715 11:33:52.031509 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:33:52.033504 kubelet[1922]: E0715 11:33:52.031614 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:33:52.033591 env[1204]: time="2025-07-15T11:33:52.032618832Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 15 11:33:52.070768 env[1204]: time="2025-07-15T11:33:52.070729843Z" level=info msg="CreateContainer within sandbox \"0390bcee19f126091c6fb19831f1035f03df060de5a4d83165770f6f37b5f0b6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 15 11:33:52.092647 env[1204]: time="2025-07-15T11:33:52.092585381Z" level=info msg="CreateContainer within sandbox \"0390bcee19f126091c6fb19831f1035f03df060de5a4d83165770f6f37b5f0b6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0d54c90d8ae362a39f7dc03d948408043cb88947e07935dc609b258f81bf0ef1\"" Jul 15 11:33:52.093440 env[1204]: time="2025-07-15T11:33:52.093401448Z" level=info msg="StartContainer for \"0d54c90d8ae362a39f7dc03d948408043cb88947e07935dc609b258f81bf0ef1\"" Jul 15 11:33:52.094830 env[1204]: time="2025-07-15T11:33:52.094771715Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:33:52.094830 env[1204]: time="2025-07-15T11:33:52.094806952Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:33:52.094830 env[1204]: time="2025-07-15T11:33:52.094817643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:33:52.095147 env[1204]: time="2025-07-15T11:33:52.094999627Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b3991126d789e1f2c23c89f3778b13f1f3899bd031c7a42c4d1f2a2457a02b2b pid=2098 runtime=io.containerd.runc.v2 Jul 15 11:33:52.104681 systemd[1]: Started cri-containerd-b3991126d789e1f2c23c89f3778b13f1f3899bd031c7a42c4d1f2a2457a02b2b.scope. 
Jul 15 11:33:52.107801 systemd[1]: Started cri-containerd-0d54c90d8ae362a39f7dc03d948408043cb88947e07935dc609b258f81bf0ef1.scope.
Jul 15 11:33:52.135747 env[1204]: time="2025-07-15T11:33:52.134562711Z" level=info msg="StartContainer for \"0d54c90d8ae362a39f7dc03d948408043cb88947e07935dc609b258f81bf0ef1\" returns successfully"
Jul 15 11:33:52.144163 env[1204]: time="2025-07-15T11:33:52.144115929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-jk879,Uid:e12f1a0d-794c-4eee-bb98-18645a855876,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3991126d789e1f2c23c89f3778b13f1f3899bd031c7a42c4d1f2a2457a02b2b\""
Jul 15 11:33:52.144656 kubelet[1922]: E0715 11:33:52.144622 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:33:53.023756 kubelet[1922]: E0715 11:33:53.023725 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:33:53.024133 kubelet[1922]: E0715 11:33:53.023855 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:33:53.033152 kubelet[1922]: I0715 11:33:53.033088 1922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-sbcls" podStartSLOduration=2.033074045 podStartE2EDuration="2.033074045s" podCreationTimestamp="2025-07-15 11:33:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 11:33:53.032690619 +0000 UTC m=+8.106972416" watchObservedRunningTime="2025-07-15 11:33:53.033074045 +0000 UTC m=+8.107355842"
Jul 15 11:33:56.861391 kubelet[1922]: E0715 11:33:56.861349 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:33:59.582562 update_engine[1192]: I0715 11:33:59.582480 1192 update_attempter.cc:509] Updating boot flags...
Jul 15 11:33:59.617970 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4221268209.mount: Deactivated successfully.
Jul 15 11:34:00.564198 kubelet[1922]: E0715 11:34:00.564163 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:34:05.424615 env[1204]: time="2025-07-15T11:34:05.424573364Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:34:05.426506 env[1204]: time="2025-07-15T11:34:05.426460149Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:34:05.428274 env[1204]: time="2025-07-15T11:34:05.428232628Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:34:05.428778 env[1204]: time="2025-07-15T11:34:05.428742318Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Jul 15 11:34:05.429732 env[1204]: time="2025-07-15T11:34:05.429686267Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jul 15 11:34:05.433813 env[1204]: time="2025-07-15T11:34:05.433781030Z" level=info msg="CreateContainer within sandbox \"464a6a87ed53425ee276e9028e81cd2368fa425194c8bd6cfb1fe0623f73cc07\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 15 11:34:05.503729 env[1204]: time="2025-07-15T11:34:05.503629326Z" level=info msg="CreateContainer within sandbox \"464a6a87ed53425ee276e9028e81cd2368fa425194c8bd6cfb1fe0623f73cc07\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b847827ea6ca019ef32d9f30ee1b80fcd6eb7fc455bbe3b4cc331441bdc36721\""
Jul 15 11:34:05.504144 env[1204]: time="2025-07-15T11:34:05.504105584Z" level=info msg="StartContainer for \"b847827ea6ca019ef32d9f30ee1b80fcd6eb7fc455bbe3b4cc331441bdc36721\""
Jul 15 11:34:05.519651 systemd[1]: Started cri-containerd-b847827ea6ca019ef32d9f30ee1b80fcd6eb7fc455bbe3b4cc331441bdc36721.scope.
Jul 15 11:34:05.547558 systemd[1]: cri-containerd-b847827ea6ca019ef32d9f30ee1b80fcd6eb7fc455bbe3b4cc331441bdc36721.scope: Deactivated successfully.
Jul 15 11:34:05.576148 env[1204]: time="2025-07-15T11:34:05.576078832Z" level=info msg="StartContainer for \"b847827ea6ca019ef32d9f30ee1b80fcd6eb7fc455bbe3b4cc331441bdc36721\" returns successfully"
Jul 15 11:34:05.713627 env[1204]: time="2025-07-15T11:34:05.713493154Z" level=info msg="shim disconnected" id=b847827ea6ca019ef32d9f30ee1b80fcd6eb7fc455bbe3b4cc331441bdc36721
Jul 15 11:34:05.713627 env[1204]: time="2025-07-15T11:34:05.713547065Z" level=warning msg="cleaning up after shim disconnected" id=b847827ea6ca019ef32d9f30ee1b80fcd6eb7fc455bbe3b4cc331441bdc36721 namespace=k8s.io
Jul 15 11:34:05.713627 env[1204]: time="2025-07-15T11:34:05.713557354Z" level=info msg="cleaning up dead shim"
Jul 15 11:34:05.719617 env[1204]: time="2025-07-15T11:34:05.719585962Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:34:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2368 runtime=io.containerd.runc.v2\n"
Jul 15 11:34:06.048145 kubelet[1922]: E0715 11:34:06.048123 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:34:06.054160 env[1204]: time="2025-07-15T11:34:06.054119181Z" level=info msg="CreateContainer within sandbox \"464a6a87ed53425ee276e9028e81cd2368fa425194c8bd6cfb1fe0623f73cc07\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 15 11:34:06.068209 env[1204]: time="2025-07-15T11:34:06.068150907Z" level=info msg="CreateContainer within sandbox \"464a6a87ed53425ee276e9028e81cd2368fa425194c8bd6cfb1fe0623f73cc07\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"51019c878a895dc6e5ab7667f182933740be8ab8edb4eca0fe36e73d5d215054\""
Jul 15 11:34:06.068644 env[1204]: time="2025-07-15T11:34:06.068620031Z" level=info msg="StartContainer for \"51019c878a895dc6e5ab7667f182933740be8ab8edb4eca0fe36e73d5d215054\""
Jul 15 11:34:06.082696 systemd[1]: Started cri-containerd-51019c878a895dc6e5ab7667f182933740be8ab8edb4eca0fe36e73d5d215054.scope.
Jul 15 11:34:06.103148 env[1204]: time="2025-07-15T11:34:06.103093345Z" level=info msg="StartContainer for \"51019c878a895dc6e5ab7667f182933740be8ab8edb4eca0fe36e73d5d215054\" returns successfully"
Jul 15 11:34:06.111576 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 15 11:34:06.111810 systemd[1]: Stopped systemd-sysctl.service.
Jul 15 11:34:06.112190 systemd[1]: Stopping systemd-sysctl.service...
Jul 15 11:34:06.113428 systemd[1]: Starting systemd-sysctl.service...
Jul 15 11:34:06.113651 systemd[1]: cri-containerd-51019c878a895dc6e5ab7667f182933740be8ab8edb4eca0fe36e73d5d215054.scope: Deactivated successfully.
Jul 15 11:34:06.119859 systemd[1]: Finished systemd-sysctl.service.
Jul 15 11:34:06.132658 env[1204]: time="2025-07-15T11:34:06.132602460Z" level=info msg="shim disconnected" id=51019c878a895dc6e5ab7667f182933740be8ab8edb4eca0fe36e73d5d215054
Jul 15 11:34:06.132658 env[1204]: time="2025-07-15T11:34:06.132645391Z" level=warning msg="cleaning up after shim disconnected" id=51019c878a895dc6e5ab7667f182933740be8ab8edb4eca0fe36e73d5d215054 namespace=k8s.io
Jul 15 11:34:06.132658 env[1204]: time="2025-07-15T11:34:06.132654318Z" level=info msg="cleaning up dead shim"
Jul 15 11:34:06.138995 env[1204]: time="2025-07-15T11:34:06.138965473Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:34:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2428 runtime=io.containerd.runc.v2\n"
Jul 15 11:34:06.444185 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b847827ea6ca019ef32d9f30ee1b80fcd6eb7fc455bbe3b4cc331441bdc36721-rootfs.mount: Deactivated successfully.
Jul 15 11:34:07.000922 systemd[1]: Started sshd@5-10.0.0.91:22-10.0.0.1:36928.service.
Jul 15 11:34:07.029046 sshd[2441]: Accepted publickey for core from 10.0.0.1 port 36928 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo
Jul 15 11:34:07.029998 sshd[2441]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:34:07.033777 systemd-logind[1189]: New session 6 of user core.
Jul 15 11:34:07.034729 systemd[1]: Started session-6.scope.
Jul 15 11:34:07.056519 kubelet[1922]: E0715 11:34:07.056188 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:34:07.072123 env[1204]: time="2025-07-15T11:34:07.072058922Z" level=info msg="CreateContainer within sandbox \"464a6a87ed53425ee276e9028e81cd2368fa425194c8bd6cfb1fe0623f73cc07\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 15 11:34:07.096042 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4263589688.mount: Deactivated successfully.
Jul 15 11:34:07.104118 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4007571741.mount: Deactivated successfully.
Jul 15 11:34:07.108219 env[1204]: time="2025-07-15T11:34:07.108103286Z" level=info msg="CreateContainer within sandbox \"464a6a87ed53425ee276e9028e81cd2368fa425194c8bd6cfb1fe0623f73cc07\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"099a49f62d93cd37a007b5dd324d1d456933c7211410a07552d661968aa03835\""
Jul 15 11:34:07.109941 env[1204]: time="2025-07-15T11:34:07.109910950Z" level=info msg="StartContainer for \"099a49f62d93cd37a007b5dd324d1d456933c7211410a07552d661968aa03835\""
Jul 15 11:34:07.133191 systemd[1]: Started cri-containerd-099a49f62d93cd37a007b5dd324d1d456933c7211410a07552d661968aa03835.scope.
Jul 15 11:34:07.200955 systemd[1]: cri-containerd-099a49f62d93cd37a007b5dd324d1d456933c7211410a07552d661968aa03835.scope: Deactivated successfully.
Jul 15 11:34:07.205003 env[1204]: time="2025-07-15T11:34:07.204941854Z" level=info msg="StartContainer for \"099a49f62d93cd37a007b5dd324d1d456933c7211410a07552d661968aa03835\" returns successfully"
Jul 15 11:34:07.213614 sshd[2441]: pam_unix(sshd:session): session closed for user core
Jul 15 11:34:07.216327 systemd[1]: sshd@5-10.0.0.91:22-10.0.0.1:36928.service: Deactivated successfully.
Jul 15 11:34:07.217084 systemd[1]: session-6.scope: Deactivated successfully.
Jul 15 11:34:07.217768 systemd-logind[1189]: Session 6 logged out. Waiting for processes to exit.
Jul 15 11:34:07.218496 systemd-logind[1189]: Removed session 6.
Jul 15 11:34:07.229786 env[1204]: time="2025-07-15T11:34:07.229727170Z" level=info msg="shim disconnected" id=099a49f62d93cd37a007b5dd324d1d456933c7211410a07552d661968aa03835
Jul 15 11:34:07.229786 env[1204]: time="2025-07-15T11:34:07.229780190Z" level=warning msg="cleaning up after shim disconnected" id=099a49f62d93cd37a007b5dd324d1d456933c7211410a07552d661968aa03835 namespace=k8s.io
Jul 15 11:34:07.229786 env[1204]: time="2025-07-15T11:34:07.229789287Z" level=info msg="cleaning up dead shim"
Jul 15 11:34:07.235531 env[1204]: time="2025-07-15T11:34:07.235490902Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:34:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2503 runtime=io.containerd.runc.v2\n"
Jul 15 11:34:07.930480 env[1204]: time="2025-07-15T11:34:07.930429823Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:34:07.932482 env[1204]: time="2025-07-15T11:34:07.932427956Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:34:07.933981 env[1204]: time="2025-07-15T11:34:07.933938270Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:34:07.934348 env[1204]: time="2025-07-15T11:34:07.934312875Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Jul 15 11:34:07.938738 env[1204]: time="2025-07-15T11:34:07.938710164Z" level=info msg="CreateContainer within sandbox \"b3991126d789e1f2c23c89f3778b13f1f3899bd031c7a42c4d1f2a2457a02b2b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jul 15 11:34:07.949794 env[1204]: time="2025-07-15T11:34:07.949757914Z" level=info msg="CreateContainer within sandbox \"b3991126d789e1f2c23c89f3778b13f1f3899bd031c7a42c4d1f2a2457a02b2b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f317bc1c327db44f43a18a27b0675a1c7b80ef4d70c6571221192f7983b74575\""
Jul 15 11:34:07.950280 env[1204]: time="2025-07-15T11:34:07.950219424Z" level=info msg="StartContainer for \"f317bc1c327db44f43a18a27b0675a1c7b80ef4d70c6571221192f7983b74575\""
Jul 15 11:34:07.965079 systemd[1]: Started cri-containerd-f317bc1c327db44f43a18a27b0675a1c7b80ef4d70c6571221192f7983b74575.scope.
Jul 15 11:34:08.255695 env[1204]: time="2025-07-15T11:34:08.255628985Z" level=info msg="StartContainer for \"f317bc1c327db44f43a18a27b0675a1c7b80ef4d70c6571221192f7983b74575\" returns successfully"
Jul 15 11:34:08.258348 kubelet[1922]: E0715 11:34:08.258321 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:34:08.260662 kubelet[1922]: E0715 11:34:08.260626 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:34:08.266060 env[1204]: time="2025-07-15T11:34:08.266027017Z" level=info msg="CreateContainer within sandbox \"464a6a87ed53425ee276e9028e81cd2368fa425194c8bd6cfb1fe0623f73cc07\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 15 11:34:08.273771 kubelet[1922]: I0715 11:34:08.273710 1922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-jk879" podStartSLOduration=1.483660995 podStartE2EDuration="17.273697617s" podCreationTimestamp="2025-07-15 11:33:51 +0000 UTC" firstStartedPulling="2025-07-15 11:33:52.145001217 +0000 UTC m=+7.219283014" lastFinishedPulling="2025-07-15 11:34:07.935037839 +0000 UTC m=+23.009319636" observedRunningTime="2025-07-15 11:34:08.272780079 +0000 UTC m=+23.347061866" watchObservedRunningTime="2025-07-15 11:34:08.273697617 +0000 UTC m=+23.347979414"
Jul 15 11:34:08.280391 env[1204]: time="2025-07-15T11:34:08.280340971Z" level=info msg="CreateContainer within sandbox \"464a6a87ed53425ee276e9028e81cd2368fa425194c8bd6cfb1fe0623f73cc07\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c409bfed3718aa5db98d481db774138b7b1ca85000d82abfe07cc0752f7f1b8d\""
Jul 15 11:34:08.280811 env[1204]: time="2025-07-15T11:34:08.280759098Z" level=info msg="StartContainer for \"c409bfed3718aa5db98d481db774138b7b1ca85000d82abfe07cc0752f7f1b8d\""
Jul 15 11:34:08.307224 systemd[1]: Started cri-containerd-c409bfed3718aa5db98d481db774138b7b1ca85000d82abfe07cc0752f7f1b8d.scope.
Jul 15 11:34:08.333939 env[1204]: time="2025-07-15T11:34:08.333902200Z" level=info msg="StartContainer for \"c409bfed3718aa5db98d481db774138b7b1ca85000d82abfe07cc0752f7f1b8d\" returns successfully"
Jul 15 11:34:08.335683 systemd[1]: cri-containerd-c409bfed3718aa5db98d481db774138b7b1ca85000d82abfe07cc0752f7f1b8d.scope: Deactivated successfully.
Jul 15 11:34:08.359637 env[1204]: time="2025-07-15T11:34:08.359575295Z" level=info msg="shim disconnected" id=c409bfed3718aa5db98d481db774138b7b1ca85000d82abfe07cc0752f7f1b8d
Jul 15 11:34:08.359637 env[1204]: time="2025-07-15T11:34:08.359626691Z" level=warning msg="cleaning up after shim disconnected" id=c409bfed3718aa5db98d481db774138b7b1ca85000d82abfe07cc0752f7f1b8d namespace=k8s.io
Jul 15 11:34:08.359637 env[1204]: time="2025-07-15T11:34:08.359634937Z" level=info msg="cleaning up dead shim"
Jul 15 11:34:08.377311 env[1204]: time="2025-07-15T11:34:08.377263800Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:34:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2598 runtime=io.containerd.runc.v2\n"
Jul 15 11:34:08.444075 systemd[1]: run-containerd-runc-k8s.io-f317bc1c327db44f43a18a27b0675a1c7b80ef4d70c6571221192f7983b74575-runc.u6tJcV.mount: Deactivated successfully.
Jul 15 11:34:09.266927 kubelet[1922]: E0715 11:34:09.266863 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:34:09.266927 kubelet[1922]: E0715 11:34:09.266900 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:34:09.271376 env[1204]: time="2025-07-15T11:34:09.271338926Z" level=info msg="CreateContainer within sandbox \"464a6a87ed53425ee276e9028e81cd2368fa425194c8bd6cfb1fe0623f73cc07\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 15 11:34:09.293509 env[1204]: time="2025-07-15T11:34:09.293455286Z" level=info msg="CreateContainer within sandbox \"464a6a87ed53425ee276e9028e81cd2368fa425194c8bd6cfb1fe0623f73cc07\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"16b4fc0dbe296dadf13e6dddf33edcc62ea4e450075ea4759685a80418604392\""
Jul 15 11:34:09.294020 env[1204]: time="2025-07-15T11:34:09.293986024Z" level=info msg="StartContainer for \"16b4fc0dbe296dadf13e6dddf33edcc62ea4e450075ea4759685a80418604392\""
Jul 15 11:34:09.306652 systemd[1]: Started cri-containerd-16b4fc0dbe296dadf13e6dddf33edcc62ea4e450075ea4759685a80418604392.scope.
Jul 15 11:34:09.332524 env[1204]: time="2025-07-15T11:34:09.332477009Z" level=info msg="StartContainer for \"16b4fc0dbe296dadf13e6dddf33edcc62ea4e450075ea4759685a80418604392\" returns successfully"
Jul 15 11:34:09.409557 kubelet[1922]: I0715 11:34:09.409516 1922 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Jul 15 11:34:09.487490 systemd[1]: Created slice kubepods-burstable-pod45db8266_5ad4_4211_8a03_fcf82e4eefcf.slice.
Jul 15 11:34:09.494211 systemd[1]: Created slice kubepods-burstable-pod7e6dbe28_4ced_4632_b9e8_92838dea47f4.slice.
Jul 15 11:34:09.558303 kubelet[1922]: I0715 11:34:09.558170 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e6dbe28-4ced-4632-b9e8-92838dea47f4-config-volume\") pod \"coredns-674b8bbfcf-tnpml\" (UID: \"7e6dbe28-4ced-4632-b9e8-92838dea47f4\") " pod="kube-system/coredns-674b8bbfcf-tnpml"
Jul 15 11:34:09.558303 kubelet[1922]: I0715 11:34:09.558202 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/45db8266-5ad4-4211-8a03-fcf82e4eefcf-config-volume\") pod \"coredns-674b8bbfcf-svlnq\" (UID: \"45db8266-5ad4-4211-8a03-fcf82e4eefcf\") " pod="kube-system/coredns-674b8bbfcf-svlnq"
Jul 15 11:34:09.558303 kubelet[1922]: I0715 11:34:09.558234 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wwss\" (UniqueName: \"kubernetes.io/projected/7e6dbe28-4ced-4632-b9e8-92838dea47f4-kube-api-access-2wwss\") pod \"coredns-674b8bbfcf-tnpml\" (UID: \"7e6dbe28-4ced-4632-b9e8-92838dea47f4\") " pod="kube-system/coredns-674b8bbfcf-tnpml"
Jul 15 11:34:09.558303 kubelet[1922]: I0715 11:34:09.558252 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4gx9\" (UniqueName: \"kubernetes.io/projected/45db8266-5ad4-4211-8a03-fcf82e4eefcf-kube-api-access-n4gx9\") pod \"coredns-674b8bbfcf-svlnq\" (UID: \"45db8266-5ad4-4211-8a03-fcf82e4eefcf\") " pod="kube-system/coredns-674b8bbfcf-svlnq"
Jul 15 11:34:09.790960 kubelet[1922]: E0715 11:34:09.790902 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:34:09.791542 env[1204]: time="2025-07-15T11:34:09.791506386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-svlnq,Uid:45db8266-5ad4-4211-8a03-fcf82e4eefcf,Namespace:kube-system,Attempt:0,}"
Jul 15 11:34:09.798092 kubelet[1922]: E0715 11:34:09.798061 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:34:09.798566 env[1204]: time="2025-07-15T11:34:09.798522219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tnpml,Uid:7e6dbe28-4ced-4632-b9e8-92838dea47f4,Namespace:kube-system,Attempt:0,}"
Jul 15 11:34:10.269054 kubelet[1922]: E0715 11:34:10.269022 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:34:10.283447 kubelet[1922]: I0715 11:34:10.283375 1922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-86vm8" podStartSLOduration=5.886021744 podStartE2EDuration="19.283356818s" podCreationTimestamp="2025-07-15 11:33:51 +0000 UTC" firstStartedPulling="2025-07-15 11:33:52.032208935 +0000 UTC m=+7.106490732" lastFinishedPulling="2025-07-15 11:34:05.429544019 +0000 UTC m=+20.503825806" observedRunningTime="2025-07-15 11:34:10.282514023 +0000 UTC m=+25.356795820" watchObservedRunningTime="2025-07-15 11:34:10.283356818 +0000 UTC m=+25.357638615"
Jul 15 11:34:11.271176 kubelet[1922]: E0715 11:34:11.271144 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:34:12.059670 systemd-networkd[1033]: cilium_host: Link UP
Jul 15 11:34:12.059781 systemd-networkd[1033]: cilium_net: Link UP
Jul 15 11:34:12.062483 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Jul 15 11:34:12.062537 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Jul 15 11:34:12.060852 systemd-networkd[1033]: cilium_net: Gained carrier
Jul 15 11:34:12.061957 systemd-networkd[1033]: cilium_host: Gained carrier
Jul 15 11:34:12.131430 systemd-networkd[1033]: cilium_vxlan: Link UP
Jul 15 11:34:12.131439 systemd-networkd[1033]: cilium_vxlan: Gained carrier
Jul 15 11:34:12.216637 systemd[1]: Started sshd@6-10.0.0.91:22-10.0.0.1:50120.service.
Jul 15 11:34:12.246390 sshd[2861]: Accepted publickey for core from 10.0.0.1 port 50120 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo
Jul 15 11:34:12.247460 sshd[2861]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:34:12.251089 systemd-logind[1189]: New session 7 of user core.
Jul 15 11:34:12.251323 systemd[1]: Started session-7.scope.
Jul 15 11:34:12.272173 kubelet[1922]: E0715 11:34:12.272149 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:34:12.316910 kernel: NET: Registered PF_ALG protocol family
Jul 15 11:34:12.361596 sshd[2861]: pam_unix(sshd:session): session closed for user core
Jul 15 11:34:12.363649 systemd[1]: sshd@6-10.0.0.91:22-10.0.0.1:50120.service: Deactivated successfully.
Jul 15 11:34:12.364284 systemd[1]: session-7.scope: Deactivated successfully.
Jul 15 11:34:12.364875 systemd-logind[1189]: Session 7 logged out. Waiting for processes to exit.
Jul 15 11:34:12.365596 systemd-logind[1189]: Removed session 7.
Jul 15 11:34:12.674985 systemd-networkd[1033]: cilium_net: Gained IPv6LL Jul 15 11:34:12.800919 systemd-networkd[1033]: lxc_health: Link UP Jul 15 11:34:12.808919 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 15 11:34:12.809372 systemd-networkd[1033]: lxc_health: Gained carrier Jul 15 11:34:12.994028 systemd-networkd[1033]: cilium_host: Gained IPv6LL Jul 15 11:34:13.334152 systemd-networkd[1033]: lxc45c25e070edc: Link UP Jul 15 11:34:13.340967 kernel: eth0: renamed from tmpc7aad Jul 15 11:34:13.346575 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 15 11:34:13.346681 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc45c25e070edc: link becomes ready Jul 15 11:34:13.346726 systemd-networkd[1033]: lxc45c25e070edc: Gained carrier Jul 15 11:34:13.348066 systemd-networkd[1033]: lxccb96a6c9f392: Link UP Jul 15 11:34:13.359935 kernel: eth0: renamed from tmp1e8e7 Jul 15 11:34:13.367604 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxccb96a6c9f392: link becomes ready Jul 15 11:34:13.365671 systemd-networkd[1033]: lxccb96a6c9f392: Gained carrier Jul 15 11:34:13.708096 systemd-networkd[1033]: cilium_vxlan: Gained IPv6LL Jul 15 11:34:13.974584 kubelet[1922]: E0715 11:34:13.974136 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:34:14.274968 kubelet[1922]: E0715 11:34:14.274942 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:34:14.412064 systemd-networkd[1033]: lxc_health: Gained IPv6LL Jul 15 11:34:14.658090 systemd-networkd[1033]: lxccb96a6c9f392: Gained IPv6LL Jul 15 11:34:15.042044 systemd-networkd[1033]: lxc45c25e070edc: Gained IPv6LL Jul 15 11:34:15.277410 kubelet[1922]: E0715 11:34:15.277386 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:34:16.614399 env[1204]: time="2025-07-15T11:34:16.614326207Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:34:16.614399 env[1204]: time="2025-07-15T11:34:16.614374287Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:34:16.614399 env[1204]: time="2025-07-15T11:34:16.614389005Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:34:16.614791 env[1204]: time="2025-07-15T11:34:16.614601935Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c7aadbfc5c011be7dde87dfcf47fe1c17104c0a725022fd176fa8bc4519dbd49 pid=3180 runtime=io.containerd.runc.v2 Jul 15 11:34:16.620983 env[1204]: time="2025-07-15T11:34:16.620020890Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:34:16.620983 env[1204]: time="2025-07-15T11:34:16.620066144Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:34:16.620983 env[1204]: time="2025-07-15T11:34:16.620106620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:34:16.620983 env[1204]: time="2025-07-15T11:34:16.620332214Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1e8e7bee5e4b321dac1e02c24ed792fe08fc4e3f4f514f452b0aa4229b2325cf pid=3195 runtime=io.containerd.runc.v2 Jul 15 11:34:16.631024 systemd[1]: Started cri-containerd-c7aadbfc5c011be7dde87dfcf47fe1c17104c0a725022fd176fa8bc4519dbd49.scope. Jul 15 11:34:16.641558 systemd[1]: Started cri-containerd-1e8e7bee5e4b321dac1e02c24ed792fe08fc4e3f4f514f452b0aa4229b2325cf.scope. Jul 15 11:34:16.644802 systemd-resolved[1141]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 11:34:16.651965 systemd-resolved[1141]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 11:34:16.669593 env[1204]: time="2025-07-15T11:34:16.668976825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-svlnq,Uid:45db8266-5ad4-4211-8a03-fcf82e4eefcf,Namespace:kube-system,Attempt:0,} returns sandbox id \"c7aadbfc5c011be7dde87dfcf47fe1c17104c0a725022fd176fa8bc4519dbd49\"" Jul 15 11:34:16.670074 kubelet[1922]: E0715 11:34:16.670052 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:34:16.677078 env[1204]: time="2025-07-15T11:34:16.677039859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tnpml,Uid:7e6dbe28-4ced-4632-b9e8-92838dea47f4,Namespace:kube-system,Attempt:0,} returns sandbox id \"1e8e7bee5e4b321dac1e02c24ed792fe08fc4e3f4f514f452b0aa4229b2325cf\"" Jul 15 11:34:16.677612 kubelet[1922]: E0715 11:34:16.677592 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Jul 15 11:34:16.817544 env[1204]: time="2025-07-15T11:34:16.817488242Z" level=info msg="CreateContainer within sandbox \"c7aadbfc5c011be7dde87dfcf47fe1c17104c0a725022fd176fa8bc4519dbd49\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 11:34:16.888687 env[1204]: time="2025-07-15T11:34:16.888586692Z" level=info msg="CreateContainer within sandbox \"1e8e7bee5e4b321dac1e02c24ed792fe08fc4e3f4f514f452b0aa4229b2325cf\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 11:34:16.949922 env[1204]: time="2025-07-15T11:34:16.949870226Z" level=info msg="CreateContainer within sandbox \"c7aadbfc5c011be7dde87dfcf47fe1c17104c0a725022fd176fa8bc4519dbd49\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"56ee353368d93d695eed1aeeb06e0f379c3c13015114d677101ae1e0ca1e1427\"" Jul 15 11:34:16.951079 env[1204]: time="2025-07-15T11:34:16.950868412Z" level=info msg="StartContainer for \"56ee353368d93d695eed1aeeb06e0f379c3c13015114d677101ae1e0ca1e1427\"" Jul 15 11:34:16.955181 env[1204]: time="2025-07-15T11:34:16.955148586Z" level=info msg="CreateContainer within sandbox \"1e8e7bee5e4b321dac1e02c24ed792fe08fc4e3f4f514f452b0aa4229b2325cf\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"daddc15717fee1075b87b47e6e6424d5f36c8c538b7174fa3b1aec4d1cd71d76\"" Jul 15 11:34:16.955608 env[1204]: time="2025-07-15T11:34:16.955581029Z" level=info msg="StartContainer for \"daddc15717fee1075b87b47e6e6424d5f36c8c538b7174fa3b1aec4d1cd71d76\"" Jul 15 11:34:16.964607 systemd[1]: Started cri-containerd-56ee353368d93d695eed1aeeb06e0f379c3c13015114d677101ae1e0ca1e1427.scope. Jul 15 11:34:16.969594 systemd[1]: Started cri-containerd-daddc15717fee1075b87b47e6e6424d5f36c8c538b7174fa3b1aec4d1cd71d76.scope. 
Jul 15 11:34:16.992387 env[1204]: time="2025-07-15T11:34:16.992345252Z" level=info msg="StartContainer for \"56ee353368d93d695eed1aeeb06e0f379c3c13015114d677101ae1e0ca1e1427\" returns successfully"
Jul 15 11:34:16.993686 env[1204]: time="2025-07-15T11:34:16.993659191Z" level=info msg="StartContainer for \"daddc15717fee1075b87b47e6e6424d5f36c8c538b7174fa3b1aec4d1cd71d76\" returns successfully"
Jul 15 11:34:17.281779 kubelet[1922]: E0715 11:34:17.281739 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:34:17.283174 kubelet[1922]: E0715 11:34:17.283150 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:34:17.292003 kubelet[1922]: I0715 11:34:17.291960 1922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-tnpml" podStartSLOduration=26.291947645 podStartE2EDuration="26.291947645s" podCreationTimestamp="2025-07-15 11:33:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 11:34:17.29157255 +0000 UTC m=+32.365854347" watchObservedRunningTime="2025-07-15 11:34:17.291947645 +0000 UTC m=+32.366229442"
Jul 15 11:34:17.299663 kubelet[1922]: I0715 11:34:17.299597 1922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-svlnq" podStartSLOduration=26.299580587 podStartE2EDuration="26.299580587s" podCreationTimestamp="2025-07-15 11:33:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 11:34:17.299321481 +0000 UTC m=+32.373603278" watchObservedRunningTime="2025-07-15 11:34:17.299580587 +0000 UTC m=+32.373862384"
Jul 15 11:34:17.365918 systemd[1]: Started sshd@7-10.0.0.91:22-10.0.0.1:50128.service.
Jul 15 11:34:17.396988 sshd[3340]: Accepted publickey for core from 10.0.0.1 port 50128 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo
Jul 15 11:34:17.397987 sshd[3340]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:34:17.401113 systemd-logind[1189]: New session 8 of user core.
Jul 15 11:34:17.401863 systemd[1]: Started session-8.scope.
Jul 15 11:34:17.512825 sshd[3340]: pam_unix(sshd:session): session closed for user core
Jul 15 11:34:17.515183 systemd[1]: sshd@7-10.0.0.91:22-10.0.0.1:50128.service: Deactivated successfully.
Jul 15 11:34:17.515859 systemd[1]: session-8.scope: Deactivated successfully.
Jul 15 11:34:17.516597 systemd-logind[1189]: Session 8 logged out. Waiting for processes to exit.
Jul 15 11:34:17.517198 systemd-logind[1189]: Removed session 8.
Jul 15 11:34:17.621380 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2138584640.mount: Deactivated successfully.
Jul 15 11:34:18.285131 kubelet[1922]: E0715 11:34:18.285104 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:34:18.285534 kubelet[1922]: E0715 11:34:18.285258 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:34:19.286360 kubelet[1922]: E0715 11:34:19.286334 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:34:19.286722 kubelet[1922]: E0715 11:34:19.286413 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:34:22.516487 systemd[1]: Started sshd@8-10.0.0.91:22-10.0.0.1:44554.service.
Jul 15 11:34:22.544558 sshd[3358]: Accepted publickey for core from 10.0.0.1 port 44554 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo
Jul 15 11:34:22.545578 sshd[3358]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:34:22.548755 systemd-logind[1189]: New session 9 of user core.
Jul 15 11:34:22.549649 systemd[1]: Started session-9.scope.
Jul 15 11:34:22.650536 sshd[3358]: pam_unix(sshd:session): session closed for user core
Jul 15 11:34:22.652387 systemd[1]: sshd@8-10.0.0.91:22-10.0.0.1:44554.service: Deactivated successfully.
Jul 15 11:34:22.653038 systemd[1]: session-9.scope: Deactivated successfully.
Jul 15 11:34:22.653513 systemd-logind[1189]: Session 9 logged out. Waiting for processes to exit.
Jul 15 11:34:22.654232 systemd-logind[1189]: Removed session 9.
Jul 15 11:34:27.654729 systemd[1]: Started sshd@9-10.0.0.91:22-10.0.0.1:44564.service.
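The repeated kubelet `dns.go:153` errors above come from the glibc resolver's three-nameserver cap: only the first three `nameserver` entries of the node's resolv.conf are applied, and the rest are reported as omitted. A minimal sketch of that truncation, assuming a plain resolv.conf format (an illustrative helper, not kubelet's actual implementation):

```python
# Illustration of the nameserver cap behind kubelet's "Nameserver limits exceeded"
# warning: glibc honors at most 3 nameservers, so only the first three resolv.conf
# entries are applied. Hypothetical helper, not kubelet's actual code.
MAX_NAMESERVERS = 3

def cap_nameservers(resolv_conf: str, limit: int = MAX_NAMESERVERS):
    """Split resolv.conf nameservers into (applied, omitted) lists."""
    servers = [
        fields[1]
        for line in resolv_conf.splitlines()
        if (fields := line.split()) and fields[0] == "nameserver" and len(fields) > 1
    ]
    return servers[:limit], servers[limit:]

conf = """\
nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8
nameserver 8.8.4.4
"""
applied, omitted = cap_nameservers(conf)
print("applied:", " ".join(applied))  # the log's "applied nameserver line"
print("omitted:", " ".join(omitted))
```

With four configured servers, the applied line matches the `1.1.1.1 1.0.0.1 8.8.8.8` string in the log and anything beyond the third entry is dropped.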
Jul 15 11:34:27.687409 sshd[3373]: Accepted publickey for core from 10.0.0.1 port 44564 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo
Jul 15 11:34:27.688390 sshd[3373]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:34:27.691407 systemd-logind[1189]: New session 10 of user core.
Jul 15 11:34:27.692126 systemd[1]: Started session-10.scope.
Jul 15 11:34:27.804428 sshd[3373]: pam_unix(sshd:session): session closed for user core
Jul 15 11:34:27.806580 systemd[1]: sshd@9-10.0.0.91:22-10.0.0.1:44564.service: Deactivated successfully.
Jul 15 11:34:27.807250 systemd[1]: session-10.scope: Deactivated successfully.
Jul 15 11:34:27.807958 systemd-logind[1189]: Session 10 logged out. Waiting for processes to exit.
Jul 15 11:34:27.808587 systemd-logind[1189]: Removed session 10.
Jul 15 11:34:32.808264 systemd[1]: Started sshd@10-10.0.0.91:22-10.0.0.1:46178.service.
Jul 15 11:34:32.843196 sshd[3388]: Accepted publickey for core from 10.0.0.1 port 46178 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo
Jul 15 11:34:32.844420 sshd[3388]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:34:32.847719 systemd-logind[1189]: New session 11 of user core.
Jul 15 11:34:32.848464 systemd[1]: Started session-11.scope.
Jul 15 11:34:32.960518 sshd[3388]: pam_unix(sshd:session): session closed for user core
Jul 15 11:34:32.963275 systemd[1]: sshd@10-10.0.0.91:22-10.0.0.1:46178.service: Deactivated successfully.
Jul 15 11:34:32.963790 systemd[1]: session-11.scope: Deactivated successfully.
Jul 15 11:34:32.964482 systemd-logind[1189]: Session 11 logged out. Waiting for processes to exit.
Jul 15 11:34:32.965635 systemd[1]: Started sshd@11-10.0.0.91:22-10.0.0.1:46182.service.
Jul 15 11:34:32.966247 systemd-logind[1189]: Removed session 11.
Jul 15 11:34:32.995985 sshd[3402]: Accepted publickey for core from 10.0.0.1 port 46182 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo
Jul 15 11:34:32.996874 sshd[3402]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:34:33.000464 systemd-logind[1189]: New session 12 of user core.
Jul 15 11:34:33.001177 systemd[1]: Started session-12.scope.
Jul 15 11:34:33.146651 sshd[3402]: pam_unix(sshd:session): session closed for user core
Jul 15 11:34:33.150447 systemd[1]: sshd@11-10.0.0.91:22-10.0.0.1:46182.service: Deactivated successfully.
Jul 15 11:34:33.151453 systemd[1]: session-12.scope: Deactivated successfully.
Jul 15 11:34:33.152034 systemd-logind[1189]: Session 12 logged out. Waiting for processes to exit.
Jul 15 11:34:33.153653 systemd[1]: Started sshd@12-10.0.0.91:22-10.0.0.1:46192.service.
Jul 15 11:34:33.155716 systemd-logind[1189]: Removed session 12.
Jul 15 11:34:33.182357 sshd[3413]: Accepted publickey for core from 10.0.0.1 port 46192 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo
Jul 15 11:34:33.183423 sshd[3413]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:34:33.186592 systemd-logind[1189]: New session 13 of user core.
Jul 15 11:34:33.187415 systemd[1]: Started session-13.scope.
Jul 15 11:34:33.295840 sshd[3413]: pam_unix(sshd:session): session closed for user core
Jul 15 11:34:33.297938 systemd[1]: sshd@12-10.0.0.91:22-10.0.0.1:46192.service: Deactivated successfully.
Jul 15 11:34:33.298623 systemd[1]: session-13.scope: Deactivated successfully.
Jul 15 11:34:33.299292 systemd-logind[1189]: Session 13 logged out. Waiting for processes to exit.
Jul 15 11:34:33.299948 systemd-logind[1189]: Removed session 13.
Jul 15 11:34:38.300194 systemd[1]: Started sshd@13-10.0.0.91:22-10.0.0.1:46208.service.
Jul 15 11:34:38.328293 sshd[3426]: Accepted publickey for core from 10.0.0.1 port 46208 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo
Jul 15 11:34:38.329499 sshd[3426]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:34:38.332656 systemd-logind[1189]: New session 14 of user core.
Jul 15 11:34:38.333455 systemd[1]: Started session-14.scope.
Jul 15 11:34:38.436021 sshd[3426]: pam_unix(sshd:session): session closed for user core
Jul 15 11:34:38.438622 systemd[1]: sshd@13-10.0.0.91:22-10.0.0.1:46208.service: Deactivated successfully.
Jul 15 11:34:38.439475 systemd[1]: session-14.scope: Deactivated successfully.
Jul 15 11:34:38.440278 systemd-logind[1189]: Session 14 logged out. Waiting for processes to exit.
Jul 15 11:34:38.441116 systemd-logind[1189]: Removed session 14.
Jul 15 11:34:43.440422 systemd[1]: Started sshd@14-10.0.0.91:22-10.0.0.1:42352.service.
Jul 15 11:34:43.468247 sshd[3439]: Accepted publickey for core from 10.0.0.1 port 42352 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo
Jul 15 11:34:43.469402 sshd[3439]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:34:43.473006 systemd-logind[1189]: New session 15 of user core.
Jul 15 11:34:43.473978 systemd[1]: Started session-15.scope.
Jul 15 11:34:43.583535 sshd[3439]: pam_unix(sshd:session): session closed for user core
Jul 15 11:34:43.585816 systemd[1]: sshd@14-10.0.0.91:22-10.0.0.1:42352.service: Deactivated successfully.
Jul 15 11:34:43.586667 systemd[1]: session-15.scope: Deactivated successfully.
Jul 15 11:34:43.587277 systemd-logind[1189]: Session 15 logged out. Waiting for processes to exit.
Jul 15 11:34:43.588015 systemd-logind[1189]: Removed session 15.
Jul 15 11:34:48.587372 systemd[1]: Started sshd@15-10.0.0.91:22-10.0.0.1:42358.service.
Jul 15 11:34:48.614782 sshd[3455]: Accepted publickey for core from 10.0.0.1 port 42358 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo
Jul 15 11:34:48.615701 sshd[3455]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:34:48.618872 systemd-logind[1189]: New session 16 of user core.
Jul 15 11:34:48.619935 systemd[1]: Started session-16.scope.
Jul 15 11:34:48.763685 sshd[3455]: pam_unix(sshd:session): session closed for user core
Jul 15 11:34:48.766167 systemd[1]: sshd@15-10.0.0.91:22-10.0.0.1:42358.service: Deactivated successfully.
Jul 15 11:34:48.766693 systemd[1]: session-16.scope: Deactivated successfully.
Jul 15 11:34:48.767200 systemd-logind[1189]: Session 16 logged out. Waiting for processes to exit.
Jul 15 11:34:48.768529 systemd[1]: Started sshd@16-10.0.0.91:22-10.0.0.1:42362.service.
Jul 15 11:34:48.769126 systemd-logind[1189]: Removed session 16.
Jul 15 11:34:48.796875 sshd[3468]: Accepted publickey for core from 10.0.0.1 port 42362 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo
Jul 15 11:34:48.797996 sshd[3468]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:34:48.801142 systemd-logind[1189]: New session 17 of user core.
Jul 15 11:34:48.801854 systemd[1]: Started session-17.scope.
Jul 15 11:34:50.080544 sshd[3468]: pam_unix(sshd:session): session closed for user core
Jul 15 11:34:50.082871 systemd[1]: sshd@16-10.0.0.91:22-10.0.0.1:42362.service: Deactivated successfully.
Jul 15 11:34:50.083377 systemd[1]: session-17.scope: Deactivated successfully.
Jul 15 11:34:50.083817 systemd-logind[1189]: Session 17 logged out. Waiting for processes to exit.
Jul 15 11:34:50.084584 systemd[1]: Started sshd@17-10.0.0.91:22-10.0.0.1:49858.service.
Jul 15 11:34:50.085243 systemd-logind[1189]: Removed session 17.
Jul 15 11:34:50.121045 sshd[3480]: Accepted publickey for core from 10.0.0.1 port 49858 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo
Jul 15 11:34:50.122001 sshd[3480]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:34:50.124983 systemd-logind[1189]: New session 18 of user core.
Jul 15 11:34:50.125691 systemd[1]: Started session-18.scope.
Jul 15 11:34:51.009457 sshd[3480]: pam_unix(sshd:session): session closed for user core
Jul 15 11:34:51.012305 systemd[1]: sshd@17-10.0.0.91:22-10.0.0.1:49858.service: Deactivated successfully.
Jul 15 11:34:51.012910 systemd[1]: session-18.scope: Deactivated successfully.
Jul 15 11:34:51.015193 systemd[1]: Started sshd@18-10.0.0.91:22-10.0.0.1:49874.service.
Jul 15 11:34:51.015698 systemd-logind[1189]: Session 18 logged out. Waiting for processes to exit.
Jul 15 11:34:51.016468 systemd-logind[1189]: Removed session 18.
Jul 15 11:34:51.045563 sshd[3497]: Accepted publickey for core from 10.0.0.1 port 49874 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo
Jul 15 11:34:51.047033 sshd[3497]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:34:51.051961 systemd-logind[1189]: New session 19 of user core.
Jul 15 11:34:51.052792 systemd[1]: Started session-19.scope.
Jul 15 11:34:51.309032 sshd[3497]: pam_unix(sshd:session): session closed for user core
Jul 15 11:34:51.313597 systemd[1]: Started sshd@19-10.0.0.91:22-10.0.0.1:49886.service.
Jul 15 11:34:51.314243 systemd[1]: sshd@18-10.0.0.91:22-10.0.0.1:49874.service: Deactivated successfully.
Jul 15 11:34:51.315195 systemd[1]: session-19.scope: Deactivated successfully.
Jul 15 11:34:51.319764 systemd-logind[1189]: Session 19 logged out. Waiting for processes to exit.
Jul 15 11:34:51.321967 systemd-logind[1189]: Removed session 19.
Jul 15 11:34:51.348228 sshd[3508]: Accepted publickey for core from 10.0.0.1 port 49886 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo
Jul 15 11:34:51.349363 sshd[3508]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:34:51.352946 systemd-logind[1189]: New session 20 of user core.
Jul 15 11:34:51.353760 systemd[1]: Started session-20.scope.
Jul 15 11:34:51.458690 sshd[3508]: pam_unix(sshd:session): session closed for user core
Jul 15 11:34:51.460945 systemd[1]: sshd@19-10.0.0.91:22-10.0.0.1:49886.service: Deactivated successfully.
Jul 15 11:34:51.461585 systemd[1]: session-20.scope: Deactivated successfully.
Jul 15 11:34:51.462298 systemd-logind[1189]: Session 20 logged out. Waiting for processes to exit.
Jul 15 11:34:51.462955 systemd-logind[1189]: Removed session 20.
Jul 15 11:34:56.464389 systemd[1]: Started sshd@20-10.0.0.91:22-10.0.0.1:49896.service.
Jul 15 11:34:56.495000 sshd[3525]: Accepted publickey for core from 10.0.0.1 port 49896 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo
Jul 15 11:34:56.496301 sshd[3525]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:34:56.499820 systemd-logind[1189]: New session 21 of user core.
Jul 15 11:34:56.500612 systemd[1]: Started session-21.scope.
Jul 15 11:34:56.632230 sshd[3525]: pam_unix(sshd:session): session closed for user core
Jul 15 11:34:56.634603 systemd[1]: sshd@20-10.0.0.91:22-10.0.0.1:49896.service: Deactivated successfully.
Jul 15 11:34:56.635433 systemd[1]: session-21.scope: Deactivated successfully.
Jul 15 11:34:56.635993 systemd-logind[1189]: Session 21 logged out. Waiting for processes to exit.
Jul 15 11:34:56.636693 systemd-logind[1189]: Removed session 21.
Jul 15 11:34:59.004474 kubelet[1922]: E0715 11:34:59.004432 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:35:01.636864 systemd[1]: Started sshd@21-10.0.0.91:22-10.0.0.1:51260.service.
Jul 15 11:35:01.667048 sshd[3540]: Accepted publickey for core from 10.0.0.1 port 51260 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo
Jul 15 11:35:01.668115 sshd[3540]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:35:01.671568 systemd-logind[1189]: New session 22 of user core.
Jul 15 11:35:01.672408 systemd[1]: Started session-22.scope.
Jul 15 11:35:01.776327 sshd[3540]: pam_unix(sshd:session): session closed for user core
Jul 15 11:35:01.778670 systemd[1]: sshd@21-10.0.0.91:22-10.0.0.1:51260.service: Deactivated successfully.
Jul 15 11:35:01.779493 systemd[1]: session-22.scope: Deactivated successfully.
Jul 15 11:35:01.780093 systemd-logind[1189]: Session 22 logged out. Waiting for processes to exit.
Jul 15 11:35:01.780966 systemd-logind[1189]: Removed session 22.
Jul 15 11:35:06.003532 kubelet[1922]: E0715 11:35:06.003472 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:35:06.779972 systemd[1]: Started sshd@22-10.0.0.91:22-10.0.0.1:51274.service.
Jul 15 11:35:06.809209 sshd[3553]: Accepted publickey for core from 10.0.0.1 port 51274 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo
Jul 15 11:35:06.810301 sshd[3553]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:35:06.813398 systemd-logind[1189]: New session 23 of user core.
Jul 15 11:35:06.814209 systemd[1]: Started session-23.scope.
Jul 15 11:35:06.929917 sshd[3553]: pam_unix(sshd:session): session closed for user core
Jul 15 11:35:06.932461 systemd[1]: sshd@22-10.0.0.91:22-10.0.0.1:51274.service: Deactivated successfully.
Jul 15 11:35:06.932949 systemd[1]: session-23.scope: Deactivated successfully.
Jul 15 11:35:06.933460 systemd-logind[1189]: Session 23 logged out. Waiting for processes to exit.
Jul 15 11:35:06.934284 systemd[1]: Started sshd@23-10.0.0.91:22-10.0.0.1:51276.service.
Jul 15 11:35:06.935088 systemd-logind[1189]: Removed session 23.
Jul 15 11:35:06.963161 sshd[3566]: Accepted publickey for core from 10.0.0.1 port 51276 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo
Jul 15 11:35:06.964131 sshd[3566]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:35:06.967130 systemd-logind[1189]: New session 24 of user core.
Jul 15 11:35:06.967826 systemd[1]: Started session-24.scope.
Jul 15 11:35:08.348297 env[1204]: time="2025-07-15T11:35:08.348239147Z" level=info msg="StopContainer for \"f317bc1c327db44f43a18a27b0675a1c7b80ef4d70c6571221192f7983b74575\" with timeout 30 (s)"
Jul 15 11:35:08.348712 env[1204]: time="2025-07-15T11:35:08.348545373Z" level=info msg="Stop container \"f317bc1c327db44f43a18a27b0675a1c7b80ef4d70c6571221192f7983b74575\" with signal terminated"
Jul 15 11:35:08.362262 systemd[1]: run-containerd-runc-k8s.io-16b4fc0dbe296dadf13e6dddf33edcc62ea4e450075ea4759685a80418604392-runc.AMXyTI.mount: Deactivated successfully.
Jul 15 11:35:08.368414 systemd[1]: cri-containerd-f317bc1c327db44f43a18a27b0675a1c7b80ef4d70c6571221192f7983b74575.scope: Deactivated successfully.
Jul 15 11:35:08.378137 env[1204]: time="2025-07-15T11:35:08.378061470Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 15 11:35:08.383806 env[1204]: time="2025-07-15T11:35:08.383762063Z" level=info msg="StopContainer for \"16b4fc0dbe296dadf13e6dddf33edcc62ea4e450075ea4759685a80418604392\" with timeout 2 (s)"
Jul 15 11:35:08.384023 env[1204]: time="2025-07-15T11:35:08.384000199Z" level=info msg="Stop container \"16b4fc0dbe296dadf13e6dddf33edcc62ea4e450075ea4759685a80418604392\" with signal terminated"
Jul 15 11:35:08.389071 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f317bc1c327db44f43a18a27b0675a1c7b80ef4d70c6571221192f7983b74575-rootfs.mount: Deactivated successfully.
Jul 15 11:35:08.391311 systemd-networkd[1033]: lxc_health: Link DOWN
Jul 15 11:35:08.391317 systemd-networkd[1033]: lxc_health: Lost carrier
Jul 15 11:35:08.395230 env[1204]: time="2025-07-15T11:35:08.395191885Z" level=info msg="shim disconnected" id=f317bc1c327db44f43a18a27b0675a1c7b80ef4d70c6571221192f7983b74575
Jul 15 11:35:08.395316 env[1204]: time="2025-07-15T11:35:08.395230449Z" level=warning msg="cleaning up after shim disconnected" id=f317bc1c327db44f43a18a27b0675a1c7b80ef4d70c6571221192f7983b74575 namespace=k8s.io
Jul 15 11:35:08.395316 env[1204]: time="2025-07-15T11:35:08.395239035Z" level=info msg="cleaning up dead shim"
Jul 15 11:35:08.401613 env[1204]: time="2025-07-15T11:35:08.401572410Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:35:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3617 runtime=io.containerd.runc.v2\n"
Jul 15 11:35:08.403732 env[1204]: time="2025-07-15T11:35:08.403683822Z" level=info msg="StopContainer for \"f317bc1c327db44f43a18a27b0675a1c7b80ef4d70c6571221192f7983b74575\" returns successfully"
Jul 15 11:35:08.404284 env[1204]: time="2025-07-15T11:35:08.404261868Z" level=info msg="StopPodSandbox for \"b3991126d789e1f2c23c89f3778b13f1f3899bd031c7a42c4d1f2a2457a02b2b\""
Jul 15 11:35:08.404342 env[1204]: time="2025-07-15T11:35:08.404317896Z" level=info msg="Container to stop \"f317bc1c327db44f43a18a27b0675a1c7b80ef4d70c6571221192f7983b74575\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 15 11:35:08.406536 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b3991126d789e1f2c23c89f3778b13f1f3899bd031c7a42c4d1f2a2457a02b2b-shm.mount: Deactivated successfully.
Jul 15 11:35:08.419816 systemd[1]: cri-containerd-b3991126d789e1f2c23c89f3778b13f1f3899bd031c7a42c4d1f2a2457a02b2b.scope: Deactivated successfully.
Jul 15 11:35:08.423618 systemd[1]: cri-containerd-16b4fc0dbe296dadf13e6dddf33edcc62ea4e450075ea4759685a80418604392.scope: Deactivated successfully.
Jul 15 11:35:08.423958 systemd[1]: cri-containerd-16b4fc0dbe296dadf13e6dddf33edcc62ea4e450075ea4759685a80418604392.scope: Consumed 5.782s CPU time.
Jul 15 11:35:08.450714 env[1204]: time="2025-07-15T11:35:08.450510508Z" level=info msg="shim disconnected" id=16b4fc0dbe296dadf13e6dddf33edcc62ea4e450075ea4759685a80418604392
Jul 15 11:35:08.450714 env[1204]: time="2025-07-15T11:35:08.450567588Z" level=warning msg="cleaning up after shim disconnected" id=16b4fc0dbe296dadf13e6dddf33edcc62ea4e450075ea4759685a80418604392 namespace=k8s.io
Jul 15 11:35:08.450714 env[1204]: time="2025-07-15T11:35:08.450579611Z" level=info msg="cleaning up dead shim"
Jul 15 11:35:08.451229 env[1204]: time="2025-07-15T11:35:08.451175803Z" level=info msg="shim disconnected" id=b3991126d789e1f2c23c89f3778b13f1f3899bd031c7a42c4d1f2a2457a02b2b
Jul 15 11:35:08.451285 env[1204]: time="2025-07-15T11:35:08.451234024Z" level=warning msg="cleaning up after shim disconnected" id=b3991126d789e1f2c23c89f3778b13f1f3899bd031c7a42c4d1f2a2457a02b2b namespace=k8s.io
Jul 15 11:35:08.451285 env[1204]: time="2025-07-15T11:35:08.451250575Z" level=info msg="cleaning up dead shim"
Jul 15 11:35:08.458811 env[1204]: time="2025-07-15T11:35:08.458753449Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:35:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3662 runtime=io.containerd.runc.v2\n"
Jul 15 11:35:08.459610 env[1204]: time="2025-07-15T11:35:08.459570944Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:35:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3663 runtime=io.containerd.runc.v2\n"
Jul 15 11:35:08.459926 env[1204]: time="2025-07-15T11:35:08.459898311Z" level=info msg="TearDown network for sandbox \"b3991126d789e1f2c23c89f3778b13f1f3899bd031c7a42c4d1f2a2457a02b2b\" successfully"
Jul 15 11:35:08.459981 env[1204]: time="2025-07-15T11:35:08.459925703Z" level=info msg="StopPodSandbox for \"b3991126d789e1f2c23c89f3778b13f1f3899bd031c7a42c4d1f2a2457a02b2b\" returns successfully"
Jul 15 11:35:08.461306 env[1204]: time="2025-07-15T11:35:08.461271780Z" level=info msg="StopContainer for \"16b4fc0dbe296dadf13e6dddf33edcc62ea4e450075ea4759685a80418604392\" returns successfully"
Jul 15 11:35:08.462122 env[1204]: time="2025-07-15T11:35:08.462076330Z" level=info msg="StopPodSandbox for \"464a6a87ed53425ee276e9028e81cd2368fa425194c8bd6cfb1fe0623f73cc07\""
Jul 15 11:35:08.462283 env[1204]: time="2025-07-15T11:35:08.462174789Z" level=info msg="Container to stop \"b847827ea6ca019ef32d9f30ee1b80fcd6eb7fc455bbe3b4cc331441bdc36721\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 15 11:35:08.462283 env[1204]: time="2025-07-15T11:35:08.462206489Z" level=info msg="Container to stop \"c409bfed3718aa5db98d481db774138b7b1ca85000d82abfe07cc0752f7f1b8d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 15 11:35:08.462283 env[1204]: time="2025-07-15T11:35:08.462221168Z" level=info msg="Container to stop \"51019c878a895dc6e5ab7667f182933740be8ab8edb4eca0fe36e73d5d215054\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 15 11:35:08.462283 env[1204]: time="2025-07-15T11:35:08.462233080Z" level=info msg="Container to stop \"099a49f62d93cd37a007b5dd324d1d456933c7211410a07552d661968aa03835\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 15 11:35:08.462283 env[1204]: time="2025-07-15T11:35:08.462246647Z" level=info msg="Container to stop \"16b4fc0dbe296dadf13e6dddf33edcc62ea4e450075ea4759685a80418604392\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 15 11:35:08.468071 systemd[1]: cri-containerd-464a6a87ed53425ee276e9028e81cd2368fa425194c8bd6cfb1fe0623f73cc07.scope: Deactivated successfully.
Jul 15 11:35:08.510131 env[1204]: time="2025-07-15T11:35:08.510056364Z" level=info msg="shim disconnected" id=464a6a87ed53425ee276e9028e81cd2368fa425194c8bd6cfb1fe0623f73cc07
Jul 15 11:35:08.510439 env[1204]: time="2025-07-15T11:35:08.510390615Z" level=warning msg="cleaning up after shim disconnected" id=464a6a87ed53425ee276e9028e81cd2368fa425194c8bd6cfb1fe0623f73cc07 namespace=k8s.io
Jul 15 11:35:08.510439 env[1204]: time="2025-07-15T11:35:08.510413879Z" level=info msg="cleaning up dead shim"
Jul 15 11:35:08.519228 env[1204]: time="2025-07-15T11:35:08.519170703Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:35:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3705 runtime=io.containerd.runc.v2\n"
Jul 15 11:35:08.519592 env[1204]: time="2025-07-15T11:35:08.519562503Z" level=info msg="TearDown network for sandbox \"464a6a87ed53425ee276e9028e81cd2368fa425194c8bd6cfb1fe0623f73cc07\" successfully"
Jul 15 11:35:08.519646 env[1204]: time="2025-07-15T11:35:08.519592751Z" level=info msg="StopPodSandbox for \"464a6a87ed53425ee276e9028e81cd2368fa425194c8bd6cfb1fe0623f73cc07\" returns successfully"
Jul 15 11:35:08.614338 kubelet[1922]: I0715 11:35:08.613354 1922 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8c3f01ce-9388-41a4-9a1d-b6b5740073e9-cilium-run\") pod \"8c3f01ce-9388-41a4-9a1d-b6b5740073e9\" (UID: \"8c3f01ce-9388-41a4-9a1d-b6b5740073e9\") "
Jul 15 11:35:08.614338 kubelet[1922]: I0715 11:35:08.613413 1922 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8c3f01ce-9388-41a4-9a1d-b6b5740073e9-cilium-config-path\") pod \"8c3f01ce-9388-41a4-9a1d-b6b5740073e9\" (UID: \"8c3f01ce-9388-41a4-9a1d-b6b5740073e9\") "
Jul 15 11:35:08.614338 kubelet[1922]: I0715 11:35:08.613444 1922 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8c3f01ce-9388-41a4-9a1d-b6b5740073e9-hostproc\") pod \"8c3f01ce-9388-41a4-9a1d-b6b5740073e9\" (UID: \"8c3f01ce-9388-41a4-9a1d-b6b5740073e9\") "
Jul 15 11:35:08.614338 kubelet[1922]: I0715 11:35:08.613464 1922 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8c3f01ce-9388-41a4-9a1d-b6b5740073e9-host-proc-sys-net\") pod \"8c3f01ce-9388-41a4-9a1d-b6b5740073e9\" (UID: \"8c3f01ce-9388-41a4-9a1d-b6b5740073e9\") "
Jul 15 11:35:08.614338 kubelet[1922]: I0715 11:35:08.613492 1922 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e12f1a0d-794c-4eee-bb98-18645a855876-cilium-config-path\") pod \"e12f1a0d-794c-4eee-bb98-18645a855876\" (UID: \"e12f1a0d-794c-4eee-bb98-18645a855876\") "
Jul 15 11:35:08.614338 kubelet[1922]: I0715 11:35:08.613493 1922 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c3f01ce-9388-41a4-9a1d-b6b5740073e9-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8c3f01ce-9388-41a4-9a1d-b6b5740073e9" (UID: "8c3f01ce-9388-41a4-9a1d-b6b5740073e9"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 15 11:35:08.615022 kubelet[1922]: I0715 11:35:08.613549 1922 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c3f01ce-9388-41a4-9a1d-b6b5740073e9-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8c3f01ce-9388-41a4-9a1d-b6b5740073e9" (UID: "8c3f01ce-9388-41a4-9a1d-b6b5740073e9"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 15 11:35:08.615022 kubelet[1922]: I0715 11:35:08.613556 1922 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8c3f01ce-9388-41a4-9a1d-b6b5740073e9-etc-cni-netd\") pod \"8c3f01ce-9388-41a4-9a1d-b6b5740073e9\" (UID: \"8c3f01ce-9388-41a4-9a1d-b6b5740073e9\") "
Jul 15 11:35:08.615022 kubelet[1922]: I0715 11:35:08.613493 1922 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c3f01ce-9388-41a4-9a1d-b6b5740073e9-hostproc" (OuterVolumeSpecName: "hostproc") pod "8c3f01ce-9388-41a4-9a1d-b6b5740073e9" (UID: "8c3f01ce-9388-41a4-9a1d-b6b5740073e9"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 15 11:35:08.615022 kubelet[1922]: I0715 11:35:08.613582 1922 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8c3f01ce-9388-41a4-9a1d-b6b5740073e9-lib-modules\") pod \"8c3f01ce-9388-41a4-9a1d-b6b5740073e9\" (UID: \"8c3f01ce-9388-41a4-9a1d-b6b5740073e9\") "
Jul 15 11:35:08.615022 kubelet[1922]: I0715 11:35:08.613591 1922 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c3f01ce-9388-41a4-9a1d-b6b5740073e9-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8c3f01ce-9388-41a4-9a1d-b6b5740073e9" (UID: "8c3f01ce-9388-41a4-9a1d-b6b5740073e9"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 15 11:35:08.615190 kubelet[1922]: I0715 11:35:08.613610 1922 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkpps\" (UniqueName: \"kubernetes.io/projected/8c3f01ce-9388-41a4-9a1d-b6b5740073e9-kube-api-access-xkpps\") pod \"8c3f01ce-9388-41a4-9a1d-b6b5740073e9\" (UID: \"8c3f01ce-9388-41a4-9a1d-b6b5740073e9\") "
Jul 15 11:35:08.615190 kubelet[1922]: I0715 11:35:08.613632 1922 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8c3f01ce-9388-41a4-9a1d-b6b5740073e9-cni-path\") pod \"8c3f01ce-9388-41a4-9a1d-b6b5740073e9\" (UID: \"8c3f01ce-9388-41a4-9a1d-b6b5740073e9\") "
Jul 15 11:35:08.615190 kubelet[1922]: I0715 11:35:08.613653 1922 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-db84q\" (UniqueName: \"kubernetes.io/projected/e12f1a0d-794c-4eee-bb98-18645a855876-kube-api-access-db84q\") pod \"e12f1a0d-794c-4eee-bb98-18645a855876\" (UID: \"e12f1a0d-794c-4eee-bb98-18645a855876\") "
Jul 15 11:35:08.615190 kubelet[1922]: I0715 11:35:08.613673 1922 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8c3f01ce-9388-41a4-9a1d-b6b5740073e9-cilium-cgroup\") pod \"8c3f01ce-9388-41a4-9a1d-b6b5740073e9\" (UID: \"8c3f01ce-9388-41a4-9a1d-b6b5740073e9\") "
Jul 15 11:35:08.615190 kubelet[1922]: I0715 11:35:08.613612 1922 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c3f01ce-9388-41a4-9a1d-b6b5740073e9-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8c3f01ce-9388-41a4-9a1d-b6b5740073e9" (UID: "8c3f01ce-9388-41a4-9a1d-b6b5740073e9"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 15 11:35:08.615190 kubelet[1922]: I0715 11:35:08.613718 1922 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8c3f01ce-9388-41a4-9a1d-b6b5740073e9-xtables-lock\") pod \"8c3f01ce-9388-41a4-9a1d-b6b5740073e9\" (UID: \"8c3f01ce-9388-41a4-9a1d-b6b5740073e9\") "
Jul 15 11:35:08.615512 kubelet[1922]: I0715 11:35:08.613740 1922 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8c3f01ce-9388-41a4-9a1d-b6b5740073e9-bpf-maps\") pod \"8c3f01ce-9388-41a4-9a1d-b6b5740073e9\" (UID: \"8c3f01ce-9388-41a4-9a1d-b6b5740073e9\") "
Jul 15 11:35:08.615512 kubelet[1922]: I0715 11:35:08.613762 1922 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8c3f01ce-9388-41a4-9a1d-b6b5740073e9-clustermesh-secrets\") pod \"8c3f01ce-9388-41a4-9a1d-b6b5740073e9\" (UID: \"8c3f01ce-9388-41a4-9a1d-b6b5740073e9\") "
Jul 15 11:35:08.615512 kubelet[1922]: I0715 11:35:08.613780 1922 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8c3f01ce-9388-41a4-9a1d-b6b5740073e9-host-proc-sys-kernel\") pod \"8c3f01ce-9388-41a4-9a1d-b6b5740073e9\" (UID: \"8c3f01ce-9388-41a4-9a1d-b6b5740073e9\") "
Jul 15 11:35:08.615512 kubelet[1922]: I0715 11:35:08.613801 1922 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8c3f01ce-9388-41a4-9a1d-b6b5740073e9-hubble-tls\") pod \"8c3f01ce-9388-41a4-9a1d-b6b5740073e9\" (UID: \"8c3f01ce-9388-41a4-9a1d-b6b5740073e9\") "
Jul 15 11:35:08.615512 kubelet[1922]: I0715 11:35:08.613840 1922 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8c3f01ce-9388-41a4-9a1d-b6b5740073e9-cilium-run\") on node \"localhost\" DevicePath \"\""
Jul 15 11:35:08.615512 kubelet[1922]: I0715 11:35:08.613854 1922 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8c3f01ce-9388-41a4-9a1d-b6b5740073e9-hostproc\") on node \"localhost\" DevicePath \"\""
Jul 15 11:35:08.615512 kubelet[1922]: I0715 11:35:08.613865 1922 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8c3f01ce-9388-41a4-9a1d-b6b5740073e9-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Jul 15 11:35:08.615742 kubelet[1922]: I0715 11:35:08.613907 1922 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8c3f01ce-9388-41a4-9a1d-b6b5740073e9-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Jul 15 11:35:08.615742 kubelet[1922]: I0715 11:35:08.613919 1922 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8c3f01ce-9388-41a4-9a1d-b6b5740073e9-lib-modules\") on node \"localhost\" DevicePath \"\""
Jul 15 11:35:08.616120 kubelet[1922]: I0715 11:35:08.615873 1922 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c3f01ce-9388-41a4-9a1d-b6b5740073e9-cni-path" (OuterVolumeSpecName: "cni-path") pod "8c3f01ce-9388-41a4-9a1d-b6b5740073e9" (UID: "8c3f01ce-9388-41a4-9a1d-b6b5740073e9"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 15 11:35:08.616649 kubelet[1922]: I0715 11:35:08.616627 1922 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e12f1a0d-794c-4eee-bb98-18645a855876-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e12f1a0d-794c-4eee-bb98-18645a855876" (UID: "e12f1a0d-794c-4eee-bb98-18645a855876"). InnerVolumeSpecName "cilium-config-path".
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 15 11:35:08.616773 kubelet[1922]: I0715 11:35:08.616753 1922 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c3f01ce-9388-41a4-9a1d-b6b5740073e9-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8c3f01ce-9388-41a4-9a1d-b6b5740073e9" (UID: "8c3f01ce-9388-41a4-9a1d-b6b5740073e9"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 11:35:08.616900 kubelet[1922]: I0715 11:35:08.616866 1922 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c3f01ce-9388-41a4-9a1d-b6b5740073e9-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8c3f01ce-9388-41a4-9a1d-b6b5740073e9" (UID: "8c3f01ce-9388-41a4-9a1d-b6b5740073e9"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 11:35:08.617026 kubelet[1922]: I0715 11:35:08.617004 1922 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c3f01ce-9388-41a4-9a1d-b6b5740073e9-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8c3f01ce-9388-41a4-9a1d-b6b5740073e9" (UID: "8c3f01ce-9388-41a4-9a1d-b6b5740073e9"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 11:35:08.617616 kubelet[1922]: I0715 11:35:08.617575 1922 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c3f01ce-9388-41a4-9a1d-b6b5740073e9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8c3f01ce-9388-41a4-9a1d-b6b5740073e9" (UID: "8c3f01ce-9388-41a4-9a1d-b6b5740073e9"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 15 11:35:08.617677 kubelet[1922]: I0715 11:35:08.617631 1922 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c3f01ce-9388-41a4-9a1d-b6b5740073e9-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8c3f01ce-9388-41a4-9a1d-b6b5740073e9" (UID: "8c3f01ce-9388-41a4-9a1d-b6b5740073e9"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 11:35:08.618513 kubelet[1922]: I0715 11:35:08.618449 1922 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c3f01ce-9388-41a4-9a1d-b6b5740073e9-kube-api-access-xkpps" (OuterVolumeSpecName: "kube-api-access-xkpps") pod "8c3f01ce-9388-41a4-9a1d-b6b5740073e9" (UID: "8c3f01ce-9388-41a4-9a1d-b6b5740073e9"). InnerVolumeSpecName "kube-api-access-xkpps". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 15 11:35:08.619020 kubelet[1922]: I0715 11:35:08.618974 1922 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e12f1a0d-794c-4eee-bb98-18645a855876-kube-api-access-db84q" (OuterVolumeSpecName: "kube-api-access-db84q") pod "e12f1a0d-794c-4eee-bb98-18645a855876" (UID: "e12f1a0d-794c-4eee-bb98-18645a855876"). InnerVolumeSpecName "kube-api-access-db84q". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 15 11:35:08.619571 kubelet[1922]: I0715 11:35:08.619537 1922 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c3f01ce-9388-41a4-9a1d-b6b5740073e9-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8c3f01ce-9388-41a4-9a1d-b6b5740073e9" (UID: "8c3f01ce-9388-41a4-9a1d-b6b5740073e9"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 15 11:35:08.620376 kubelet[1922]: I0715 11:35:08.620350 1922 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c3f01ce-9388-41a4-9a1d-b6b5740073e9-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8c3f01ce-9388-41a4-9a1d-b6b5740073e9" (UID: "8c3f01ce-9388-41a4-9a1d-b6b5740073e9"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 15 11:35:08.714665 kubelet[1922]: I0715 11:35:08.714593 1922 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e12f1a0d-794c-4eee-bb98-18645a855876-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 15 11:35:08.714665 kubelet[1922]: I0715 11:35:08.714646 1922 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xkpps\" (UniqueName: \"kubernetes.io/projected/8c3f01ce-9388-41a4-9a1d-b6b5740073e9-kube-api-access-xkpps\") on node \"localhost\" DevicePath \"\"" Jul 15 11:35:08.714665 kubelet[1922]: I0715 11:35:08.714660 1922 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8c3f01ce-9388-41a4-9a1d-b6b5740073e9-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 15 11:35:08.714665 kubelet[1922]: I0715 11:35:08.714673 1922 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-db84q\" (UniqueName: \"kubernetes.io/projected/e12f1a0d-794c-4eee-bb98-18645a855876-kube-api-access-db84q\") on node \"localhost\" DevicePath \"\"" Jul 15 11:35:08.714665 kubelet[1922]: I0715 11:35:08.714684 1922 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8c3f01ce-9388-41a4-9a1d-b6b5740073e9-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 15 11:35:08.715069 kubelet[1922]: I0715 11:35:08.714695 1922 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" 
(UniqueName: \"kubernetes.io/host-path/8c3f01ce-9388-41a4-9a1d-b6b5740073e9-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 15 11:35:08.715069 kubelet[1922]: I0715 11:35:08.714706 1922 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8c3f01ce-9388-41a4-9a1d-b6b5740073e9-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 15 11:35:08.715069 kubelet[1922]: I0715 11:35:08.714717 1922 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8c3f01ce-9388-41a4-9a1d-b6b5740073e9-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 15 11:35:08.715069 kubelet[1922]: I0715 11:35:08.714726 1922 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8c3f01ce-9388-41a4-9a1d-b6b5740073e9-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 15 11:35:08.715069 kubelet[1922]: I0715 11:35:08.714737 1922 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8c3f01ce-9388-41a4-9a1d-b6b5740073e9-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 15 11:35:08.715069 kubelet[1922]: I0715 11:35:08.714746 1922 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8c3f01ce-9388-41a4-9a1d-b6b5740073e9-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 15 11:35:09.009738 systemd[1]: Removed slice kubepods-besteffort-pode12f1a0d_794c_4eee_bb98_18645a855876.slice. Jul 15 11:35:09.010987 systemd[1]: Removed slice kubepods-burstable-pod8c3f01ce_9388_41a4_9a1d_b6b5740073e9.slice. Jul 15 11:35:09.011072 systemd[1]: kubepods-burstable-pod8c3f01ce_9388_41a4_9a1d_b6b5740073e9.slice: Consumed 5.867s CPU time. 
Jul 15 11:35:09.357036 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-16b4fc0dbe296dadf13e6dddf33edcc62ea4e450075ea4759685a80418604392-rootfs.mount: Deactivated successfully. Jul 15 11:35:09.357137 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b3991126d789e1f2c23c89f3778b13f1f3899bd031c7a42c4d1f2a2457a02b2b-rootfs.mount: Deactivated successfully. Jul 15 11:35:09.357194 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-464a6a87ed53425ee276e9028e81cd2368fa425194c8bd6cfb1fe0623f73cc07-rootfs.mount: Deactivated successfully. Jul 15 11:35:09.357244 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-464a6a87ed53425ee276e9028e81cd2368fa425194c8bd6cfb1fe0623f73cc07-shm.mount: Deactivated successfully. Jul 15 11:35:09.357303 systemd[1]: var-lib-kubelet-pods-8c3f01ce\x2d9388\x2d41a4\x2d9a1d\x2db6b5740073e9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxkpps.mount: Deactivated successfully. Jul 15 11:35:09.357362 systemd[1]: var-lib-kubelet-pods-e12f1a0d\x2d794c\x2d4eee\x2dbb98\x2d18645a855876-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddb84q.mount: Deactivated successfully. Jul 15 11:35:09.357416 systemd[1]: var-lib-kubelet-pods-8c3f01ce\x2d9388\x2d41a4\x2d9a1d\x2db6b5740073e9-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 15 11:35:09.357465 systemd[1]: var-lib-kubelet-pods-8c3f01ce\x2d9388\x2d41a4\x2d9a1d\x2db6b5740073e9-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jul 15 11:35:09.374536 kubelet[1922]: I0715 11:35:09.374495 1922 scope.go:117] "RemoveContainer" containerID="f317bc1c327db44f43a18a27b0675a1c7b80ef4d70c6571221192f7983b74575" Jul 15 11:35:09.376662 env[1204]: time="2025-07-15T11:35:09.376395688Z" level=info msg="RemoveContainer for \"f317bc1c327db44f43a18a27b0675a1c7b80ef4d70c6571221192f7983b74575\"" Jul 15 11:35:09.380769 env[1204]: time="2025-07-15T11:35:09.380720964Z" level=info msg="RemoveContainer for \"f317bc1c327db44f43a18a27b0675a1c7b80ef4d70c6571221192f7983b74575\" returns successfully" Jul 15 11:35:09.381187 kubelet[1922]: I0715 11:35:09.381133 1922 scope.go:117] "RemoveContainer" containerID="f317bc1c327db44f43a18a27b0675a1c7b80ef4d70c6571221192f7983b74575" Jul 15 11:35:09.381542 env[1204]: time="2025-07-15T11:35:09.381442474Z" level=error msg="ContainerStatus for \"f317bc1c327db44f43a18a27b0675a1c7b80ef4d70c6571221192f7983b74575\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f317bc1c327db44f43a18a27b0675a1c7b80ef4d70c6571221192f7983b74575\": not found" Jul 15 11:35:09.381738 kubelet[1922]: E0715 11:35:09.381705 1922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f317bc1c327db44f43a18a27b0675a1c7b80ef4d70c6571221192f7983b74575\": not found" containerID="f317bc1c327db44f43a18a27b0675a1c7b80ef4d70c6571221192f7983b74575" Jul 15 11:35:09.381786 kubelet[1922]: I0715 11:35:09.381748 1922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f317bc1c327db44f43a18a27b0675a1c7b80ef4d70c6571221192f7983b74575"} err="failed to get container status \"f317bc1c327db44f43a18a27b0675a1c7b80ef4d70c6571221192f7983b74575\": rpc error: code = NotFound desc = an error occurred when try to find container \"f317bc1c327db44f43a18a27b0675a1c7b80ef4d70c6571221192f7983b74575\": not found" Jul 15 11:35:09.381812 kubelet[1922]: I0715 
11:35:09.381786 1922 scope.go:117] "RemoveContainer" containerID="16b4fc0dbe296dadf13e6dddf33edcc62ea4e450075ea4759685a80418604392" Jul 15 11:35:09.383032 env[1204]: time="2025-07-15T11:35:09.383004733Z" level=info msg="RemoveContainer for \"16b4fc0dbe296dadf13e6dddf33edcc62ea4e450075ea4759685a80418604392\"" Jul 15 11:35:09.386976 env[1204]: time="2025-07-15T11:35:09.386440097Z" level=info msg="RemoveContainer for \"16b4fc0dbe296dadf13e6dddf33edcc62ea4e450075ea4759685a80418604392\" returns successfully" Jul 15 11:35:09.387065 kubelet[1922]: I0715 11:35:09.386637 1922 scope.go:117] "RemoveContainer" containerID="c409bfed3718aa5db98d481db774138b7b1ca85000d82abfe07cc0752f7f1b8d" Jul 15 11:35:09.387758 env[1204]: time="2025-07-15T11:35:09.387715347Z" level=info msg="RemoveContainer for \"c409bfed3718aa5db98d481db774138b7b1ca85000d82abfe07cc0752f7f1b8d\"" Jul 15 11:35:09.391719 env[1204]: time="2025-07-15T11:35:09.391652701Z" level=info msg="RemoveContainer for \"c409bfed3718aa5db98d481db774138b7b1ca85000d82abfe07cc0752f7f1b8d\" returns successfully" Jul 15 11:35:09.395137 kubelet[1922]: I0715 11:35:09.395103 1922 scope.go:117] "RemoveContainer" containerID="099a49f62d93cd37a007b5dd324d1d456933c7211410a07552d661968aa03835" Jul 15 11:35:09.399216 env[1204]: time="2025-07-15T11:35:09.399166067Z" level=info msg="RemoveContainer for \"099a49f62d93cd37a007b5dd324d1d456933c7211410a07552d661968aa03835\"" Jul 15 11:35:09.584502 env[1204]: time="2025-07-15T11:35:09.584450202Z" level=info msg="RemoveContainer for \"099a49f62d93cd37a007b5dd324d1d456933c7211410a07552d661968aa03835\" returns successfully" Jul 15 11:35:09.584784 kubelet[1922]: I0715 11:35:09.584751 1922 scope.go:117] "RemoveContainer" containerID="51019c878a895dc6e5ab7667f182933740be8ab8edb4eca0fe36e73d5d215054" Jul 15 11:35:09.586009 env[1204]: time="2025-07-15T11:35:09.585704693Z" level=info msg="RemoveContainer for \"51019c878a895dc6e5ab7667f182933740be8ab8edb4eca0fe36e73d5d215054\"" Jul 15 11:35:09.591574 
env[1204]: time="2025-07-15T11:35:09.591506784Z" level=info msg="RemoveContainer for \"51019c878a895dc6e5ab7667f182933740be8ab8edb4eca0fe36e73d5d215054\" returns successfully" Jul 15 11:35:09.591829 kubelet[1922]: I0715 11:35:09.591775 1922 scope.go:117] "RemoveContainer" containerID="b847827ea6ca019ef32d9f30ee1b80fcd6eb7fc455bbe3b4cc331441bdc36721" Jul 15 11:35:09.593108 env[1204]: time="2025-07-15T11:35:09.593084263Z" level=info msg="RemoveContainer for \"b847827ea6ca019ef32d9f30ee1b80fcd6eb7fc455bbe3b4cc331441bdc36721\"" Jul 15 11:35:09.604705 env[1204]: time="2025-07-15T11:35:09.604625686Z" level=info msg="RemoveContainer for \"b847827ea6ca019ef32d9f30ee1b80fcd6eb7fc455bbe3b4cc331441bdc36721\" returns successfully" Jul 15 11:35:09.605174 kubelet[1922]: I0715 11:35:09.605046 1922 scope.go:117] "RemoveContainer" containerID="16b4fc0dbe296dadf13e6dddf33edcc62ea4e450075ea4759685a80418604392" Jul 15 11:35:09.605512 env[1204]: time="2025-07-15T11:35:09.605411870Z" level=error msg="ContainerStatus for \"16b4fc0dbe296dadf13e6dddf33edcc62ea4e450075ea4759685a80418604392\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"16b4fc0dbe296dadf13e6dddf33edcc62ea4e450075ea4759685a80418604392\": not found" Jul 15 11:35:09.605722 kubelet[1922]: E0715 11:35:09.605683 1922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"16b4fc0dbe296dadf13e6dddf33edcc62ea4e450075ea4759685a80418604392\": not found" containerID="16b4fc0dbe296dadf13e6dddf33edcc62ea4e450075ea4759685a80418604392" Jul 15 11:35:09.605811 kubelet[1922]: I0715 11:35:09.605729 1922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"16b4fc0dbe296dadf13e6dddf33edcc62ea4e450075ea4759685a80418604392"} err="failed to get container status \"16b4fc0dbe296dadf13e6dddf33edcc62ea4e450075ea4759685a80418604392\": rpc error: code = NotFound desc = an 
error occurred when try to find container \"16b4fc0dbe296dadf13e6dddf33edcc62ea4e450075ea4759685a80418604392\": not found" Jul 15 11:35:09.605811 kubelet[1922]: I0715 11:35:09.605753 1922 scope.go:117] "RemoveContainer" containerID="c409bfed3718aa5db98d481db774138b7b1ca85000d82abfe07cc0752f7f1b8d" Jul 15 11:35:09.606040 env[1204]: time="2025-07-15T11:35:09.605960891Z" level=error msg="ContainerStatus for \"c409bfed3718aa5db98d481db774138b7b1ca85000d82abfe07cc0752f7f1b8d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c409bfed3718aa5db98d481db774138b7b1ca85000d82abfe07cc0752f7f1b8d\": not found" Jul 15 11:35:09.606291 kubelet[1922]: E0715 11:35:09.606234 1922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c409bfed3718aa5db98d481db774138b7b1ca85000d82abfe07cc0752f7f1b8d\": not found" containerID="c409bfed3718aa5db98d481db774138b7b1ca85000d82abfe07cc0752f7f1b8d" Jul 15 11:35:09.606368 kubelet[1922]: I0715 11:35:09.606306 1922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c409bfed3718aa5db98d481db774138b7b1ca85000d82abfe07cc0752f7f1b8d"} err="failed to get container status \"c409bfed3718aa5db98d481db774138b7b1ca85000d82abfe07cc0752f7f1b8d\": rpc error: code = NotFound desc = an error occurred when try to find container \"c409bfed3718aa5db98d481db774138b7b1ca85000d82abfe07cc0752f7f1b8d\": not found" Jul 15 11:35:09.606368 kubelet[1922]: I0715 11:35:09.606338 1922 scope.go:117] "RemoveContainer" containerID="099a49f62d93cd37a007b5dd324d1d456933c7211410a07552d661968aa03835" Jul 15 11:35:09.606746 env[1204]: time="2025-07-15T11:35:09.606653877Z" level=error msg="ContainerStatus for \"099a49f62d93cd37a007b5dd324d1d456933c7211410a07552d661968aa03835\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"099a49f62d93cd37a007b5dd324d1d456933c7211410a07552d661968aa03835\": not found" Jul 15 11:35:09.606861 kubelet[1922]: E0715 11:35:09.606836 1922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"099a49f62d93cd37a007b5dd324d1d456933c7211410a07552d661968aa03835\": not found" containerID="099a49f62d93cd37a007b5dd324d1d456933c7211410a07552d661968aa03835" Jul 15 11:35:09.606928 kubelet[1922]: I0715 11:35:09.606859 1922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"099a49f62d93cd37a007b5dd324d1d456933c7211410a07552d661968aa03835"} err="failed to get container status \"099a49f62d93cd37a007b5dd324d1d456933c7211410a07552d661968aa03835\": rpc error: code = NotFound desc = an error occurred when try to find container \"099a49f62d93cd37a007b5dd324d1d456933c7211410a07552d661968aa03835\": not found" Jul 15 11:35:09.606928 kubelet[1922]: I0715 11:35:09.606873 1922 scope.go:117] "RemoveContainer" containerID="51019c878a895dc6e5ab7667f182933740be8ab8edb4eca0fe36e73d5d215054" Jul 15 11:35:09.607132 env[1204]: time="2025-07-15T11:35:09.607078139Z" level=error msg="ContainerStatus for \"51019c878a895dc6e5ab7667f182933740be8ab8edb4eca0fe36e73d5d215054\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"51019c878a895dc6e5ab7667f182933740be8ab8edb4eca0fe36e73d5d215054\": not found" Jul 15 11:35:09.607295 kubelet[1922]: E0715 11:35:09.607218 1922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"51019c878a895dc6e5ab7667f182933740be8ab8edb4eca0fe36e73d5d215054\": not found" containerID="51019c878a895dc6e5ab7667f182933740be8ab8edb4eca0fe36e73d5d215054" Jul 15 11:35:09.607295 kubelet[1922]: I0715 11:35:09.607243 1922 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"51019c878a895dc6e5ab7667f182933740be8ab8edb4eca0fe36e73d5d215054"} err="failed to get container status \"51019c878a895dc6e5ab7667f182933740be8ab8edb4eca0fe36e73d5d215054\": rpc error: code = NotFound desc = an error occurred when try to find container \"51019c878a895dc6e5ab7667f182933740be8ab8edb4eca0fe36e73d5d215054\": not found" Jul 15 11:35:09.607295 kubelet[1922]: I0715 11:35:09.607258 1922 scope.go:117] "RemoveContainer" containerID="b847827ea6ca019ef32d9f30ee1b80fcd6eb7fc455bbe3b4cc331441bdc36721" Jul 15 11:35:09.607496 env[1204]: time="2025-07-15T11:35:09.607436975Z" level=error msg="ContainerStatus for \"b847827ea6ca019ef32d9f30ee1b80fcd6eb7fc455bbe3b4cc331441bdc36721\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b847827ea6ca019ef32d9f30ee1b80fcd6eb7fc455bbe3b4cc331441bdc36721\": not found" Jul 15 11:35:09.607582 kubelet[1922]: E0715 11:35:09.607552 1922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b847827ea6ca019ef32d9f30ee1b80fcd6eb7fc455bbe3b4cc331441bdc36721\": not found" containerID="b847827ea6ca019ef32d9f30ee1b80fcd6eb7fc455bbe3b4cc331441bdc36721" Jul 15 11:35:09.607582 kubelet[1922]: I0715 11:35:09.607570 1922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b847827ea6ca019ef32d9f30ee1b80fcd6eb7fc455bbe3b4cc331441bdc36721"} err="failed to get container status \"b847827ea6ca019ef32d9f30ee1b80fcd6eb7fc455bbe3b4cc331441bdc36721\": rpc error: code = NotFound desc = an error occurred when try to find container \"b847827ea6ca019ef32d9f30ee1b80fcd6eb7fc455bbe3b4cc331441bdc36721\": not found" Jul 15 11:35:10.040460 kubelet[1922]: E0715 11:35:10.040401 1922 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin 
not initialized" Jul 15 11:35:10.382169 sshd[3566]: pam_unix(sshd:session): session closed for user core Jul 15 11:35:10.385157 systemd[1]: sshd@23-10.0.0.91:22-10.0.0.1:51276.service: Deactivated successfully. Jul 15 11:35:10.385686 systemd[1]: session-24.scope: Deactivated successfully. Jul 15 11:35:10.386224 systemd-logind[1189]: Session 24 logged out. Waiting for processes to exit. Jul 15 11:35:10.387183 systemd[1]: Started sshd@24-10.0.0.91:22-10.0.0.1:55044.service. Jul 15 11:35:10.387773 systemd-logind[1189]: Removed session 24. Jul 15 11:35:10.418045 sshd[3723]: Accepted publickey for core from 10.0.0.1 port 55044 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:35:10.419035 sshd[3723]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:35:10.423265 systemd-logind[1189]: New session 25 of user core. Jul 15 11:35:10.424110 systemd[1]: Started session-25.scope. Jul 15 11:35:11.005315 kubelet[1922]: I0715 11:35:11.005279 1922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c3f01ce-9388-41a4-9a1d-b6b5740073e9" path="/var/lib/kubelet/pods/8c3f01ce-9388-41a4-9a1d-b6b5740073e9/volumes" Jul 15 11:35:11.005994 kubelet[1922]: I0715 11:35:11.005957 1922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e12f1a0d-794c-4eee-bb98-18645a855876" path="/var/lib/kubelet/pods/e12f1a0d-794c-4eee-bb98-18645a855876/volumes" Jul 15 11:35:11.580037 sshd[3723]: pam_unix(sshd:session): session closed for user core Jul 15 11:35:11.582767 systemd[1]: sshd@24-10.0.0.91:22-10.0.0.1:55044.service: Deactivated successfully. Jul 15 11:35:11.583316 systemd[1]: session-25.scope: Deactivated successfully. Jul 15 11:35:11.585273 systemd[1]: Started sshd@25-10.0.0.91:22-10.0.0.1:55046.service. Jul 15 11:35:11.587664 systemd-logind[1189]: Session 25 logged out. Waiting for processes to exit. Jul 15 11:35:11.599105 systemd-logind[1189]: Removed session 25. 
Jul 15 11:35:11.633563 sshd[3735]: Accepted publickey for core from 10.0.0.1 port 55046 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:35:11.634775 sshd[3735]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:35:11.639256 systemd[1]: Started session-26.scope. Jul 15 11:35:11.640525 systemd-logind[1189]: New session 26 of user core. Jul 15 11:35:11.683866 systemd[1]: Created slice kubepods-burstable-pod6cddc048_f561_49ad_8882_3cf9a64effc1.slice. Jul 15 11:35:11.775376 sshd[3735]: pam_unix(sshd:session): session closed for user core Jul 15 11:35:11.778357 systemd[1]: sshd@25-10.0.0.91:22-10.0.0.1:55046.service: Deactivated successfully. Jul 15 11:35:11.778916 systemd[1]: session-26.scope: Deactivated successfully. Jul 15 11:35:11.779921 systemd-logind[1189]: Session 26 logged out. Waiting for processes to exit. Jul 15 11:35:11.781423 systemd[1]: Started sshd@26-10.0.0.91:22-10.0.0.1:55056.service. Jul 15 11:35:11.782125 systemd-logind[1189]: Removed session 26. Jul 15 11:35:11.787725 kubelet[1922]: E0715 11:35:11.787401 1922 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-fsxkd lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-qxkgs" podUID="6cddc048-f561-49ad-8882-3cf9a64effc1" Jul 15 11:35:11.810947 sshd[3750]: Accepted publickey for core from 10.0.0.1 port 55056 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:35:11.812046 sshd[3750]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:35:11.816050 systemd-logind[1189]: New session 27 of user core. Jul 15 11:35:11.816915 systemd[1]: Started session-27.scope. 
Jul 15 11:35:11.830736 kubelet[1922]: I0715 11:35:11.830631 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6cddc048-f561-49ad-8882-3cf9a64effc1-bpf-maps\") pod \"cilium-qxkgs\" (UID: \"6cddc048-f561-49ad-8882-3cf9a64effc1\") " pod="kube-system/cilium-qxkgs" Jul 15 11:35:11.830736 kubelet[1922]: I0715 11:35:11.830666 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6cddc048-f561-49ad-8882-3cf9a64effc1-cilium-cgroup\") pod \"cilium-qxkgs\" (UID: \"6cddc048-f561-49ad-8882-3cf9a64effc1\") " pod="kube-system/cilium-qxkgs" Jul 15 11:35:11.830736 kubelet[1922]: I0715 11:35:11.830686 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6cddc048-f561-49ad-8882-3cf9a64effc1-cilium-run\") pod \"cilium-qxkgs\" (UID: \"6cddc048-f561-49ad-8882-3cf9a64effc1\") " pod="kube-system/cilium-qxkgs" Jul 15 11:35:11.830736 kubelet[1922]: I0715 11:35:11.830700 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6cddc048-f561-49ad-8882-3cf9a64effc1-hostproc\") pod \"cilium-qxkgs\" (UID: \"6cddc048-f561-49ad-8882-3cf9a64effc1\") " pod="kube-system/cilium-qxkgs" Jul 15 11:35:11.830736 kubelet[1922]: I0715 11:35:11.830717 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6cddc048-f561-49ad-8882-3cf9a64effc1-cilium-config-path\") pod \"cilium-qxkgs\" (UID: \"6cddc048-f561-49ad-8882-3cf9a64effc1\") " pod="kube-system/cilium-qxkgs" Jul 15 11:35:11.830969 kubelet[1922]: I0715 11:35:11.830760 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6cddc048-f561-49ad-8882-3cf9a64effc1-host-proc-sys-net\") pod \"cilium-qxkgs\" (UID: \"6cddc048-f561-49ad-8882-3cf9a64effc1\") " pod="kube-system/cilium-qxkgs" Jul 15 11:35:11.830969 kubelet[1922]: I0715 11:35:11.830788 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6cddc048-f561-49ad-8882-3cf9a64effc1-etc-cni-netd\") pod \"cilium-qxkgs\" (UID: \"6cddc048-f561-49ad-8882-3cf9a64effc1\") " pod="kube-system/cilium-qxkgs" Jul 15 11:35:11.830969 kubelet[1922]: I0715 11:35:11.830801 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6cddc048-f561-49ad-8882-3cf9a64effc1-xtables-lock\") pod \"cilium-qxkgs\" (UID: \"6cddc048-f561-49ad-8882-3cf9a64effc1\") " pod="kube-system/cilium-qxkgs" Jul 15 11:35:11.830969 kubelet[1922]: I0715 11:35:11.830813 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6cddc048-f561-49ad-8882-3cf9a64effc1-clustermesh-secrets\") pod \"cilium-qxkgs\" (UID: \"6cddc048-f561-49ad-8882-3cf9a64effc1\") " pod="kube-system/cilium-qxkgs" Jul 15 11:35:11.830969 kubelet[1922]: I0715 11:35:11.830826 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6cddc048-f561-49ad-8882-3cf9a64effc1-host-proc-sys-kernel\") pod \"cilium-qxkgs\" (UID: \"6cddc048-f561-49ad-8882-3cf9a64effc1\") " pod="kube-system/cilium-qxkgs" Jul 15 11:35:11.831086 kubelet[1922]: I0715 11:35:11.830838 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsxkd\" (UniqueName: 
\"kubernetes.io/projected/6cddc048-f561-49ad-8882-3cf9a64effc1-kube-api-access-fsxkd\") pod \"cilium-qxkgs\" (UID: \"6cddc048-f561-49ad-8882-3cf9a64effc1\") " pod="kube-system/cilium-qxkgs" Jul 15 11:35:11.831086 kubelet[1922]: I0715 11:35:11.830852 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6cddc048-f561-49ad-8882-3cf9a64effc1-lib-modules\") pod \"cilium-qxkgs\" (UID: \"6cddc048-f561-49ad-8882-3cf9a64effc1\") " pod="kube-system/cilium-qxkgs" Jul 15 11:35:11.831086 kubelet[1922]: I0715 11:35:11.830867 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6cddc048-f561-49ad-8882-3cf9a64effc1-cilium-ipsec-secrets\") pod \"cilium-qxkgs\" (UID: \"6cddc048-f561-49ad-8882-3cf9a64effc1\") " pod="kube-system/cilium-qxkgs" Jul 15 11:35:11.831086 kubelet[1922]: I0715 11:35:11.830954 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6cddc048-f561-49ad-8882-3cf9a64effc1-cni-path\") pod \"cilium-qxkgs\" (UID: \"6cddc048-f561-49ad-8882-3cf9a64effc1\") " pod="kube-system/cilium-qxkgs" Jul 15 11:35:11.831086 kubelet[1922]: I0715 11:35:11.830999 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6cddc048-f561-49ad-8882-3cf9a64effc1-hubble-tls\") pod \"cilium-qxkgs\" (UID: \"6cddc048-f561-49ad-8882-3cf9a64effc1\") " pod="kube-system/cilium-qxkgs" Jul 15 11:35:12.535076 kubelet[1922]: I0715 11:35:12.535012 1922 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6cddc048-f561-49ad-8882-3cf9a64effc1-cilium-ipsec-secrets\") pod \"6cddc048-f561-49ad-8882-3cf9a64effc1\" (UID: 
\"6cddc048-f561-49ad-8882-3cf9a64effc1\") " Jul 15 11:35:12.535076 kubelet[1922]: I0715 11:35:12.535064 1922 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6cddc048-f561-49ad-8882-3cf9a64effc1-bpf-maps\") pod \"6cddc048-f561-49ad-8882-3cf9a64effc1\" (UID: \"6cddc048-f561-49ad-8882-3cf9a64effc1\") " Jul 15 11:35:12.535076 kubelet[1922]: I0715 11:35:12.535078 1922 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6cddc048-f561-49ad-8882-3cf9a64effc1-cni-path\") pod \"6cddc048-f561-49ad-8882-3cf9a64effc1\" (UID: \"6cddc048-f561-49ad-8882-3cf9a64effc1\") " Jul 15 11:35:12.535076 kubelet[1922]: I0715 11:35:12.535091 1922 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6cddc048-f561-49ad-8882-3cf9a64effc1-cilium-run\") pod \"6cddc048-f561-49ad-8882-3cf9a64effc1\" (UID: \"6cddc048-f561-49ad-8882-3cf9a64effc1\") " Jul 15 11:35:12.535353 kubelet[1922]: I0715 11:35:12.535102 1922 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6cddc048-f561-49ad-8882-3cf9a64effc1-etc-cni-netd\") pod \"6cddc048-f561-49ad-8882-3cf9a64effc1\" (UID: \"6cddc048-f561-49ad-8882-3cf9a64effc1\") " Jul 15 11:35:12.535353 kubelet[1922]: I0715 11:35:12.535115 1922 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6cddc048-f561-49ad-8882-3cf9a64effc1-host-proc-sys-kernel\") pod \"6cddc048-f561-49ad-8882-3cf9a64effc1\" (UID: \"6cddc048-f561-49ad-8882-3cf9a64effc1\") " Jul 15 11:35:12.535353 kubelet[1922]: I0715 11:35:12.535152 1922 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6cddc048-f561-49ad-8882-3cf9a64effc1-bpf-maps" 
(OuterVolumeSpecName: "bpf-maps") pod "6cddc048-f561-49ad-8882-3cf9a64effc1" (UID: "6cddc048-f561-49ad-8882-3cf9a64effc1"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 11:35:12.535353 kubelet[1922]: I0715 11:35:12.535151 1922 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6cddc048-f561-49ad-8882-3cf9a64effc1-xtables-lock\") pod \"6cddc048-f561-49ad-8882-3cf9a64effc1\" (UID: \"6cddc048-f561-49ad-8882-3cf9a64effc1\") " Jul 15 11:35:12.535353 kubelet[1922]: I0715 11:35:12.535182 1922 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6cddc048-f561-49ad-8882-3cf9a64effc1-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6cddc048-f561-49ad-8882-3cf9a64effc1" (UID: "6cddc048-f561-49ad-8882-3cf9a64effc1"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 11:35:12.535520 kubelet[1922]: I0715 11:35:12.535191 1922 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6cddc048-f561-49ad-8882-3cf9a64effc1-cni-path" (OuterVolumeSpecName: "cni-path") pod "6cddc048-f561-49ad-8882-3cf9a64effc1" (UID: "6cddc048-f561-49ad-8882-3cf9a64effc1"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 11:35:12.535520 kubelet[1922]: I0715 11:35:12.535200 1922 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6cddc048-f561-49ad-8882-3cf9a64effc1-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6cddc048-f561-49ad-8882-3cf9a64effc1" (UID: "6cddc048-f561-49ad-8882-3cf9a64effc1"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 11:35:12.535520 kubelet[1922]: I0715 11:35:12.535216 1922 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6cddc048-f561-49ad-8882-3cf9a64effc1-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6cddc048-f561-49ad-8882-3cf9a64effc1" (UID: "6cddc048-f561-49ad-8882-3cf9a64effc1"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 11:35:12.535520 kubelet[1922]: I0715 11:35:12.535227 1922 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6cddc048-f561-49ad-8882-3cf9a64effc1-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6cddc048-f561-49ad-8882-3cf9a64effc1" (UID: "6cddc048-f561-49ad-8882-3cf9a64effc1"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 11:35:12.535520 kubelet[1922]: I0715 11:35:12.535248 1922 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6cddc048-f561-49ad-8882-3cf9a64effc1-hubble-tls\") pod \"6cddc048-f561-49ad-8882-3cf9a64effc1\" (UID: \"6cddc048-f561-49ad-8882-3cf9a64effc1\") " Jul 15 11:35:12.535859 kubelet[1922]: I0715 11:35:12.535269 1922 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6cddc048-f561-49ad-8882-3cf9a64effc1-lib-modules\") pod \"6cddc048-f561-49ad-8882-3cf9a64effc1\" (UID: \"6cddc048-f561-49ad-8882-3cf9a64effc1\") " Jul 15 11:35:12.535859 kubelet[1922]: I0715 11:35:12.535287 1922 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6cddc048-f561-49ad-8882-3cf9a64effc1-clustermesh-secrets\") pod \"6cddc048-f561-49ad-8882-3cf9a64effc1\" (UID: \"6cddc048-f561-49ad-8882-3cf9a64effc1\") " Jul 15 
11:35:12.535859 kubelet[1922]: I0715 11:35:12.535324 1922 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6cddc048-f561-49ad-8882-3cf9a64effc1-cilium-config-path\") pod \"6cddc048-f561-49ad-8882-3cf9a64effc1\" (UID: \"6cddc048-f561-49ad-8882-3cf9a64effc1\") " Jul 15 11:35:12.535859 kubelet[1922]: I0715 11:35:12.535340 1922 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6cddc048-f561-49ad-8882-3cf9a64effc1-host-proc-sys-net\") pod \"6cddc048-f561-49ad-8882-3cf9a64effc1\" (UID: \"6cddc048-f561-49ad-8882-3cf9a64effc1\") " Jul 15 11:35:12.535859 kubelet[1922]: I0715 11:35:12.535359 1922 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6cddc048-f561-49ad-8882-3cf9a64effc1-cilium-cgroup\") pod \"6cddc048-f561-49ad-8882-3cf9a64effc1\" (UID: \"6cddc048-f561-49ad-8882-3cf9a64effc1\") " Jul 15 11:35:12.535859 kubelet[1922]: I0715 11:35:12.535390 1922 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6cddc048-f561-49ad-8882-3cf9a64effc1-hostproc\") pod \"6cddc048-f561-49ad-8882-3cf9a64effc1\" (UID: \"6cddc048-f561-49ad-8882-3cf9a64effc1\") " Jul 15 11:35:12.536090 kubelet[1922]: I0715 11:35:12.535413 1922 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsxkd\" (UniqueName: \"kubernetes.io/projected/6cddc048-f561-49ad-8882-3cf9a64effc1-kube-api-access-fsxkd\") pod \"6cddc048-f561-49ad-8882-3cf9a64effc1\" (UID: \"6cddc048-f561-49ad-8882-3cf9a64effc1\") " Jul 15 11:35:12.536090 kubelet[1922]: I0715 11:35:12.535446 1922 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6cddc048-f561-49ad-8882-3cf9a64effc1-cilium-run\") on node 
\"localhost\" DevicePath \"\"" Jul 15 11:35:12.536090 kubelet[1922]: I0715 11:35:12.535457 1922 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6cddc048-f561-49ad-8882-3cf9a64effc1-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 15 11:35:12.536090 kubelet[1922]: I0715 11:35:12.535468 1922 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6cddc048-f561-49ad-8882-3cf9a64effc1-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 15 11:35:12.536090 kubelet[1922]: I0715 11:35:12.535480 1922 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6cddc048-f561-49ad-8882-3cf9a64effc1-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 15 11:35:12.536090 kubelet[1922]: I0715 11:35:12.535491 1922 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6cddc048-f561-49ad-8882-3cf9a64effc1-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 15 11:35:12.536090 kubelet[1922]: I0715 11:35:12.535500 1922 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6cddc048-f561-49ad-8882-3cf9a64effc1-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 15 11:35:12.536284 kubelet[1922]: I0715 11:35:12.536194 1922 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6cddc048-f561-49ad-8882-3cf9a64effc1-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6cddc048-f561-49ad-8882-3cf9a64effc1" (UID: "6cddc048-f561-49ad-8882-3cf9a64effc1"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 11:35:12.536284 kubelet[1922]: I0715 11:35:12.536256 1922 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6cddc048-f561-49ad-8882-3cf9a64effc1-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6cddc048-f561-49ad-8882-3cf9a64effc1" (UID: "6cddc048-f561-49ad-8882-3cf9a64effc1"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 11:35:12.536284 kubelet[1922]: I0715 11:35:12.536277 1922 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6cddc048-f561-49ad-8882-3cf9a64effc1-hostproc" (OuterVolumeSpecName: "hostproc") pod "6cddc048-f561-49ad-8882-3cf9a64effc1" (UID: "6cddc048-f561-49ad-8882-3cf9a64effc1"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 11:35:12.536372 kubelet[1922]: I0715 11:35:12.536291 1922 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6cddc048-f561-49ad-8882-3cf9a64effc1-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6cddc048-f561-49ad-8882-3cf9a64effc1" (UID: "6cddc048-f561-49ad-8882-3cf9a64effc1"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 11:35:12.537332 kubelet[1922]: I0715 11:35:12.537297 1922 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cddc048-f561-49ad-8882-3cf9a64effc1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6cddc048-f561-49ad-8882-3cf9a64effc1" (UID: "6cddc048-f561-49ad-8882-3cf9a64effc1"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 15 11:35:12.538669 kubelet[1922]: I0715 11:35:12.538650 1922 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cddc048-f561-49ad-8882-3cf9a64effc1-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6cddc048-f561-49ad-8882-3cf9a64effc1" (UID: "6cddc048-f561-49ad-8882-3cf9a64effc1"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 15 11:35:12.539255 systemd[1]: var-lib-kubelet-pods-6cddc048\x2df561\x2d49ad\x2d8882\x2d3cf9a64effc1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfsxkd.mount: Deactivated successfully. Jul 15 11:35:12.539646 kubelet[1922]: I0715 11:35:12.539461 1922 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cddc048-f561-49ad-8882-3cf9a64effc1-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6cddc048-f561-49ad-8882-3cf9a64effc1" (UID: "6cddc048-f561-49ad-8882-3cf9a64effc1"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 15 11:35:12.539986 kubelet[1922]: I0715 11:35:12.539937 1922 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cddc048-f561-49ad-8882-3cf9a64effc1-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "6cddc048-f561-49ad-8882-3cf9a64effc1" (UID: "6cddc048-f561-49ad-8882-3cf9a64effc1"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 15 11:35:12.540490 kubelet[1922]: I0715 11:35:12.540457 1922 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cddc048-f561-49ad-8882-3cf9a64effc1-kube-api-access-fsxkd" (OuterVolumeSpecName: "kube-api-access-fsxkd") pod "6cddc048-f561-49ad-8882-3cf9a64effc1" (UID: "6cddc048-f561-49ad-8882-3cf9a64effc1"). InnerVolumeSpecName "kube-api-access-fsxkd". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 15 11:35:12.541055 systemd[1]: var-lib-kubelet-pods-6cddc048\x2df561\x2d49ad\x2d8882\x2d3cf9a64effc1-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Jul 15 11:35:12.541127 systemd[1]: var-lib-kubelet-pods-6cddc048\x2df561\x2d49ad\x2d8882\x2d3cf9a64effc1-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 15 11:35:12.541183 systemd[1]: var-lib-kubelet-pods-6cddc048\x2df561\x2d49ad\x2d8882\x2d3cf9a64effc1-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 15 11:35:12.635905 kubelet[1922]: I0715 11:35:12.635855 1922 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6cddc048-f561-49ad-8882-3cf9a64effc1-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 15 11:35:12.635905 kubelet[1922]: I0715 11:35:12.635903 1922 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6cddc048-f561-49ad-8882-3cf9a64effc1-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 15 11:35:12.635905 kubelet[1922]: I0715 11:35:12.635912 1922 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6cddc048-f561-49ad-8882-3cf9a64effc1-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 15 11:35:12.636140 kubelet[1922]: I0715 11:35:12.635919 1922 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6cddc048-f561-49ad-8882-3cf9a64effc1-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 15 11:35:12.636140 kubelet[1922]: I0715 11:35:12.635926 1922 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6cddc048-f561-49ad-8882-3cf9a64effc1-cilium-cgroup\") on node \"localhost\" DevicePath 
\"\"" Jul 15 11:35:12.636140 kubelet[1922]: I0715 11:35:12.635935 1922 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6cddc048-f561-49ad-8882-3cf9a64effc1-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 15 11:35:12.636140 kubelet[1922]: I0715 11:35:12.635944 1922 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fsxkd\" (UniqueName: \"kubernetes.io/projected/6cddc048-f561-49ad-8882-3cf9a64effc1-kube-api-access-fsxkd\") on node \"localhost\" DevicePath \"\"" Jul 15 11:35:12.636140 kubelet[1922]: I0715 11:35:12.635950 1922 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6cddc048-f561-49ad-8882-3cf9a64effc1-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Jul 15 11:35:12.636140 kubelet[1922]: I0715 11:35:12.635958 1922 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6cddc048-f561-49ad-8882-3cf9a64effc1-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 15 11:35:13.008249 systemd[1]: Removed slice kubepods-burstable-pod6cddc048_f561_49ad_8882_3cf9a64effc1.slice. Jul 15 11:35:13.424836 systemd[1]: Created slice kubepods-burstable-pod9d063662_a9ab_44a2_9d0a_c6b5e9e9921b.slice. 
Jul 15 11:35:13.540946 kubelet[1922]: I0715 11:35:13.540900 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9d063662-a9ab-44a2-9d0a-c6b5e9e9921b-bpf-maps\") pod \"cilium-xvh5j\" (UID: \"9d063662-a9ab-44a2-9d0a-c6b5e9e9921b\") " pod="kube-system/cilium-xvh5j" Jul 15 11:35:13.540946 kubelet[1922]: I0715 11:35:13.540929 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9d063662-a9ab-44a2-9d0a-c6b5e9e9921b-cni-path\") pod \"cilium-xvh5j\" (UID: \"9d063662-a9ab-44a2-9d0a-c6b5e9e9921b\") " pod="kube-system/cilium-xvh5j" Jul 15 11:35:13.540946 kubelet[1922]: I0715 11:35:13.540942 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9d063662-a9ab-44a2-9d0a-c6b5e9e9921b-clustermesh-secrets\") pod \"cilium-xvh5j\" (UID: \"9d063662-a9ab-44a2-9d0a-c6b5e9e9921b\") " pod="kube-system/cilium-xvh5j" Jul 15 11:35:13.540946 kubelet[1922]: I0715 11:35:13.540956 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9d063662-a9ab-44a2-9d0a-c6b5e9e9921b-cilium-ipsec-secrets\") pod \"cilium-xvh5j\" (UID: \"9d063662-a9ab-44a2-9d0a-c6b5e9e9921b\") " pod="kube-system/cilium-xvh5j" Jul 15 11:35:13.541417 kubelet[1922]: I0715 11:35:13.540972 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9d063662-a9ab-44a2-9d0a-c6b5e9e9921b-etc-cni-netd\") pod \"cilium-xvh5j\" (UID: \"9d063662-a9ab-44a2-9d0a-c6b5e9e9921b\") " pod="kube-system/cilium-xvh5j" Jul 15 11:35:13.541417 kubelet[1922]: I0715 11:35:13.540997 1922 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d063662-a9ab-44a2-9d0a-c6b5e9e9921b-lib-modules\") pod \"cilium-xvh5j\" (UID: \"9d063662-a9ab-44a2-9d0a-c6b5e9e9921b\") " pod="kube-system/cilium-xvh5j" Jul 15 11:35:13.541417 kubelet[1922]: I0715 11:35:13.541020 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9d063662-a9ab-44a2-9d0a-c6b5e9e9921b-xtables-lock\") pod \"cilium-xvh5j\" (UID: \"9d063662-a9ab-44a2-9d0a-c6b5e9e9921b\") " pod="kube-system/cilium-xvh5j" Jul 15 11:35:13.541417 kubelet[1922]: I0715 11:35:13.541057 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9d063662-a9ab-44a2-9d0a-c6b5e9e9921b-host-proc-sys-net\") pod \"cilium-xvh5j\" (UID: \"9d063662-a9ab-44a2-9d0a-c6b5e9e9921b\") " pod="kube-system/cilium-xvh5j" Jul 15 11:35:13.541417 kubelet[1922]: I0715 11:35:13.541088 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9d063662-a9ab-44a2-9d0a-c6b5e9e9921b-cilium-cgroup\") pod \"cilium-xvh5j\" (UID: \"9d063662-a9ab-44a2-9d0a-c6b5e9e9921b\") " pod="kube-system/cilium-xvh5j" Jul 15 11:35:13.541417 kubelet[1922]: I0715 11:35:13.541103 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9d063662-a9ab-44a2-9d0a-c6b5e9e9921b-cilium-config-path\") pod \"cilium-xvh5j\" (UID: \"9d063662-a9ab-44a2-9d0a-c6b5e9e9921b\") " pod="kube-system/cilium-xvh5j" Jul 15 11:35:13.541560 kubelet[1922]: I0715 11:35:13.541119 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/9d063662-a9ab-44a2-9d0a-c6b5e9e9921b-host-proc-sys-kernel\") pod \"cilium-xvh5j\" (UID: \"9d063662-a9ab-44a2-9d0a-c6b5e9e9921b\") " pod="kube-system/cilium-xvh5j" Jul 15 11:35:13.541560 kubelet[1922]: I0715 11:35:13.541133 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlgmx\" (UniqueName: \"kubernetes.io/projected/9d063662-a9ab-44a2-9d0a-c6b5e9e9921b-kube-api-access-dlgmx\") pod \"cilium-xvh5j\" (UID: \"9d063662-a9ab-44a2-9d0a-c6b5e9e9921b\") " pod="kube-system/cilium-xvh5j" Jul 15 11:35:13.541560 kubelet[1922]: I0715 11:35:13.541146 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9d063662-a9ab-44a2-9d0a-c6b5e9e9921b-hubble-tls\") pod \"cilium-xvh5j\" (UID: \"9d063662-a9ab-44a2-9d0a-c6b5e9e9921b\") " pod="kube-system/cilium-xvh5j" Jul 15 11:35:13.541560 kubelet[1922]: I0715 11:35:13.541158 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9d063662-a9ab-44a2-9d0a-c6b5e9e9921b-cilium-run\") pod \"cilium-xvh5j\" (UID: \"9d063662-a9ab-44a2-9d0a-c6b5e9e9921b\") " pod="kube-system/cilium-xvh5j" Jul 15 11:35:13.541560 kubelet[1922]: I0715 11:35:13.541177 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9d063662-a9ab-44a2-9d0a-c6b5e9e9921b-hostproc\") pod \"cilium-xvh5j\" (UID: \"9d063662-a9ab-44a2-9d0a-c6b5e9e9921b\") " pod="kube-system/cilium-xvh5j" Jul 15 11:35:13.727691 kubelet[1922]: E0715 11:35:13.727562 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:35:13.728245 env[1204]: time="2025-07-15T11:35:13.728206679Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xvh5j,Uid:9d063662-a9ab-44a2-9d0a-c6b5e9e9921b,Namespace:kube-system,Attempt:0,}" Jul 15 11:35:13.798072 env[1204]: time="2025-07-15T11:35:13.797991064Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:35:13.798072 env[1204]: time="2025-07-15T11:35:13.798043884Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:35:13.798072 env[1204]: time="2025-07-15T11:35:13.798059314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:35:13.798262 env[1204]: time="2025-07-15T11:35:13.798221304Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/39919510faf975e2070533978385fe995034f62c838537e623df412f4ba7a61b pid=3781 runtime=io.containerd.runc.v2 Jul 15 11:35:13.808695 systemd[1]: Started cri-containerd-39919510faf975e2070533978385fe995034f62c838537e623df412f4ba7a61b.scope. 
Jul 15 11:35:13.832788 env[1204]: time="2025-07-15T11:35:13.832743088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xvh5j,Uid:9d063662-a9ab-44a2-9d0a-c6b5e9e9921b,Namespace:kube-system,Attempt:0,} returns sandbox id \"39919510faf975e2070533978385fe995034f62c838537e623df412f4ba7a61b\"" Jul 15 11:35:13.833358 kubelet[1922]: E0715 11:35:13.833335 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:35:13.838264 env[1204]: time="2025-07-15T11:35:13.838208756Z" level=info msg="CreateContainer within sandbox \"39919510faf975e2070533978385fe995034f62c838537e623df412f4ba7a61b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 15 11:35:13.848927 env[1204]: time="2025-07-15T11:35:13.848895524Z" level=info msg="CreateContainer within sandbox \"39919510faf975e2070533978385fe995034f62c838537e623df412f4ba7a61b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"12c4cb0ca887f73f88b765a5c4865ee7cf8059875eab55c13476e7f617212763\"" Jul 15 11:35:13.849266 env[1204]: time="2025-07-15T11:35:13.849248198Z" level=info msg="StartContainer for \"12c4cb0ca887f73f88b765a5c4865ee7cf8059875eab55c13476e7f617212763\"" Jul 15 11:35:13.861956 systemd[1]: Started cri-containerd-12c4cb0ca887f73f88b765a5c4865ee7cf8059875eab55c13476e7f617212763.scope. Jul 15 11:35:13.882083 env[1204]: time="2025-07-15T11:35:13.882028757Z" level=info msg="StartContainer for \"12c4cb0ca887f73f88b765a5c4865ee7cf8059875eab55c13476e7f617212763\" returns successfully" Jul 15 11:35:13.888701 systemd[1]: cri-containerd-12c4cb0ca887f73f88b765a5c4865ee7cf8059875eab55c13476e7f617212763.scope: Deactivated successfully. 
Jul 15 11:35:13.913507 env[1204]: time="2025-07-15T11:35:13.913446515Z" level=info msg="shim disconnected" id=12c4cb0ca887f73f88b765a5c4865ee7cf8059875eab55c13476e7f617212763 Jul 15 11:35:13.913507 env[1204]: time="2025-07-15T11:35:13.913495679Z" level=warning msg="cleaning up after shim disconnected" id=12c4cb0ca887f73f88b765a5c4865ee7cf8059875eab55c13476e7f617212763 namespace=k8s.io Jul 15 11:35:13.913507 env[1204]: time="2025-07-15T11:35:13.913505828Z" level=info msg="cleaning up dead shim" Jul 15 11:35:13.919647 env[1204]: time="2025-07-15T11:35:13.919618522Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:35:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3864 runtime=io.containerd.runc.v2\n" Jul 15 11:35:14.389117 kubelet[1922]: E0715 11:35:14.388963 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:35:14.393798 env[1204]: time="2025-07-15T11:35:14.393755634Z" level=info msg="CreateContainer within sandbox \"39919510faf975e2070533978385fe995034f62c838537e623df412f4ba7a61b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 15 11:35:14.406460 env[1204]: time="2025-07-15T11:35:14.406404437Z" level=info msg="CreateContainer within sandbox \"39919510faf975e2070533978385fe995034f62c838537e623df412f4ba7a61b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"da883693bcf034f62666f139c33aeb767caebbda78a0573f68a80293cb331f53\"" Jul 15 11:35:14.407035 env[1204]: time="2025-07-15T11:35:14.407002560Z" level=info msg="StartContainer for \"da883693bcf034f62666f139c33aeb767caebbda78a0573f68a80293cb331f53\"" Jul 15 11:35:14.420469 systemd[1]: Started cri-containerd-da883693bcf034f62666f139c33aeb767caebbda78a0573f68a80293cb331f53.scope. 
Jul 15 11:35:14.443541 env[1204]: time="2025-07-15T11:35:14.443496719Z" level=info msg="StartContainer for \"da883693bcf034f62666f139c33aeb767caebbda78a0573f68a80293cb331f53\" returns successfully" Jul 15 11:35:14.449290 systemd[1]: cri-containerd-da883693bcf034f62666f139c33aeb767caebbda78a0573f68a80293cb331f53.scope: Deactivated successfully. Jul 15 11:35:14.469221 env[1204]: time="2025-07-15T11:35:14.469174293Z" level=info msg="shim disconnected" id=da883693bcf034f62666f139c33aeb767caebbda78a0573f68a80293cb331f53 Jul 15 11:35:14.469387 env[1204]: time="2025-07-15T11:35:14.469223486Z" level=warning msg="cleaning up after shim disconnected" id=da883693bcf034f62666f139c33aeb767caebbda78a0573f68a80293cb331f53 namespace=k8s.io Jul 15 11:35:14.469387 env[1204]: time="2025-07-15T11:35:14.469237113Z" level=info msg="cleaning up dead shim" Jul 15 11:35:14.475978 env[1204]: time="2025-07-15T11:35:14.475930429Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:35:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3924 runtime=io.containerd.runc.v2\n" Jul 15 11:35:15.005680 kubelet[1922]: I0715 11:35:15.005629 1922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cddc048-f561-49ad-8882-3cf9a64effc1" path="/var/lib/kubelet/pods/6cddc048-f561-49ad-8882-3cf9a64effc1/volumes" Jul 15 11:35:15.041076 kubelet[1922]: E0715 11:35:15.041027 1922 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 15 11:35:15.392512 kubelet[1922]: E0715 11:35:15.392270 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:35:15.397122 env[1204]: time="2025-07-15T11:35:15.397069064Z" level=info msg="CreateContainer within sandbox 
\"39919510faf975e2070533978385fe995034f62c838537e623df412f4ba7a61b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 15 11:35:15.427822 env[1204]: time="2025-07-15T11:35:15.427749771Z" level=info msg="CreateContainer within sandbox \"39919510faf975e2070533978385fe995034f62c838537e623df412f4ba7a61b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8b7f301449b2f223a4008e3e577a1af27090e0fb6a2a47e6900a017636897a84\""
Jul 15 11:35:15.428582 env[1204]: time="2025-07-15T11:35:15.428522655Z" level=info msg="StartContainer for \"8b7f301449b2f223a4008e3e577a1af27090e0fb6a2a47e6900a017636897a84\""
Jul 15 11:35:15.455105 systemd[1]: Started cri-containerd-8b7f301449b2f223a4008e3e577a1af27090e0fb6a2a47e6900a017636897a84.scope.
Jul 15 11:35:15.483228 systemd[1]: cri-containerd-8b7f301449b2f223a4008e3e577a1af27090e0fb6a2a47e6900a017636897a84.scope: Deactivated successfully.
Jul 15 11:35:15.485707 env[1204]: time="2025-07-15T11:35:15.485662987Z" level=info msg="StartContainer for \"8b7f301449b2f223a4008e3e577a1af27090e0fb6a2a47e6900a017636897a84\" returns successfully"
Jul 15 11:35:15.508680 env[1204]: time="2025-07-15T11:35:15.508627121Z" level=info msg="shim disconnected" id=8b7f301449b2f223a4008e3e577a1af27090e0fb6a2a47e6900a017636897a84
Jul 15 11:35:15.508680 env[1204]: time="2025-07-15T11:35:15.508671697Z" level=warning msg="cleaning up after shim disconnected" id=8b7f301449b2f223a4008e3e577a1af27090e0fb6a2a47e6900a017636897a84 namespace=k8s.io
Jul 15 11:35:15.508680 env[1204]: time="2025-07-15T11:35:15.508680403Z" level=info msg="cleaning up dead shim"
Jul 15 11:35:15.514816 env[1204]: time="2025-07-15T11:35:15.514761977Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:35:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3980 runtime=io.containerd.runc.v2\n"
Jul 15 11:35:15.650998 systemd[1]: run-containerd-runc-k8s.io-8b7f301449b2f223a4008e3e577a1af27090e0fb6a2a47e6900a017636897a84-runc.0emtCp.mount: Deactivated successfully.
Jul 15 11:35:15.651134 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b7f301449b2f223a4008e3e577a1af27090e0fb6a2a47e6900a017636897a84-rootfs.mount: Deactivated successfully.
Jul 15 11:35:16.395032 kubelet[1922]: E0715 11:35:16.395004 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:35:16.593000 env[1204]: time="2025-07-15T11:35:16.592944334Z" level=info msg="CreateContainer within sandbox \"39919510faf975e2070533978385fe995034f62c838537e623df412f4ba7a61b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 15 11:35:16.779552 env[1204]: time="2025-07-15T11:35:16.779409329Z" level=info msg="CreateContainer within sandbox \"39919510faf975e2070533978385fe995034f62c838537e623df412f4ba7a61b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a061d173b11d4ba78052f7862d60698bb8eac472e6ad760a95c08cbd74c0fa2f\""
Jul 15 11:35:16.780124 env[1204]: time="2025-07-15T11:35:16.780029562Z" level=info msg="StartContainer for \"a061d173b11d4ba78052f7862d60698bb8eac472e6ad760a95c08cbd74c0fa2f\""
Jul 15 11:35:16.799655 systemd[1]: Started cri-containerd-a061d173b11d4ba78052f7862d60698bb8eac472e6ad760a95c08cbd74c0fa2f.scope.
Jul 15 11:35:16.824977 systemd[1]: cri-containerd-a061d173b11d4ba78052f7862d60698bb8eac472e6ad760a95c08cbd74c0fa2f.scope: Deactivated successfully.
Jul 15 11:35:16.825408 env[1204]: time="2025-07-15T11:35:16.825375032Z" level=info msg="StartContainer for \"a061d173b11d4ba78052f7862d60698bb8eac472e6ad760a95c08cbd74c0fa2f\" returns successfully"
Jul 15 11:35:16.851766 env[1204]: time="2025-07-15T11:35:16.851715197Z" level=info msg="shim disconnected" id=a061d173b11d4ba78052f7862d60698bb8eac472e6ad760a95c08cbd74c0fa2f
Jul 15 11:35:16.851766 env[1204]: time="2025-07-15T11:35:16.851761836Z" level=warning msg="cleaning up after shim disconnected" id=a061d173b11d4ba78052f7862d60698bb8eac472e6ad760a95c08cbd74c0fa2f namespace=k8s.io
Jul 15 11:35:16.851766 env[1204]: time="2025-07-15T11:35:16.851772236Z" level=info msg="cleaning up dead shim"
Jul 15 11:35:16.858930 env[1204]: time="2025-07-15T11:35:16.858894980Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:35:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4034 runtime=io.containerd.runc.v2\n"
Jul 15 11:35:16.936552 kubelet[1922]: I0715 11:35:16.936485 1922 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-15T11:35:16Z","lastTransitionTime":"2025-07-15T11:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jul 15 11:35:17.398508 kubelet[1922]: E0715 11:35:17.398465 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:35:17.418400 env[1204]: time="2025-07-15T11:35:17.418310368Z" level=info msg="CreateContainer within sandbox \"39919510faf975e2070533978385fe995034f62c838537e623df412f4ba7a61b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 15 11:35:17.434660 env[1204]: time="2025-07-15T11:35:17.434600373Z" level=info msg="CreateContainer within sandbox \"39919510faf975e2070533978385fe995034f62c838537e623df412f4ba7a61b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"fe653d27b937c473faf7f774449f7f080c8ea337f588118749a06e2429aaa9c7\""
Jul 15 11:35:17.435178 env[1204]: time="2025-07-15T11:35:17.435089335Z" level=info msg="StartContainer for \"fe653d27b937c473faf7f774449f7f080c8ea337f588118749a06e2429aaa9c7\""
Jul 15 11:35:17.448244 systemd[1]: Started cri-containerd-fe653d27b937c473faf7f774449f7f080c8ea337f588118749a06e2429aaa9c7.scope.
Jul 15 11:35:17.475036 env[1204]: time="2025-07-15T11:35:17.474990722Z" level=info msg="StartContainer for \"fe653d27b937c473faf7f774449f7f080c8ea337f588118749a06e2429aaa9c7\" returns successfully"
Jul 15 11:35:17.736904 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jul 15 11:35:17.750239 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a061d173b11d4ba78052f7862d60698bb8eac472e6ad760a95c08cbd74c0fa2f-rootfs.mount: Deactivated successfully.
Jul 15 11:35:18.004275 kubelet[1922]: E0715 11:35:18.004157 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:35:18.403688 kubelet[1922]: E0715 11:35:18.403567 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:35:19.728500 kubelet[1922]: E0715 11:35:19.728460 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:35:20.274036 systemd[1]: run-containerd-runc-k8s.io-fe653d27b937c473faf7f774449f7f080c8ea337f588118749a06e2429aaa9c7-runc.4geLtn.mount: Deactivated successfully.
Jul 15 11:35:20.290292 systemd-networkd[1033]: lxc_health: Link UP
Jul 15 11:35:20.302931 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Jul 15 11:35:20.303481 systemd-networkd[1033]: lxc_health: Gained carrier
Jul 15 11:35:21.728958 kubelet[1922]: E0715 11:35:21.728928 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:35:21.922079 systemd-networkd[1033]: lxc_health: Gained IPv6LL
Jul 15 11:35:22.014575 kubelet[1922]: I0715 11:35:22.014519 1922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xvh5j" podStartSLOduration=9.014498248 podStartE2EDuration="9.014498248s" podCreationTimestamp="2025-07-15 11:35:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 11:35:18.4200371 +0000 UTC m=+93.494318917" watchObservedRunningTime="2025-07-15 11:35:22.014498248 +0000 UTC m=+97.088780045"
Jul 15 11:35:22.410856 kubelet[1922]: E0715 11:35:22.410521 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:35:23.412451 kubelet[1922]: E0715 11:35:23.412418 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:35:25.003515 kubelet[1922]: E0715 11:35:25.003472 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:35:26.680051 sshd[3750]: pam_unix(sshd:session): session closed for user core
Jul 15 11:35:26.682017 systemd[1]: sshd@26-10.0.0.91:22-10.0.0.1:55056.service: Deactivated successfully.
Jul 15 11:35:26.682602 systemd[1]: session-27.scope: Deactivated successfully.
Jul 15 11:35:26.683070 systemd-logind[1189]: Session 27 logged out. Waiting for processes to exit.
Jul 15 11:35:26.683725 systemd-logind[1189]: Removed session 27.