Aug 13 00:50:56.045144 kernel: Linux version 5.15.189-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Tue Aug 12 23:01:50 -00 2025 Aug 13 00:50:56.045185 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8f8aacd9fbcdd713563d390e899e90bedf5577e4b1b261b4e57687d87edd6b57 Aug 13 00:50:56.045205 kernel: BIOS-provided physical RAM map: Aug 13 00:50:56.045212 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Aug 13 00:50:56.045219 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Aug 13 00:50:56.045226 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Aug 13 00:50:56.045234 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable Aug 13 00:50:56.045244 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved Aug 13 00:50:56.045257 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Aug 13 00:50:56.045266 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Aug 13 00:50:56.045290 kernel: NX (Execute Disable) protection: active Aug 13 00:50:56.045301 kernel: SMBIOS 2.8 present. Aug 13 00:50:56.045311 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Aug 13 00:50:56.048373 kernel: Hypervisor detected: KVM Aug 13 00:50:56.048419 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Aug 13 00:50:56.048448 kernel: kvm-clock: cpu 0, msr 3f19e001, primary cpu clock Aug 13 00:50:56.048461 kernel: kvm-clock: using sched offset of 3939609603 cycles Aug 13 00:50:56.048475 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Aug 13 00:50:56.048498 kernel: tsc: Detected 2494.140 MHz processor Aug 13 00:50:56.048511 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Aug 13 00:50:56.048525 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Aug 13 00:50:56.048539 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Aug 13 00:50:56.048552 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Aug 13 00:50:56.048571 kernel: ACPI: Early table checksum verification disabled Aug 13 00:50:56.048585 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS ) Aug 13 00:50:56.048598 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 00:50:56.048611 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 00:50:56.048625 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 00:50:56.048638 kernel: ACPI: FACS 0x000000007FFE0000 000040 Aug 13 00:50:56.048651 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 00:50:56.048664 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 00:50:56.048678 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 00:50:56.048697 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 00:50:56.048711 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd] Aug 13 00:50:56.048724 kernel: ACPI: Reserving DSDT table 
memory at [mem 0x7ffe0040-0x7ffe1769] Aug 13 00:50:56.048737 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Aug 13 00:50:56.048749 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] Aug 13 00:50:56.048762 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] Aug 13 00:50:56.048776 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] Aug 13 00:50:56.048789 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] Aug 13 00:50:56.048816 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Aug 13 00:50:56.048830 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Aug 13 00:50:56.048843 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Aug 13 00:50:56.048858 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Aug 13 00:50:56.048873 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff] Aug 13 00:50:56.048888 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff] Aug 13 00:50:56.048908 kernel: Zone ranges: Aug 13 00:50:56.048923 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Aug 13 00:50:56.048936 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff] Aug 13 00:50:56.048949 kernel: Normal empty Aug 13 00:50:56.048964 kernel: Movable zone start for each node Aug 13 00:50:56.048979 kernel: Early memory node ranges Aug 13 00:50:56.048993 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Aug 13 00:50:56.049008 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff] Aug 13 00:50:56.049022 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff] Aug 13 00:50:56.049042 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Aug 13 00:50:56.049063 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Aug 13 00:50:56.049078 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges Aug 13 00:50:56.049093 kernel: ACPI: PM-Timer IO Port: 0x608 Aug 13 00:50:56.049108 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Aug 13 00:50:56.049123 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Aug 13 00:50:56.049137 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Aug 13 00:50:56.049152 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Aug 13 00:50:56.049168 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Aug 13 00:50:56.049190 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Aug 13 00:50:56.049210 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Aug 13 00:50:56.049224 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Aug 13 00:50:56.049239 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Aug 13 00:50:56.049253 kernel: TSC deadline timer available Aug 13 00:50:56.049268 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Aug 13 00:50:56.049316 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Aug 13 00:50:56.049330 kernel: Booting paravirtualized kernel on KVM Aug 13 00:50:56.049345 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Aug 13 00:50:56.049366 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Aug 13 00:50:56.049382 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576 Aug 13 00:50:56.049398 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152 Aug 13 00:50:56.049412 kernel: pcpu-alloc: [0] 0 1 Aug 13 00:50:56.049427 kernel: 
kvm-guest: stealtime: cpu 0, msr 7dc1c0c0 Aug 13 00:50:56.049442 kernel: kvm-guest: PV spinlocks disabled, no host support Aug 13 00:50:56.049457 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803 Aug 13 00:50:56.049473 kernel: Policy zone: DMA32 Aug 13 00:50:56.049490 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8f8aacd9fbcdd713563d390e899e90bedf5577e4b1b261b4e57687d87edd6b57 Aug 13 00:50:56.049512 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Aug 13 00:50:56.049526 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Aug 13 00:50:56.049541 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Aug 13 00:50:56.049555 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Aug 13 00:50:56.049570 kernel: Memory: 1973276K/2096612K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47488K init, 4092K bss, 123076K reserved, 0K cma-reserved) Aug 13 00:50:56.049584 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Aug 13 00:50:56.049598 kernel: Kernel/User page tables isolation: enabled Aug 13 00:50:56.049611 kernel: ftrace: allocating 34608 entries in 136 pages Aug 13 00:50:56.049628 kernel: ftrace: allocated 136 pages with 2 groups Aug 13 00:50:56.049641 kernel: rcu: Hierarchical RCU implementation. Aug 13 00:50:56.049655 kernel: rcu: RCU event tracing is enabled. Aug 13 00:50:56.049668 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Aug 13 00:50:56.049680 kernel: Rude variant of Tasks RCU enabled. Aug 13 00:50:56.049694 kernel: Tracing variant of Tasks RCU enabled. Aug 13 00:50:56.049713 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Aug 13 00:50:56.049726 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Aug 13 00:50:56.049738 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Aug 13 00:50:56.049757 kernel: random: crng init done Aug 13 00:50:56.049769 kernel: Console: colour VGA+ 80x25 Aug 13 00:50:56.049782 kernel: printk: console [tty0] enabled Aug 13 00:50:56.049793 kernel: printk: console [ttyS0] enabled Aug 13 00:50:56.049805 kernel: ACPI: Core revision 20210730 Aug 13 00:50:56.049817 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Aug 13 00:50:56.049830 kernel: APIC: Switch to symmetric I/O mode setup Aug 13 00:50:56.049842 kernel: x2apic enabled Aug 13 00:50:56.049855 kernel: Switched APIC routing to physical x2apic. Aug 13 00:50:56.049867 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Aug 13 00:50:56.049886 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns Aug 13 00:50:56.049901 kernel: Calibrating delay loop (skipped) preset value.. 
4988.28 BogoMIPS (lpj=2494140) Aug 13 00:50:56.049938 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Aug 13 00:50:56.049951 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Aug 13 00:50:56.049964 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Aug 13 00:50:56.049976 kernel: Spectre V2 : Mitigation: Retpolines Aug 13 00:50:56.049988 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Aug 13 00:50:56.049999 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Aug 13 00:50:56.050018 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Aug 13 00:50:56.050056 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Aug 13 00:50:56.050069 kernel: MDS: Mitigation: Clear CPU buffers Aug 13 00:50:56.050087 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Aug 13 00:50:56.050101 kernel: ITS: Mitigation: Aligned branch/return thunks Aug 13 00:50:56.050117 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Aug 13 00:50:56.050133 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Aug 13 00:50:56.050146 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Aug 13 00:50:56.050159 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Aug 13 00:50:56.050174 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Aug 13 00:50:56.050193 kernel: Freeing SMP alternatives memory: 32K Aug 13 00:50:56.050205 kernel: pid_max: default: 32768 minimum: 301 Aug 13 00:50:56.050219 kernel: LSM: Security Framework initializing Aug 13 00:50:56.050232 kernel: SELinux: Initializing. Aug 13 00:50:56.050244 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Aug 13 00:50:56.050258 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Aug 13 00:50:56.050293 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Aug 13 00:50:56.050314 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. Aug 13 00:50:56.050327 kernel: signal: max sigframe size: 1776 Aug 13 00:50:56.050340 kernel: rcu: Hierarchical SRCU implementation. Aug 13 00:50:56.050354 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Aug 13 00:50:56.050368 kernel: smp: Bringing up secondary CPUs ... Aug 13 00:50:56.050381 kernel: x86: Booting SMP configuration: Aug 13 00:50:56.050395 kernel: .... 
node #0, CPUs: #1 Aug 13 00:50:56.050404 kernel: kvm-clock: cpu 1, msr 3f19e041, secondary cpu clock Aug 13 00:50:56.050414 kernel: kvm-guest: stealtime: cpu 1, msr 7dd1c0c0 Aug 13 00:50:56.050429 kernel: smp: Brought up 1 node, 2 CPUs Aug 13 00:50:56.050439 kernel: smpboot: Max logical packages: 1 Aug 13 00:50:56.050449 kernel: smpboot: Total of 2 processors activated (9976.56 BogoMIPS) Aug 13 00:50:56.050458 kernel: devtmpfs: initialized Aug 13 00:50:56.050467 kernel: x86/mm: Memory block size: 128MB Aug 13 00:50:56.050477 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 13 00:50:56.050487 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Aug 13 00:50:56.050496 kernel: pinctrl core: initialized pinctrl subsystem Aug 13 00:50:56.050505 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 13 00:50:56.050518 kernel: audit: initializing netlink subsys (disabled) Aug 13 00:50:56.050528 kernel: audit: type=2000 audit(1755046254.420:1): state=initialized audit_enabled=0 res=1 Aug 13 00:50:56.050537 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 13 00:50:56.050547 kernel: thermal_sys: Registered thermal governor 'user_space' Aug 13 00:50:56.050557 kernel: cpuidle: using governor menu Aug 13 00:50:56.050566 kernel: ACPI: bus type PCI registered Aug 13 00:50:56.050581 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Aug 13 00:50:56.050590 kernel: dca service started, version 1.12.1 Aug 13 00:50:56.050599 kernel: PCI: Using configuration type 1 for base access Aug 13 00:50:56.050614 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Aug 13 00:50:56.050623 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Aug 13 00:50:56.050632 kernel: ACPI: Added _OSI(Module Device) Aug 13 00:50:56.050641 kernel: ACPI: Added _OSI(Processor Device) Aug 13 00:50:56.050651 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 13 00:50:56.050660 kernel: ACPI: Added _OSI(Linux-Dell-Video) Aug 13 00:50:56.050670 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Aug 13 00:50:56.050680 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Aug 13 00:50:56.050689 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Aug 13 00:50:56.050703 kernel: ACPI: Interpreter enabled Aug 13 00:50:56.050712 kernel: ACPI: PM: (supports S0 S5) Aug 13 00:50:56.050722 kernel: ACPI: Using IOAPIC for interrupt routing Aug 13 00:50:56.050732 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Aug 13 00:50:56.050742 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Aug 13 00:50:56.050751 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Aug 13 00:50:56.051084 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Aug 13 00:50:56.051195 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. 
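
The smpboot lines above report two logical CPUs brought up under KVM ("Allowing 2 CPUs", "Total of 2 processors activated"). A minimal sketch for confirming the CPU count and the hypervisor CPUID flag from a running system, using only the standard /proc/cpuinfo interface:

    # Count logical CPUs and check the "hypervisor" CPUID flag via /proc/cpuinfo.
    with open("/proc/cpuinfo") as f:
        cpuinfo = f.read()

    cpus = sum(1 for line in cpuinfo.splitlines() if line.startswith("processor"))
    virtualized = any(line.startswith("flags") and " hypervisor" in line
                      for line in cpuinfo.splitlines())

    print(f"logical CPUs: {cpus}")            # expected: 2 on this droplet
    print(f"hypervisor flag: {virtualized}")  # expected: True under KVM
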
Aug 13 00:50:56.051214 kernel: acpiphp: Slot [3] registered Aug 13 00:50:56.051223 kernel: acpiphp: Slot [4] registered Aug 13 00:50:56.051233 kernel: acpiphp: Slot [5] registered Aug 13 00:50:56.051242 kernel: acpiphp: Slot [6] registered Aug 13 00:50:56.051252 kernel: acpiphp: Slot [7] registered Aug 13 00:50:56.051261 kernel: acpiphp: Slot [8] registered Aug 13 00:50:56.051281 kernel: acpiphp: Slot [9] registered Aug 13 00:50:56.051291 kernel: acpiphp: Slot [10] registered Aug 13 00:50:56.051300 kernel: acpiphp: Slot [11] registered Aug 13 00:50:56.051314 kernel: acpiphp: Slot [12] registered Aug 13 00:50:56.051323 kernel: acpiphp: Slot [13] registered Aug 13 00:50:56.051353 kernel: acpiphp: Slot [14] registered Aug 13 00:50:56.051363 kernel: acpiphp: Slot [15] registered Aug 13 00:50:56.051372 kernel: acpiphp: Slot [16] registered Aug 13 00:50:56.051382 kernel: acpiphp: Slot [17] registered Aug 13 00:50:56.051391 kernel: acpiphp: Slot [18] registered Aug 13 00:50:56.051401 kernel: acpiphp: Slot [19] registered Aug 13 00:50:56.051410 kernel: acpiphp: Slot [20] registered Aug 13 00:50:56.051423 kernel: acpiphp: Slot [21] registered Aug 13 00:50:56.051433 kernel: acpiphp: Slot [22] registered Aug 13 00:50:56.051442 kernel: acpiphp: Slot [23] registered Aug 13 00:50:56.051451 kernel: acpiphp: Slot [24] registered Aug 13 00:50:56.051473 kernel: acpiphp: Slot [25] registered Aug 13 00:50:56.051482 kernel: acpiphp: Slot [26] registered Aug 13 00:50:56.051508 kernel: acpiphp: Slot [27] registered Aug 13 00:50:56.051517 kernel: acpiphp: Slot [28] registered Aug 13 00:50:56.051526 kernel: acpiphp: Slot [29] registered Aug 13 00:50:56.051536 kernel: acpiphp: Slot [30] registered Aug 13 00:50:56.051548 kernel: acpiphp: Slot [31] registered Aug 13 00:50:56.051557 kernel: PCI host bridge to bus 0000:00 Aug 13 00:50:56.051712 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Aug 13 00:50:56.051827 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Aug 13 00:50:56.051956 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Aug 13 00:50:56.052046 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Aug 13 00:50:56.052139 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Aug 13 00:50:56.052232 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Aug 13 00:50:56.052401 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Aug 13 00:50:56.052515 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Aug 13 00:50:56.057395 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Aug 13 00:50:56.057602 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef] Aug 13 00:50:56.057728 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Aug 13 00:50:56.057838 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Aug 13 00:50:56.057965 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Aug 13 00:50:56.058117 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Aug 13 00:50:56.058371 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 Aug 13 00:50:56.058539 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f] Aug 13 00:50:56.059160 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Aug 13 00:50:56.067494 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Aug 13 00:50:56.067881 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Aug 13 00:50:56.068109 
kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Aug 13 00:50:56.068216 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Aug 13 00:50:56.068346 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Aug 13 00:50:56.068441 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff] Aug 13 00:50:56.068533 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Aug 13 00:50:56.068633 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Aug 13 00:50:56.068756 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Aug 13 00:50:56.068853 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf] Aug 13 00:50:56.068948 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff] Aug 13 00:50:56.069094 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Aug 13 00:50:56.069254 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Aug 13 00:50:56.069437 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df] Aug 13 00:50:56.069603 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff] Aug 13 00:50:56.069754 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Aug 13 00:50:56.070041 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 Aug 13 00:50:56.070231 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f] Aug 13 00:50:56.070402 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff] Aug 13 00:50:56.070506 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Aug 13 00:50:56.070648 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 Aug 13 00:50:56.070767 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f] Aug 13 00:50:56.070864 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff] Aug 13 00:50:56.071005 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Aug 13 00:50:56.071174 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 Aug 13 00:50:56.071337 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff] Aug 13 00:50:56.071503 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff] Aug 13 00:50:56.071613 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref] Aug 13 00:50:56.071739 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 Aug 13 00:50:56.071837 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f] Aug 13 00:50:56.071941 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref] Aug 13 00:50:56.071953 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Aug 13 00:50:56.071964 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Aug 13 00:50:56.071973 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Aug 13 00:50:56.071982 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Aug 13 00:50:56.071996 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Aug 13 00:50:56.072006 kernel: iommu: Default domain type: Translated Aug 13 00:50:56.072016 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Aug 13 00:50:56.072118 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Aug 13 00:50:56.072216 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Aug 13 00:50:56.079065 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Aug 13 00:50:56.079121 kernel: vgaarb: loaded Aug 13 00:50:56.079133 kernel: pps_core: LinuxPPS API ver. 
1 registered Aug 13 00:50:56.079143 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Aug 13 00:50:56.079163 kernel: PTP clock support registered Aug 13 00:50:56.079172 kernel: PCI: Using ACPI for IRQ routing Aug 13 00:50:56.079181 kernel: PCI: pci_cache_line_size set to 64 bytes Aug 13 00:50:56.079191 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Aug 13 00:50:56.079216 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff] Aug 13 00:50:56.079231 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Aug 13 00:50:56.079240 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Aug 13 00:50:56.079249 kernel: clocksource: Switched to clocksource kvm-clock Aug 13 00:50:56.079259 kernel: VFS: Disk quotas dquot_6.6.0 Aug 13 00:50:56.080356 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 13 00:50:56.080383 kernel: pnp: PnP ACPI init Aug 13 00:50:56.080393 kernel: pnp: PnP ACPI: found 4 devices Aug 13 00:50:56.080403 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Aug 13 00:50:56.080413 kernel: NET: Registered PF_INET protocol family Aug 13 00:50:56.080422 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Aug 13 00:50:56.080432 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Aug 13 00:50:56.080442 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 13 00:50:56.080462 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Aug 13 00:50:56.080472 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) Aug 13 00:50:56.080482 kernel: TCP: Hash tables configured (established 16384 bind 16384) Aug 13 00:50:56.080492 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Aug 13 00:50:56.080501 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Aug 13 00:50:56.080510 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 13 00:50:56.080520 kernel: NET: Registered PF_XDP protocol family Aug 13 00:50:56.080722 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Aug 13 00:50:56.080839 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Aug 13 00:50:56.080943 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Aug 13 00:50:56.081030 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Aug 13 00:50:56.081115 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Aug 13 00:50:56.081226 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Aug 13 00:50:56.081362 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Aug 13 00:50:56.081461 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds Aug 13 00:50:56.081474 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Aug 13 00:50:56.081600 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x740 took 44368 usecs Aug 13 00:50:56.081629 kernel: PCI: CLS 0 bytes, default 64 Aug 13 00:50:56.081640 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Aug 13 00:50:56.081651 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns Aug 13 00:50:56.081660 kernel: Initialise system trusted keyrings Aug 13 00:50:56.081670 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Aug 13 00:50:56.081684 kernel: Key type asymmetric registered Aug 13 00:50:56.081698 kernel: Asymmetric key parser 
'x509' registered Aug 13 00:50:56.081708 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Aug 13 00:50:56.081718 kernel: io scheduler mq-deadline registered Aug 13 00:50:56.081731 kernel: io scheduler kyber registered Aug 13 00:50:56.081741 kernel: io scheduler bfq registered Aug 13 00:50:56.081751 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Aug 13 00:50:56.081768 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Aug 13 00:50:56.081780 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Aug 13 00:50:56.081789 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Aug 13 00:50:56.081799 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 13 00:50:56.081808 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Aug 13 00:50:56.081822 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Aug 13 00:50:56.081840 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Aug 13 00:50:56.081854 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Aug 13 00:50:56.081868 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Aug 13 00:50:56.082113 kernel: rtc_cmos 00:03: RTC can wake from S4 Aug 13 00:50:56.082245 kernel: rtc_cmos 00:03: registered as rtc0 Aug 13 00:50:56.084580 kernel: rtc_cmos 00:03: setting system clock to 2025-08-13T00:50:55 UTC (1755046255) Aug 13 00:50:56.084774 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Aug 13 00:50:56.084809 kernel: intel_pstate: CPU model not supported Aug 13 00:50:56.084823 kernel: NET: Registered PF_INET6 protocol family Aug 13 00:50:56.084838 kernel: Segment Routing with IPv6 Aug 13 00:50:56.084852 kernel: In-situ OAM (IOAM) with IPv6 Aug 13 00:50:56.084868 kernel: NET: Registered PF_PACKET protocol family Aug 13 00:50:56.084881 kernel: Key type dns_resolver registered Aug 13 00:50:56.084895 kernel: IPI shorthand broadcast: enabled Aug 13 00:50:56.084911 kernel: sched_clock: Marking stable (737003111, 111348952)->(975293536, -126941473) Aug 13 00:50:56.084925 kernel: registered taskstats version 1 Aug 13 00:50:56.084940 kernel: Loading compiled-in X.509 certificates Aug 13 00:50:56.084958 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.189-flatcar: 1d5a64b5798e654719a8bd91d683e7e9894bd433' Aug 13 00:50:56.084973 kernel: Key type .fscrypt registered Aug 13 00:50:56.084985 kernel: Key type fscrypt-provisioning registered Aug 13 00:50:56.085000 kernel: ima: No TPM chip found, activating TPM-bypass! 
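
The rtc_cmos entry above sets the system clock to 2025-08-13T00:50:55 UTC and prints the matching epoch value 1755046255. The conversion is easy to verify (a minimal sketch, standard library only):

    from datetime import datetime, timezone

    # Epoch seconds the kernel logged alongside the RTC time it applied.
    epoch = 1755046255
    print(datetime.fromtimestamp(epoch, tz=timezone.utc).isoformat())
    # -> 2025-08-13T00:50:55+00:00, matching the rtc_cmos line above
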
Aug 13 00:50:56.085014 kernel: ima: Allocated hash algorithm: sha1 Aug 13 00:50:56.085028 kernel: ima: No architecture policies found Aug 13 00:50:56.085043 kernel: clk: Disabling unused clocks Aug 13 00:50:56.085056 kernel: Freeing unused kernel image (initmem) memory: 47488K Aug 13 00:50:56.085078 kernel: Write protecting the kernel read-only data: 28672k Aug 13 00:50:56.085090 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Aug 13 00:50:56.085104 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K Aug 13 00:50:56.085117 kernel: Run /init as init process Aug 13 00:50:56.085130 kernel: with arguments: Aug 13 00:50:56.085145 kernel: /init Aug 13 00:50:56.085195 kernel: with environment: Aug 13 00:50:56.085213 kernel: HOME=/ Aug 13 00:50:56.085227 kernel: TERM=linux Aug 13 00:50:56.085242 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 13 00:50:56.086844 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Aug 13 00:50:56.086936 systemd[1]: Detected virtualization kvm. Aug 13 00:50:56.086955 systemd[1]: Detected architecture x86-64. Aug 13 00:50:56.086973 systemd[1]: Running in initrd. Aug 13 00:50:56.086990 systemd[1]: No hostname configured, using default hostname. Aug 13 00:50:56.087007 systemd[1]: Hostname set to . Aug 13 00:50:56.087038 systemd[1]: Initializing machine ID from VM UUID. Aug 13 00:50:56.087056 systemd[1]: Queued start job for default target initrd.target. Aug 13 00:50:56.087093 systemd[1]: Started systemd-ask-password-console.path. Aug 13 00:50:56.087110 systemd[1]: Reached target cryptsetup.target. Aug 13 00:50:56.087129 systemd[1]: Reached target paths.target. Aug 13 00:50:56.087146 systemd[1]: Reached target slices.target. Aug 13 00:50:56.087163 systemd[1]: Reached target swap.target. Aug 13 00:50:56.087180 systemd[1]: Reached target timers.target. Aug 13 00:50:56.087202 systemd[1]: Listening on iscsid.socket. Aug 13 00:50:56.087219 systemd[1]: Listening on iscsiuio.socket. Aug 13 00:50:56.087237 systemd[1]: Listening on systemd-journald-audit.socket. Aug 13 00:50:56.087255 systemd[1]: Listening on systemd-journald-dev-log.socket. Aug 13 00:50:56.087285 systemd[1]: Listening on systemd-journald.socket. Aug 13 00:50:56.087311 systemd[1]: Listening on systemd-networkd.socket. Aug 13 00:50:56.087329 systemd[1]: Listening on systemd-udevd-control.socket. Aug 13 00:50:56.087347 systemd[1]: Listening on systemd-udevd-kernel.socket. Aug 13 00:50:56.087364 systemd[1]: Reached target sockets.target. Aug 13 00:50:56.087386 systemd[1]: Starting kmod-static-nodes.service... Aug 13 00:50:56.087404 systemd[1]: Finished network-cleanup.service. Aug 13 00:50:56.087426 systemd[1]: Starting systemd-fsck-usr.service... Aug 13 00:50:56.087448 systemd[1]: Starting systemd-journald.service... Aug 13 00:50:56.087467 systemd[1]: Starting systemd-modules-load.service... Aug 13 00:50:56.087488 systemd[1]: Starting systemd-resolved.service... Aug 13 00:50:56.087505 systemd[1]: Starting systemd-vconsole-setup.service... Aug 13 00:50:56.087523 systemd[1]: Finished kmod-static-nodes.service. 
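
systemd reports "Detected virtualization kvm" above, consistent with the DMI strings logged earlier ("DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017"). Those strings are readable from the standard /sys/class/dmi/id files (a minimal sketch; the expected values are taken from the DMI line in this log):

    # Read the SMBIOS/DMI strings that identify this VM as a DigitalOcean droplet.
    from pathlib import Path

    dmi = Path("/sys/class/dmi/id")
    for name in ("sys_vendor", "product_name", "bios_date"):
        path = dmi / name
        if path.exists():
            print(f"{name}: {path.read_text().strip()}")
    # expected on this host: DigitalOcean / Droplet / 12/12/2017
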
Aug 13 00:50:56.087542 kernel: audit: type=1130 audit(1755046256.045:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:50:56.087562 systemd[1]: Finished systemd-fsck-usr.service. Aug 13 00:50:56.087580 kernel: audit: type=1130 audit(1755046256.049:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:50:56.087598 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Aug 13 00:50:56.087641 systemd-journald[183]: Journal started Aug 13 00:50:56.087790 systemd-journald[183]: Runtime Journal (/run/log/journal/e913d24d669b43758c299700ec58ab33) is 4.9M, max 39.5M, 34.5M free. Aug 13 00:50:56.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:50:56.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:50:56.074756 systemd-modules-load[184]: Inserted module 'overlay' Aug 13 00:50:56.098505 systemd-resolved[185]: Positive Trust Anchors: Aug 13 00:50:56.098525 systemd-resolved[185]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 00:50:56.121223 systemd[1]: Started systemd-journald.service. Aug 13 00:50:56.098583 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Aug 13 00:50:56.131407 kernel: audit: type=1130 audit(1755046256.121:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:50:56.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:50:56.103538 systemd-resolved[185]: Defaulting to hostname 'linux'. Aug 13 00:50:56.122114 systemd[1]: Started systemd-resolved.service. Aug 13 00:50:56.142117 kernel: audit: type=1130 audit(1755046256.126:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:50:56.142172 kernel: audit: type=1130 audit(1755046256.127:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:50:56.142186 kernel: audit: type=1130 audit(1755046256.128:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:50:56.142199 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 13 00:50:56.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:50:56.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:50:56.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:50:56.127695 systemd[1]: Finished systemd-vconsole-setup.service. Aug 13 00:50:56.128376 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Aug 13 00:50:56.128894 systemd[1]: Reached target nss-lookup.target. Aug 13 00:50:56.130755 systemd[1]: Starting dracut-cmdline-ask.service... Aug 13 00:50:56.157323 kernel: Bridge firewalling registered Aug 13 00:50:56.153866 systemd-modules-load[184]: Inserted module 'br_netfilter' Aug 13 00:50:56.168321 systemd[1]: Finished dracut-cmdline-ask.service. Aug 13 00:50:56.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:50:56.173015 systemd[1]: Starting dracut-cmdline.service... Aug 13 00:50:56.178841 kernel: audit: type=1130 audit(1755046256.168:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:50:56.185312 kernel: SCSI subsystem initialized Aug 13 00:50:56.192130 dracut-cmdline[201]: dracut-dracut-053 Aug 13 00:50:56.197700 dracut-cmdline[201]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8f8aacd9fbcdd713563d390e899e90bedf5577e4b1b261b4e57687d87edd6b57 Aug 13 00:50:56.211549 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 13 00:50:56.211648 kernel: device-mapper: uevent: version 1.0.3 Aug 13 00:50:56.213158 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Aug 13 00:50:56.217380 systemd-modules-load[184]: Inserted module 'dm_multipath' Aug 13 00:50:56.218615 systemd[1]: Finished systemd-modules-load.service. Aug 13 00:50:56.225172 kernel: audit: type=1130 audit(1755046256.220:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:50:56.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:50:56.221751 systemd[1]: Starting systemd-sysctl.service... 
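
The bridge module warns above that arp/ip/ip6tables filtering now requires br_netfilter to be loaded explicitly, and the following "Bridge firewalling registered" line shows systemd-modules-load inserting it. A minimal sketch for checking the module and the bridge-nf sysctls it exposes, using standard /proc paths:

    from pathlib import Path

    # br_netfilter shows up in /proc/modules once inserted, and only then do the
    # net.bridge.bridge-nf-call-* sysctls appear under /proc/sys.
    loaded = any(line.split()[0] == "br_netfilter"
                 for line in Path("/proc/modules").read_text().splitlines())
    print(f"br_netfilter loaded: {loaded}")

    sysctl = Path("/proc/sys/net/bridge/bridge-nf-call-iptables")
    if sysctl.exists():
        print(f"bridge-nf-call-iptables = {sysctl.read_text().strip()}")
    else:
        print("bridge-nf-call-iptables sysctl not present (module not loaded)")
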
Aug 13 00:50:56.236885 systemd[1]: Finished systemd-sysctl.service. Aug 13 00:50:56.241628 kernel: audit: type=1130 audit(1755046256.237:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:50:56.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:50:56.315339 kernel: Loading iSCSI transport class v2.0-870. Aug 13 00:50:56.338332 kernel: iscsi: registered transport (tcp) Aug 13 00:50:56.370321 kernel: iscsi: registered transport (qla4xxx) Aug 13 00:50:56.370430 kernel: QLogic iSCSI HBA Driver Aug 13 00:50:56.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:50:56.432915 systemd[1]: Finished dracut-cmdline.service. Aug 13 00:50:56.435042 systemd[1]: Starting dracut-pre-udev.service... Aug 13 00:50:56.501367 kernel: raid6: avx2x4 gen() 14162 MB/s Aug 13 00:50:56.518420 kernel: raid6: avx2x4 xor() 7750 MB/s Aug 13 00:50:56.535345 kernel: raid6: avx2x2 gen() 14040 MB/s Aug 13 00:50:56.552388 kernel: raid6: avx2x2 xor() 15253 MB/s Aug 13 00:50:56.569350 kernel: raid6: avx2x1 gen() 10530 MB/s Aug 13 00:50:56.586361 kernel: raid6: avx2x1 xor() 13796 MB/s Aug 13 00:50:56.603359 kernel: raid6: sse2x4 gen() 10634 MB/s Aug 13 00:50:56.620367 kernel: raid6: sse2x4 xor() 5785 MB/s Aug 13 00:50:56.637357 kernel: raid6: sse2x2 gen() 9994 MB/s Aug 13 00:50:56.654361 kernel: raid6: sse2x2 xor() 7011 MB/s Aug 13 00:50:56.671373 kernel: raid6: sse2x1 gen() 9039 MB/s Aug 13 00:50:56.688966 kernel: raid6: sse2x1 xor() 5210 MB/s Aug 13 00:50:56.689055 kernel: raid6: using algorithm avx2x4 gen() 14162 MB/s Aug 13 00:50:56.689069 kernel: raid6: .... xor() 7750 MB/s, rmw enabled Aug 13 00:50:56.689665 kernel: raid6: using avx2x2 recovery algorithm Aug 13 00:50:56.707341 kernel: xor: automatically using best checksumming function avx Aug 13 00:50:56.838744 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Aug 13 00:50:56.856541 systemd[1]: Finished dracut-pre-udev.service. Aug 13 00:50:56.856000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:50:56.857000 audit: BPF prog-id=7 op=LOAD Aug 13 00:50:56.857000 audit: BPF prog-id=8 op=LOAD Aug 13 00:50:56.858796 systemd[1]: Starting systemd-udevd.service... Aug 13 00:50:56.876711 systemd-udevd[383]: Using default interface naming scheme 'v252'. Aug 13 00:50:56.883985 systemd[1]: Started systemd-udevd.service. Aug 13 00:50:56.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:50:56.888952 systemd[1]: Starting dracut-pre-trigger.service... Aug 13 00:50:56.912507 dracut-pre-trigger[394]: rd.md=0: removing MD RAID activation Aug 13 00:50:56.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:50:56.970371 systemd[1]: Finished dracut-pre-trigger.service. 
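
The raid6 lines above benchmark each gen()/xor() implementation and pick the fastest, avx2x4 at 14162 MB/s here. An illustrative sketch that reproduces the selection from captured dmesg text (the sample string is a hand-copied excerpt of the lines above, not a kernel interface):

    import re

    # Pick the fastest raid6 gen() implementation from captured dmesg lines,
    # mirroring the kernel's "using algorithm avx2x4 gen() 14162 MB/s" choice.
    dmesg = """\
    raid6: avx2x4 gen() 14162 MB/s
    raid6: avx2x2 gen() 14040 MB/s
    raid6: avx2x1 gen() 10530 MB/s
    raid6: sse2x4 gen() 10634 MB/s
    raid6: sse2x2 gen() 9994 MB/s
    raid6: sse2x1 gen() 9039 MB/s
    """

    results = re.findall(r"raid6: (\S+) gen\(\) (\d+) MB/s", dmesg)
    best = max(results, key=lambda r: int(r[1]))
    print(f"best gen(): {best[0]} at {best[1]} MB/s")  # -> avx2x4 at 14162 MB/s
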
Aug 13 00:50:56.973739 systemd[1]: Starting systemd-udev-trigger.service... Aug 13 00:50:57.055132 systemd[1]: Finished systemd-udev-trigger.service. Aug 13 00:50:57.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:50:57.140338 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Aug 13 00:50:57.211666 kernel: scsi host0: Virtio SCSI HBA Aug 13 00:50:57.211885 kernel: cryptd: max_cpu_qlen set to 1000 Aug 13 00:50:57.211917 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Aug 13 00:50:57.211936 kernel: GPT:9289727 != 125829119 Aug 13 00:50:57.211952 kernel: GPT:Alternate GPT header not at the end of the disk. Aug 13 00:50:57.211968 kernel: GPT:9289727 != 125829119 Aug 13 00:50:57.211984 kernel: GPT: Use GNU Parted to correct GPT errors. Aug 13 00:50:57.211999 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 00:50:57.217359 kernel: virtio_blk virtio5: [vdb] 976 512-byte logical blocks (500 kB/488 KiB) Aug 13 00:50:57.263308 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (433) Aug 13 00:50:57.280945 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Aug 13 00:50:57.312566 kernel: AVX2 version of gcm_enc/dec engaged. Aug 13 00:50:57.312593 kernel: AES CTR mode by8 optimization enabled Aug 13 00:50:57.315526 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Aug 13 00:50:57.316145 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Aug 13 00:50:57.328953 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Aug 13 00:50:57.331309 kernel: libata version 3.00 loaded. Aug 13 00:50:57.331999 systemd[1]: Starting disk-uuid.service... Aug 13 00:50:57.338887 kernel: ACPI: bus type USB registered Aug 13 00:50:57.338977 kernel: usbcore: registered new interface driver usbfs Aug 13 00:50:57.338995 kernel: usbcore: registered new interface driver hub Aug 13 00:50:57.339506 kernel: usbcore: registered new device driver usb Aug 13 00:50:57.342888 disk-uuid[459]: Primary Header is updated. Aug 13 00:50:57.342888 disk-uuid[459]: Secondary Entries is updated. Aug 13 00:50:57.342888 disk-uuid[459]: Secondary Header is updated. Aug 13 00:50:57.356494 kernel: ata_piix 0000:00:01.1: version 2.13 Aug 13 00:50:57.385168 kernel: ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver Aug 13 00:50:57.385191 kernel: scsi host1: ata_piix Aug 13 00:50:57.385416 kernel: ehci-pci: EHCI PCI platform driver Aug 13 00:50:57.385431 kernel: scsi host2: ata_piix Aug 13 00:50:57.385557 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Aug 13 00:50:57.385570 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Aug 13 00:50:57.360556 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
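
The GPT warnings above ("Alternate GPT header not at the end of the disk", 9289727 != 125829119) are the usual sign of an image written for a smaller disk than the attached 64.4 GB volume: the backup header sits where the image ended, not at the last LBA. A worked check using the sector counts from the virtio_blk and GPT lines above (512-byte sectors):

    # Sector counts taken from the virtio_blk and GPT lines above.
    SECTOR = 512
    disk_sectors = 125829120          # [vda] 125829120 512-byte logical blocks
    backup_hdr_lba = 9289727          # where the alternate GPT header actually is

    print(f"disk size : {disk_sectors * SECTOR / 1e9:.1f} GB")          # ~64.4 GB
    print(f"image size: {(backup_hdr_lba + 1) * SECTOR / 1e9:.2f} GB")  # ~4.76 GB
    print(f"expected backup header LBA: {disk_sectors - 1}")            # 125829119

Relocating the backup header to the end of the disk (for example with parted, as the kernel suggests, or sgdisk -e) clears the warning; on Flatcar the root partition and filesystem are normally grown on first boot anyway.
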
Aug 13 00:50:57.399306 kernel: uhci_hcd: USB Universal Host Controller Interface driver Aug 13 00:50:57.448421 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Aug 13 00:50:57.452602 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Aug 13 00:50:57.452774 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Aug 13 00:50:57.452903 kernel: uhci_hcd 0000:00:01.2: irq 11, io base 0x0000c180 Aug 13 00:50:57.453062 kernel: hub 1-0:1.0: USB hub found Aug 13 00:50:57.453241 kernel: hub 1-0:1.0: 2 ports detected Aug 13 00:50:58.360330 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 00:50:58.360427 disk-uuid[461]: The operation has completed successfully. Aug 13 00:50:58.422036 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 13 00:50:58.422248 systemd[1]: Finished disk-uuid.service. Aug 13 00:50:58.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:50:58.422000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:50:58.424169 systemd[1]: Starting verity-setup.service... Aug 13 00:50:58.451570 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Aug 13 00:50:58.518273 systemd[1]: Found device dev-mapper-usr.device. Aug 13 00:50:58.521627 systemd[1]: Mounting sysusr-usr.mount... Aug 13 00:50:58.523538 systemd[1]: Finished verity-setup.service. Aug 13 00:50:58.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:50:58.641773 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Aug 13 00:50:58.643461 systemd[1]: Mounted sysusr-usr.mount. Aug 13 00:50:58.645121 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Aug 13 00:50:58.647847 systemd[1]: Starting ignition-setup.service... Aug 13 00:50:58.651066 systemd[1]: Starting parse-ip-for-networkd.service... Aug 13 00:50:58.674552 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 00:50:58.674660 kernel: BTRFS info (device vda6): using free space tree Aug 13 00:50:58.674680 kernel: BTRFS info (device vda6): has skinny extents Aug 13 00:50:58.695949 systemd[1]: mnt-oem.mount: Deactivated successfully. Aug 13 00:50:58.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:50:58.707720 systemd[1]: Finished ignition-setup.service. Aug 13 00:50:58.711292 systemd[1]: Starting ignition-fetch-offline.service... Aug 13 00:50:58.891006 systemd[1]: Finished parse-ip-for-networkd.service. Aug 13 00:50:58.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:50:58.893000 audit: BPF prog-id=9 op=LOAD Aug 13 00:50:58.894704 systemd[1]: Starting systemd-networkd.service... 
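
verity-setup above brings up /dev/mapper/usr from the verity.usr* parameters that Flatcar passes on the kernel command line (the same line shown at the top of this log). A minimal sketch for pulling those parameters back out of /proc/cmdline:

    # Extract the dm-verity and usr-mount parameters from the kernel command line.
    params = {}
    with open("/proc/cmdline") as f:
        for token in f.read().split():
            if token.startswith(("verity.usr", "mount.usr", "root=")):
                key, _, value = token.partition("=")
                params[key] = value

    for key, value in params.items():
        print(f"{key} = {value}")
    # e.g. verity.usr = PARTUUID=7130c94a-..., verity.usrhash = 8f8aacd9...d6b57
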
Aug 13 00:50:58.921479 ignition[602]: Ignition 2.14.0 Aug 13 00:50:58.922803 ignition[602]: Stage: fetch-offline Aug 13 00:50:58.922983 ignition[602]: reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:50:58.923033 ignition[602]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Aug 13 00:50:58.931791 ignition[602]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 13 00:50:58.933799 ignition[602]: parsed url from cmdline: "" Aug 13 00:50:58.934117 ignition[602]: no config URL provided Aug 13 00:50:58.934725 ignition[602]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 00:50:58.935512 ignition[602]: no config at "/usr/lib/ignition/user.ign" Aug 13 00:50:58.936077 ignition[602]: failed to fetch config: resource requires networking Aug 13 00:50:58.937297 ignition[602]: Ignition finished successfully Aug 13 00:50:58.939913 systemd[1]: Finished ignition-fetch-offline.service. Aug 13 00:50:58.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:50:58.944596 systemd-networkd[688]: lo: Link UP Aug 13 00:50:58.944624 systemd-networkd[688]: lo: Gained carrier Aug 13 00:50:58.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:50:58.945701 systemd-networkd[688]: Enumeration completed Aug 13 00:50:58.945929 systemd[1]: Started systemd-networkd.service. Aug 13 00:50:58.946792 systemd-networkd[688]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:50:58.947605 systemd[1]: Reached target network.target. Aug 13 00:50:58.950262 systemd[1]: Starting ignition-fetch.service... Aug 13 00:50:58.951937 systemd-networkd[688]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Aug 13 00:50:58.953426 systemd-networkd[688]: eth1: Link UP Aug 13 00:50:58.953434 systemd-networkd[688]: eth1: Gained carrier Aug 13 00:50:58.960619 systemd[1]: Starting iscsiuio.service... 
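
Ignition logs the SHA512 of each config it parses; the base.ign digest beginning 865c03ba appears above. Assuming the digest is taken over the raw file bytes, as the back-to-back "reading system config file" / "parsing config with SHA512" lines suggest, it can be reproduced with hashlib (a minimal sketch):

    import hashlib

    # Hash the built-in base config the same way Ignition reports it above.
    path = "/usr/lib/ignition/base.d/base.ign"
    with open(path, "rb") as f:
        digest = hashlib.sha512(f.read()).hexdigest()

    print(f"{path}: {digest}")
    # should start with 865c03ba... if this is the config Ignition parsed
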
Aug 13 00:50:58.971107 ignition[690]: Ignition 2.14.0 Aug 13 00:50:58.975921 systemd-networkd[688]: eth0: Link UP Aug 13 00:50:58.971120 ignition[690]: Stage: fetch Aug 13 00:50:58.975936 systemd-networkd[688]: eth0: Gained carrier Aug 13 00:50:58.971387 ignition[690]: reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:50:58.971423 ignition[690]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Aug 13 00:50:58.974777 ignition[690]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 13 00:50:58.974978 ignition[690]: parsed url from cmdline: "" Aug 13 00:50:58.974984 ignition[690]: no config URL provided Aug 13 00:50:58.974994 ignition[690]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 00:50:58.975010 ignition[690]: no config at "/usr/lib/ignition/user.ign" Aug 13 00:50:58.975063 ignition[690]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Aug 13 00:50:58.991664 ignition[690]: GET error: Get "http://169.254.169.254/metadata/v1/user-data": dial tcp 169.254.169.254:80: connect: network is unreachable Aug 13 00:50:58.993729 systemd[1]: Started iscsiuio.service. Aug 13 00:50:58.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:50:58.996386 systemd[1]: Starting iscsid.service... Aug 13 00:50:58.997237 systemd-networkd[688]: eth1: DHCPv4 address 10.124.0.11/20 acquired from 169.254.169.253 Aug 13 00:50:59.003225 iscsid[698]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Aug 13 00:50:59.003225 iscsid[698]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Aug 13 00:50:59.003225 iscsid[698]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Aug 13 00:50:59.003225 iscsid[698]: If using hardware iscsi like qla4xxx this message can be ignored. Aug 13 00:50:59.003225 iscsid[698]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Aug 13 00:50:59.009056 iscsid[698]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Aug 13 00:50:59.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:50:59.007129 systemd[1]: Started iscsid.service. Aug 13 00:50:59.009548 systemd[1]: Starting dracut-initqueue.service... Aug 13 00:50:59.010526 systemd-networkd[688]: eth0: DHCPv4 address 143.198.60.143/20, gateway 143.198.48.1 acquired from 169.254.169.253 Aug 13 00:50:59.036702 systemd[1]: Finished dracut-initqueue.service. Aug 13 00:50:59.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:50:59.037554 systemd[1]: Reached target remote-fs-pre.target. Aug 13 00:50:59.038331 systemd[1]: Reached target remote-cryptsetup.target. 
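
iscsid warns above that /etc/iscsi/initiatorname.iscsi is missing and spells out the expected InitiatorName=iqn.yyyy-mm.... format. An illustrative sketch for generating and writing a name in that shape; the domain and identifier below are made up for the example:

    import uuid

    # Build an iSCSI qualified name in the form the iscsid warning above asks for:
    # InitiatorName=iqn.yyyy-mm.<reversed domain>[:identifier]
    iqn = f"iqn.2025-08.com.example:{uuid.uuid4().hex[:12]}"
    print(f"InitiatorName={iqn}")

    # Writing it out (requires root on a real system):
    # with open("/etc/iscsi/initiatorname.iscsi", "w") as f:
    #     f.write(f"InitiatorName={iqn}\n")
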
Aug 13 00:50:59.039215 systemd[1]: Reached target remote-fs.target. Aug 13 00:50:59.041338 systemd[1]: Starting dracut-pre-mount.service... Aug 13 00:50:59.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:50:59.059303 systemd[1]: Finished dracut-pre-mount.service. Aug 13 00:50:59.193686 ignition[690]: GET http://169.254.169.254/metadata/v1/user-data: attempt #2 Aug 13 00:50:59.219747 ignition[690]: GET result: OK Aug 13 00:50:59.219971 ignition[690]: parsing config with SHA512: 3d8c358267782f042bb0a795260ebdbf02926919acf3090e69e309c18d546b39c1b7c24d3e76358712039057d4ea41cdfc0db6f117e84c95d49b941c1b6361a0 Aug 13 00:50:59.233580 unknown[690]: fetched base config from "system" Aug 13 00:50:59.233600 unknown[690]: fetched base config from "system" Aug 13 00:50:59.234469 ignition[690]: fetch: fetch complete Aug 13 00:50:59.233609 unknown[690]: fetched user config from "digitalocean" Aug 13 00:50:59.234479 ignition[690]: fetch: fetch passed Aug 13 00:50:59.236000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:50:59.236413 systemd[1]: Finished ignition-fetch.service. Aug 13 00:50:59.234575 ignition[690]: Ignition finished successfully Aug 13 00:50:59.238901 systemd[1]: Starting ignition-kargs.service... Aug 13 00:50:59.262572 ignition[713]: Ignition 2.14.0 Aug 13 00:50:59.263673 ignition[713]: Stage: kargs Aug 13 00:50:59.264528 ignition[713]: reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:50:59.265569 ignition[713]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Aug 13 00:50:59.268810 ignition[713]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 13 00:50:59.273205 ignition[713]: kargs: kargs passed Aug 13 00:50:59.275561 ignition[713]: Ignition finished successfully Aug 13 00:50:59.277587 systemd[1]: Finished ignition-kargs.service. Aug 13 00:50:59.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:50:59.279865 systemd[1]: Starting ignition-disks.service... Aug 13 00:50:59.293978 ignition[719]: Ignition 2.14.0 Aug 13 00:50:59.294970 ignition[719]: Stage: disks Aug 13 00:50:59.295773 ignition[719]: reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:50:59.296442 ignition[719]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Aug 13 00:50:59.300196 ignition[719]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 13 00:50:59.303395 ignition[719]: disks: disks passed Aug 13 00:50:59.303536 ignition[719]: Ignition finished successfully Aug 13 00:50:59.305547 systemd[1]: Finished ignition-disks.service. Aug 13 00:50:59.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:50:59.306334 systemd[1]: Reached target initrd-root-device.target. 
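
The fetch stage above retries GET http://169.254.169.254/metadata/v1/user-data until networking is up: attempt #1 fails with "network is unreachable", attempt #2 returns OK once DHCP has configured eth0. A minimal retry loop against the same link-local endpoint (a sketch, not Ignition's actual implementation):

    import time
    import urllib.error
    import urllib.request

    # DigitalOcean's link-local metadata endpoint, as seen in the Ignition log above.
    URL = "http://169.254.169.254/metadata/v1/user-data"

    def fetch_user_data(attempts=5, delay=2.0):
        for attempt in range(1, attempts + 1):
            try:
                with urllib.request.urlopen(URL, timeout=5) as resp:
                    return resp.read().decode()
            except (urllib.error.URLError, OSError) as exc:
                print(f"attempt #{attempt} failed: {exc}")
                time.sleep(delay)
        return None

    if __name__ == "__main__":
        data = fetch_user_data()
        print(data if data is not None else "no user-data available")
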
Aug 13 00:50:59.307012 systemd[1]: Reached target local-fs-pre.target. Aug 13 00:50:59.307968 systemd[1]: Reached target local-fs.target. Aug 13 00:50:59.308808 systemd[1]: Reached target sysinit.target. Aug 13 00:50:59.309633 systemd[1]: Reached target basic.target. Aug 13 00:50:59.312238 systemd[1]: Starting systemd-fsck-root.service... Aug 13 00:50:59.337410 systemd-fsck[727]: ROOT: clean, 629/553520 files, 56027/553472 blocks Aug 13 00:50:59.340625 systemd[1]: Finished systemd-fsck-root.service. Aug 13 00:50:59.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:50:59.342761 systemd[1]: Mounting sysroot.mount... Aug 13 00:50:59.355379 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Aug 13 00:50:59.356459 systemd[1]: Mounted sysroot.mount. Aug 13 00:50:59.357762 systemd[1]: Reached target initrd-root-fs.target. Aug 13 00:50:59.361222 systemd[1]: Mounting sysroot-usr.mount... Aug 13 00:50:59.363838 systemd[1]: Starting flatcar-digitalocean-network.service... Aug 13 00:50:59.367089 systemd[1]: Starting flatcar-metadata-hostname.service... Aug 13 00:50:59.368260 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 13 00:50:59.369172 systemd[1]: Reached target ignition-diskful.target. Aug 13 00:50:59.372494 systemd[1]: Mounted sysroot-usr.mount. Aug 13 00:50:59.375107 systemd[1]: Starting initrd-setup-root.service... Aug 13 00:50:59.387263 initrd-setup-root[739]: cut: /sysroot/etc/passwd: No such file or directory Aug 13 00:50:59.406613 initrd-setup-root[747]: cut: /sysroot/etc/group: No such file or directory Aug 13 00:50:59.420808 initrd-setup-root[755]: cut: /sysroot/etc/shadow: No such file or directory Aug 13 00:50:59.436613 initrd-setup-root[765]: cut: /sysroot/etc/gshadow: No such file or directory Aug 13 00:50:59.542921 systemd[1]: Finished initrd-setup-root.service. Aug 13 00:50:59.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:50:59.545205 systemd[1]: Starting ignition-mount.service... Aug 13 00:50:59.547383 systemd[1]: Starting sysroot-boot.service... Aug 13 00:50:59.551719 systemd[1]: Mounting sysroot-usr-share-oem.mount... Aug 13 00:50:59.558040 coreos-metadata[734]: Aug 13 00:50:59.556 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Aug 13 00:50:59.570377 bash[785]: umount: /sysroot/usr/share/oem: not mounted. Aug 13 00:50:59.580776 coreos-metadata[734]: Aug 13 00:50:59.577 INFO Fetch successful Aug 13 00:50:59.581432 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (784) Aug 13 00:50:59.584756 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 00:50:59.584844 kernel: BTRFS info (device vda6): using free space tree Aug 13 00:50:59.584859 kernel: BTRFS info (device vda6): has skinny extents Aug 13 00:50:59.589943 coreos-metadata[734]: Aug 13 00:50:59.589 INFO wrote hostname ci-3510.3.8-a-e4f4484119 to /sysroot/etc/hostname Aug 13 00:50:59.595756 systemd[1]: Finished flatcar-metadata-hostname.service. 
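flatcar-metadata-hostname fetches the droplet metadata document and writes the hostname into the target root before the files stage runs, which is where the ci-3510.3.8-a-e4f4484119 name above comes from. A small sketch of the same idea (the v1.json URL and the /sysroot/etc/hostname destination come from the log; treating "hostname" as the relevant metadata key is an assumption):

    # Sketch: fetch the DigitalOcean droplet metadata and persist the hostname
    # under the target root, as flatcar-metadata-hostname does above.
    import json
    import urllib.request
    from pathlib import Path

    METADATA_URL = "http://169.254.169.254/metadata/v1.json"

    def write_hostname(sysroot: str = "/sysroot") -> str:
        with urllib.request.urlopen(METADATA_URL, timeout=10) as resp:
            metadata = json.load(resp)
        hostname = metadata["hostname"]  # assumed metadata field
        etc = Path(sysroot, "etc")
        etc.mkdir(parents=True, exist_ok=True)
        (etc / "hostname").write_text(hostname + "\n")
        return hostname

    if __name__ == "__main__":
        # Use the current directory as a stand-in sysroot for the demo.
        print("wrote hostname", write_hostname(sysroot="."))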
Aug 13 00:50:59.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:50:59.609140 coreos-metadata[733]: Aug 13 00:50:59.609 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Aug 13 00:50:59.614763 ignition[787]: INFO : Ignition 2.14.0 Aug 13 00:50:59.614763 ignition[787]: INFO : Stage: mount Aug 13 00:50:59.616222 ignition[787]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:50:59.616222 ignition[787]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Aug 13 00:50:59.619021 ignition[787]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 13 00:50:59.626872 coreos-metadata[733]: Aug 13 00:50:59.626 INFO Fetch successful Aug 13 00:50:59.627116 systemd[1]: Mounted sysroot-usr-share-oem.mount. Aug 13 00:50:59.635381 systemd[1]: Finished sysroot-boot.service. Aug 13 00:50:59.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:50:59.642039 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Aug 13 00:50:59.642211 systemd[1]: Finished flatcar-digitalocean-network.service. Aug 13 00:50:59.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-digitalocean-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:50:59.643000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-digitalocean-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:50:59.644264 ignition[787]: INFO : mount: mount passed Aug 13 00:50:59.644264 ignition[787]: INFO : Ignition finished successfully Aug 13 00:50:59.645340 systemd[1]: Finished ignition-mount.service. Aug 13 00:50:59.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:50:59.660396 systemd[1]: Starting ignition-files.service... 
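The files stage that starts here downloads remote artifacts (the Helm and cilium-cli tarballs logged below) and writes them, along with users, SSH keys and unit presets, under /sysroot. A bare-bones sketch of that download-and-place pattern (URL and destination mirror op(3) below; real Ignition additionally handles verification, file modes and SELinux relabeling):

    # Sketch of the files stage's basic operation: stream a remote artifact to
    # a path under the target root.
    import shutil
    import urllib.request
    from pathlib import Path

    def fetch_file(url: str, dest: Path) -> None:
        dest.parent.mkdir(parents=True, exist_ok=True)
        with urllib.request.urlopen(url, timeout=30) as resp, open(dest, "wb") as out:
            shutil.copyfileobj(resp, out)

    if __name__ == "__main__":
        # Mirrors op(3) in the log; writes under a local "sysroot" directory
        # instead of the real /sysroot so the demo does not need root.
        fetch_file(
            "https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz",
            Path("sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"),
        )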
Aug 13 00:50:59.694155 ignition[814]: INFO : Ignition 2.14.0 Aug 13 00:50:59.694155 ignition[814]: INFO : Stage: files Aug 13 00:50:59.695778 ignition[814]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:50:59.695778 ignition[814]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Aug 13 00:50:59.697759 ignition[814]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 13 00:50:59.699895 ignition[814]: DEBUG : files: compiled without relabeling support, skipping Aug 13 00:50:59.701674 ignition[814]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 13 00:50:59.701674 ignition[814]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 13 00:50:59.706426 ignition[814]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 13 00:50:59.707972 ignition[814]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 13 00:50:59.710499 ignition[814]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 13 00:50:59.709310 unknown[814]: wrote ssh authorized keys file for user: core Aug 13 00:50:59.713019 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Aug 13 00:50:59.713019 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Aug 13 00:50:59.780676 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Aug 13 00:50:59.927360 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Aug 13 00:50:59.928567 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 13 00:50:59.928567 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Aug 13 00:50:59.971712 systemd-networkd[688]: eth1: Gained IPv6LL Aug 13 00:51:00.139342 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Aug 13 00:51:00.245929 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 13 00:51:00.247139 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Aug 13 00:51:00.248643 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Aug 13 00:51:00.249612 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 13 00:51:00.250523 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 13 00:51:00.250523 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 00:51:00.250523 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 00:51:00.250523 ignition[814]: INFO : 
files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 00:51:00.250523 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 00:51:00.255287 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 00:51:00.255287 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 00:51:00.255287 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Aug 13 00:51:00.255287 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Aug 13 00:51:00.255287 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Aug 13 00:51:00.255287 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Aug 13 00:51:00.656956 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Aug 13 00:51:00.868727 systemd-networkd[688]: eth0: Gained IPv6LL Aug 13 00:51:01.076570 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Aug 13 00:51:01.077745 ignition[814]: INFO : files: op(c): [started] processing unit "coreos-metadata-sshkeys@.service" Aug 13 00:51:01.078472 ignition[814]: INFO : files: op(c): [finished] processing unit "coreos-metadata-sshkeys@.service" Aug 13 00:51:01.079095 ignition[814]: INFO : files: op(d): [started] processing unit "prepare-helm.service" Aug 13 00:51:01.080130 ignition[814]: INFO : files: op(d): op(e): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 00:51:01.081628 ignition[814]: INFO : files: op(d): op(e): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 00:51:01.082610 ignition[814]: INFO : files: op(d): [finished] processing unit "prepare-helm.service" Aug 13 00:51:01.082610 ignition[814]: INFO : files: op(f): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Aug 13 00:51:01.082610 ignition[814]: INFO : files: op(f): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Aug 13 00:51:01.082610 ignition[814]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Aug 13 00:51:01.086119 ignition[814]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Aug 13 00:51:01.093229 ignition[814]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 13 00:51:01.094764 ignition[814]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 13 00:51:01.094764 ignition[814]: INFO : files: files passed Aug 13 00:51:01.094764 ignition[814]: INFO : Ignition finished successfully Aug 13 00:51:01.096604 systemd[1]: Finished 
ignition-files.service. Aug 13 00:51:01.108694 kernel: kauditd_printk_skb: 29 callbacks suppressed Aug 13 00:51:01.113449 kernel: audit: type=1130 audit(1755046261.096:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:01.113496 kernel: audit: type=1130 audit(1755046261.110:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:01.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:01.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:01.098754 systemd[1]: Starting initrd-setup-root-after-ignition.service... Aug 13 00:51:01.118487 kernel: audit: type=1131 audit(1755046261.113:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:01.113000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:01.101861 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Aug 13 00:51:01.122958 kernel: audit: type=1130 audit(1755046261.118:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:01.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:01.123239 initrd-setup-root-after-ignition[839]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 00:51:01.103537 systemd[1]: Starting ignition-quench.service... Aug 13 00:51:01.109648 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 13 00:51:01.109814 systemd[1]: Finished ignition-quench.service. Aug 13 00:51:01.113995 systemd[1]: Finished initrd-setup-root-after-ignition.service. Aug 13 00:51:01.119567 systemd[1]: Reached target ignition-complete.target. Aug 13 00:51:01.124558 systemd[1]: Starting initrd-parse-etc.service... Aug 13 00:51:01.152719 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 13 00:51:01.160588 kernel: audit: type=1130 audit(1755046261.153:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:01.160635 kernel: audit: type=1131 audit(1755046261.153:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:51:01.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:01.153000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:01.152911 systemd[1]: Finished initrd-parse-etc.service. Aug 13 00:51:01.153742 systemd[1]: Reached target initrd-fs.target. Aug 13 00:51:01.160998 systemd[1]: Reached target initrd.target. Aug 13 00:51:01.162127 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Aug 13 00:51:01.163605 systemd[1]: Starting dracut-pre-pivot.service... Aug 13 00:51:01.187194 systemd[1]: Finished dracut-pre-pivot.service. Aug 13 00:51:01.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:01.189239 systemd[1]: Starting initrd-cleanup.service... Aug 13 00:51:01.193649 kernel: audit: type=1130 audit(1755046261.187:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:01.204336 systemd[1]: Stopped target nss-lookup.target. Aug 13 00:51:01.205707 systemd[1]: Stopped target remote-cryptsetup.target. Aug 13 00:51:01.206894 systemd[1]: Stopped target timers.target. Aug 13 00:51:01.207948 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 13 00:51:01.212006 systemd[1]: Stopped dracut-pre-pivot.service. Aug 13 00:51:01.220343 kernel: audit: type=1131 audit(1755046261.216:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:01.216000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:01.216986 systemd[1]: Stopped target initrd.target. Aug 13 00:51:01.220925 systemd[1]: Stopped target basic.target. Aug 13 00:51:01.221826 systemd[1]: Stopped target ignition-complete.target. Aug 13 00:51:01.222694 systemd[1]: Stopped target ignition-diskful.target. Aug 13 00:51:01.223548 systemd[1]: Stopped target initrd-root-device.target. Aug 13 00:51:01.224644 systemd[1]: Stopped target remote-fs.target. Aug 13 00:51:01.225390 systemd[1]: Stopped target remote-fs-pre.target. Aug 13 00:51:01.226433 systemd[1]: Stopped target sysinit.target. Aug 13 00:51:01.227310 systemd[1]: Stopped target local-fs.target. Aug 13 00:51:01.228008 systemd[1]: Stopped target local-fs-pre.target. Aug 13 00:51:01.228824 systemd[1]: Stopped target swap.target. Aug 13 00:51:01.229616 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 13 00:51:01.230014 systemd[1]: Stopped dracut-pre-mount.service. Aug 13 00:51:01.230000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:01.231192 systemd[1]: Stopped target cryptsetup.target. 
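The preset operations in the files stage above ("setting preset to enabled" for coreos-metadata-sshkeys@.service and prepare-helm.service) boil down to creating enablement symlinks under the target's /etc/systemd/system. A hedged sketch of doing that offline, assuming the units are WantedBy=multi-user.target; their [Install] sections do not appear in the log:

    # Sketch: enable a unit inside the target root by creating the symlink
    # that `systemctl preset`/`enable` would create. WantedBy=multi-user.target
    # is an assumption, not something shown in the log.
    from pathlib import Path

    def enable_unit(sysroot: str, unit: str, wanted_by: str = "multi-user.target") -> None:
        wants_dir = Path(sysroot, "etc/systemd/system", wanted_by + ".wants")
        wants_dir.mkdir(parents=True, exist_ok=True)
        link = wants_dir / unit
        if not link.is_symlink():
            link.symlink_to("/etc/systemd/system/" + unit)

    if __name__ == "__main__":
        # Local "sysroot" directory stands in for the real /sysroot.
        enable_unit("sysroot", "prepare-helm.service")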
Aug 13 00:51:01.234317 kernel: audit: type=1131 audit(1755046261.230:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:01.234217 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 13 00:51:01.234519 systemd[1]: Stopped dracut-initqueue.service. Aug 13 00:51:01.238462 kernel: audit: type=1131 audit(1755046261.235:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:01.235000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:01.235782 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 13 00:51:01.236070 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Aug 13 00:51:01.240000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:01.240794 systemd[1]: ignition-files.service: Deactivated successfully. Aug 13 00:51:01.241582 systemd[1]: Stopped ignition-files.service. Aug 13 00:51:01.242000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:01.242957 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Aug 13 00:51:01.243932 systemd[1]: Stopped flatcar-metadata-hostname.service. Aug 13 00:51:01.244000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:01.247051 systemd[1]: Stopping ignition-mount.service... Aug 13 00:51:01.248530 systemd[1]: Stopping iscsid.service... Aug 13 00:51:01.249425 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 13 00:51:01.250300 systemd[1]: Stopped kmod-static-nodes.service. Aug 13 00:51:01.252764 iscsid[698]: iscsid shutting down. Aug 13 00:51:01.254088 systemd[1]: Stopping sysroot-boot.service... Aug 13 00:51:01.255265 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 13 00:51:01.251000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:01.258939 systemd[1]: Stopped systemd-udev-trigger.service. Aug 13 00:51:01.260521 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 13 00:51:01.261541 systemd[1]: Stopped dracut-pre-trigger.service. Aug 13 00:51:01.260000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:01.262000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:01.265660 systemd[1]: iscsid.service: Deactivated successfully. 
Aug 13 00:51:01.267111 systemd[1]: Stopped iscsid.service. Aug 13 00:51:01.268000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:01.270416 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 13 00:51:01.271515 systemd[1]: Finished initrd-cleanup.service. Aug 13 00:51:01.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:01.272000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:01.280048 systemd[1]: Stopping iscsiuio.service... Aug 13 00:51:01.281662 systemd[1]: iscsiuio.service: Deactivated successfully. Aug 13 00:51:01.282718 systemd[1]: Stopped iscsiuio.service. Aug 13 00:51:01.283000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:01.288399 ignition[852]: INFO : Ignition 2.14.0 Aug 13 00:51:01.288399 ignition[852]: INFO : Stage: umount Aug 13 00:51:01.288399 ignition[852]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:51:01.288399 ignition[852]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Aug 13 00:51:01.291101 ignition[852]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 13 00:51:01.293383 ignition[852]: INFO : umount: umount passed Aug 13 00:51:01.293383 ignition[852]: INFO : Ignition finished successfully Aug 13 00:51:01.294706 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 13 00:51:01.294844 systemd[1]: Stopped ignition-mount.service. Aug 13 00:51:01.298428 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 13 00:51:01.296000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:01.299034 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 13 00:51:01.299114 systemd[1]: Stopped ignition-disks.service. Aug 13 00:51:01.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:01.303401 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 13 00:51:01.304261 systemd[1]: Stopped ignition-kargs.service. Aug 13 00:51:01.305000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:01.305546 systemd[1]: ignition-fetch.service: Deactivated successfully. Aug 13 00:51:01.305633 systemd[1]: Stopped ignition-fetch.service. Aug 13 00:51:01.306000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:01.306934 systemd[1]: Stopped target network.target. 
Aug 13 00:51:01.307750 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 13 00:51:01.308000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:01.307845 systemd[1]: Stopped ignition-fetch-offline.service. Aug 13 00:51:01.308724 systemd[1]: Stopped target paths.target. Aug 13 00:51:01.309690 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 13 00:51:01.313427 systemd[1]: Stopped systemd-ask-password-console.path. Aug 13 00:51:01.314188 systemd[1]: Stopped target slices.target. Aug 13 00:51:01.315215 systemd[1]: Stopped target sockets.target. Aug 13 00:51:01.316177 systemd[1]: iscsid.socket: Deactivated successfully. Aug 13 00:51:01.316255 systemd[1]: Closed iscsid.socket. Aug 13 00:51:01.316854 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 13 00:51:01.318000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:01.316919 systemd[1]: Closed iscsiuio.socket. Aug 13 00:51:01.317603 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 13 00:51:01.317688 systemd[1]: Stopped ignition-setup.service. Aug 13 00:51:01.321000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:01.318919 systemd[1]: Stopping systemd-networkd.service... Aug 13 00:51:01.319637 systemd[1]: Stopping systemd-resolved.service... Aug 13 00:51:01.322000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:01.320722 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 13 00:51:01.320881 systemd[1]: Stopped sysroot-boot.service. Aug 13 00:51:01.328000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:01.322448 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 13 00:51:01.322533 systemd[1]: Stopped initrd-setup-root.service. Aug 13 00:51:01.330000 audit: BPF prog-id=6 op=UNLOAD Aug 13 00:51:01.330000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:01.323523 systemd-networkd[688]: eth1: DHCPv6 lease lost Aug 13 00:51:01.326571 systemd-networkd[688]: eth0: DHCPv6 lease lost Aug 13 00:51:01.331000 audit: BPF prog-id=9 op=UNLOAD Aug 13 00:51:01.326826 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 13 00:51:01.327072 systemd[1]: Stopped systemd-resolved.service. Aug 13 00:51:01.329759 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 13 00:51:01.330046 systemd[1]: Stopped systemd-networkd.service. Aug 13 00:51:01.332007 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 13 00:51:01.338000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Aug 13 00:51:01.332070 systemd[1]: Closed systemd-networkd.socket. Aug 13 00:51:01.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:01.334212 systemd[1]: Stopping network-cleanup.service... Aug 13 00:51:01.340000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:01.335311 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 13 00:51:01.335451 systemd[1]: Stopped parse-ip-for-networkd.service. Aug 13 00:51:01.338934 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 00:51:01.339036 systemd[1]: Stopped systemd-sysctl.service. Aug 13 00:51:01.339997 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 13 00:51:01.340079 systemd[1]: Stopped systemd-modules-load.service. Aug 13 00:51:01.348014 systemd[1]: Stopping systemd-udevd.service... Aug 13 00:51:01.350601 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Aug 13 00:51:01.357980 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 13 00:51:01.358196 systemd[1]: Stopped network-cleanup.service. Aug 13 00:51:01.359000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:01.359882 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 13 00:51:01.360105 systemd[1]: Stopped systemd-udevd.service. Aug 13 00:51:01.360000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:01.361710 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 13 00:51:01.361792 systemd[1]: Closed systemd-udevd-control.socket. Aug 13 00:51:01.363186 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 13 00:51:01.363252 systemd[1]: Closed systemd-udevd-kernel.socket. Aug 13 00:51:01.373000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:01.364229 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 13 00:51:01.374000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:01.364403 systemd[1]: Stopped dracut-pre-udev.service. Aug 13 00:51:01.375000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:01.373569 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 13 00:51:01.373679 systemd[1]: Stopped dracut-cmdline.service. Aug 13 00:51:01.374457 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 00:51:01.374545 systemd[1]: Stopped dracut-cmdline-ask.service. 
Aug 13 00:51:01.378000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:01.376823 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Aug 13 00:51:01.377504 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 00:51:01.377619 systemd[1]: Stopped systemd-vconsole-setup.service. Aug 13 00:51:01.390538 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 13 00:51:01.390708 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Aug 13 00:51:01.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:01.391000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:01.392137 systemd[1]: Reached target initrd-switch-root.target. Aug 13 00:51:01.393800 systemd[1]: Starting initrd-switch-root.service... Aug 13 00:51:01.412418 systemd[1]: Switching root. Aug 13 00:51:01.436452 systemd-journald[183]: Journal stopped Aug 13 00:51:05.790829 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Aug 13 00:51:05.790947 kernel: SELinux: Class mctp_socket not defined in policy. Aug 13 00:51:05.790973 kernel: SELinux: Class anon_inode not defined in policy. Aug 13 00:51:05.790993 kernel: SELinux: the above unknown classes and permissions will be allowed Aug 13 00:51:05.791015 kernel: SELinux: policy capability network_peer_controls=1 Aug 13 00:51:05.791034 kernel: SELinux: policy capability open_perms=1 Aug 13 00:51:05.791054 kernel: SELinux: policy capability extended_socket_class=1 Aug 13 00:51:05.791074 kernel: SELinux: policy capability always_check_network=0 Aug 13 00:51:05.791098 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 13 00:51:05.791124 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 13 00:51:05.791143 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 13 00:51:05.791164 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 13 00:51:05.791183 systemd[1]: Successfully loaded SELinux policy in 53.125ms. Aug 13 00:51:05.791214 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.948ms. Aug 13 00:51:05.791229 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Aug 13 00:51:05.791242 systemd[1]: Detected virtualization kvm. Aug 13 00:51:05.791284 systemd[1]: Detected architecture x86-64. Aug 13 00:51:05.791331 systemd[1]: Detected first boot. Aug 13 00:51:05.791360 systemd[1]: Hostname set to <ci-3510.3.8-a-e4f4484119>. Aug 13 00:51:05.791381 systemd[1]: Initializing machine ID from VM UUID. Aug 13 00:51:05.791401 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Aug 13 00:51:05.791441 systemd[1]: Populated /etc with preset unit settings. Aug 13 00:51:05.791460 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. 
Support for CPUShares= will be removed soon. Aug 13 00:51:05.791480 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 00:51:05.791509 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:51:05.791558 systemd[1]: initrd-switch-root.service: Deactivated successfully. Aug 13 00:51:05.791580 systemd[1]: Stopped initrd-switch-root.service. Aug 13 00:51:05.791601 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Aug 13 00:51:05.791623 systemd[1]: Created slice system-addon\x2dconfig.slice. Aug 13 00:51:05.791643 systemd[1]: Created slice system-addon\x2drun.slice. Aug 13 00:51:05.791660 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Aug 13 00:51:05.791704 systemd[1]: Created slice system-getty.slice. Aug 13 00:51:05.791731 systemd[1]: Created slice system-modprobe.slice. Aug 13 00:51:05.791752 systemd[1]: Created slice system-serial\x2dgetty.slice. Aug 13 00:51:05.791772 systemd[1]: Created slice system-system\x2dcloudinit.slice. Aug 13 00:51:05.791792 systemd[1]: Created slice system-systemd\x2dfsck.slice. Aug 13 00:51:05.791805 systemd[1]: Created slice user.slice. Aug 13 00:51:05.791821 systemd[1]: Started systemd-ask-password-console.path. Aug 13 00:51:05.791841 systemd[1]: Started systemd-ask-password-wall.path. Aug 13 00:51:05.791861 systemd[1]: Set up automount boot.automount. Aug 13 00:51:05.791878 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Aug 13 00:51:05.791898 systemd[1]: Stopped target initrd-switch-root.target. Aug 13 00:51:05.791919 systemd[1]: Stopped target initrd-fs.target. Aug 13 00:51:05.791941 systemd[1]: Stopped target initrd-root-fs.target. Aug 13 00:51:05.791957 systemd[1]: Reached target integritysetup.target. Aug 13 00:51:05.791977 systemd[1]: Reached target remote-cryptsetup.target. Aug 13 00:51:05.791992 systemd[1]: Reached target remote-fs.target. Aug 13 00:51:05.792011 systemd[1]: Reached target slices.target. Aug 13 00:51:05.792045 systemd[1]: Reached target swap.target. Aug 13 00:51:05.792084 systemd[1]: Reached target torcx.target. Aug 13 00:51:05.792106 systemd[1]: Reached target veritysetup.target. Aug 13 00:51:05.792121 systemd[1]: Listening on systemd-coredump.socket. Aug 13 00:51:05.792134 systemd[1]: Listening on systemd-initctl.socket. Aug 13 00:51:05.792150 systemd[1]: Listening on systemd-networkd.socket. Aug 13 00:51:05.792174 systemd[1]: Listening on systemd-udevd-control.socket. Aug 13 00:51:05.792196 systemd[1]: Listening on systemd-udevd-kernel.socket. Aug 13 00:51:05.792213 systemd[1]: Listening on systemd-userdbd.socket. Aug 13 00:51:05.792240 systemd[1]: Mounting dev-hugepages.mount... Aug 13 00:51:05.792261 systemd[1]: Mounting dev-mqueue.mount... Aug 13 00:51:05.792275 systemd[1]: Mounting media.mount... Aug 13 00:51:05.792290 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:51:05.792317 systemd[1]: Mounting sys-kernel-debug.mount... Aug 13 00:51:05.792331 systemd[1]: Mounting sys-kernel-tracing.mount... Aug 13 00:51:05.792349 systemd[1]: Mounting tmp.mount... Aug 13 00:51:05.792369 systemd[1]: Starting flatcar-tmpfiles.service... 
Aug 13 00:51:05.792387 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:51:05.792413 systemd[1]: Starting kmod-static-nodes.service... Aug 13 00:51:05.792435 systemd[1]: Starting modprobe@configfs.service... Aug 13 00:51:05.792453 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:51:05.792467 systemd[1]: Starting modprobe@drm.service... Aug 13 00:51:05.792487 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:51:05.792503 systemd[1]: Starting modprobe@fuse.service... Aug 13 00:51:05.792517 systemd[1]: Starting modprobe@loop.service... Aug 13 00:51:05.792531 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 00:51:05.792549 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 13 00:51:05.792570 systemd[1]: Stopped systemd-fsck-root.service. Aug 13 00:51:05.792585 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 13 00:51:05.792599 systemd[1]: Stopped systemd-fsck-usr.service. Aug 13 00:51:05.792612 systemd[1]: Stopped systemd-journald.service. Aug 13 00:51:05.792628 systemd[1]: Starting systemd-journald.service... Aug 13 00:51:05.792643 systemd[1]: Starting systemd-modules-load.service... Aug 13 00:51:05.792657 systemd[1]: Starting systemd-network-generator.service... Aug 13 00:51:05.792670 systemd[1]: Starting systemd-remount-fs.service... Aug 13 00:51:05.792686 systemd[1]: Starting systemd-udev-trigger.service... Aug 13 00:51:05.792710 systemd[1]: verity-setup.service: Deactivated successfully. Aug 13 00:51:05.792726 systemd[1]: Stopped verity-setup.service. Aug 13 00:51:05.792752 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:51:05.792776 systemd[1]: Mounted dev-hugepages.mount. Aug 13 00:51:05.792795 systemd[1]: Mounted dev-mqueue.mount. Aug 13 00:51:05.792817 systemd[1]: Mounted media.mount. Aug 13 00:51:05.792840 systemd[1]: Mounted sys-kernel-debug.mount. Aug 13 00:51:05.792862 systemd[1]: Mounted sys-kernel-tracing.mount. Aug 13 00:51:05.792882 systemd[1]: Mounted tmp.mount. Aug 13 00:51:05.792909 systemd[1]: Finished kmod-static-nodes.service. Aug 13 00:51:05.792928 kernel: fuse: init (API version 7.34) Aug 13 00:51:05.792948 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 13 00:51:05.792966 systemd[1]: Finished modprobe@configfs.service. Aug 13 00:51:05.792980 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:51:05.792994 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 00:51:05.793015 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:51:05.793040 systemd[1]: Finished modprobe@drm.service. Aug 13 00:51:05.793055 kernel: loop: module loaded Aug 13 00:51:05.793068 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:51:05.793084 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 00:51:05.793106 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 13 00:51:05.793121 systemd[1]: Finished modprobe@fuse.service. Aug 13 00:51:05.793138 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:51:05.793165 systemd[1]: Finished modprobe@loop.service. Aug 13 00:51:05.793181 systemd[1]: Finished systemd-modules-load.service. Aug 13 00:51:05.793196 systemd[1]: Finished systemd-network-generator.service. Aug 13 00:51:05.793210 systemd[1]: Finished systemd-remount-fs.service. 
Aug 13 00:51:05.793224 systemd[1]: Reached target network-pre.target. Aug 13 00:51:05.793237 systemd[1]: Mounting sys-fs-fuse-connections.mount... Aug 13 00:51:05.793260 systemd-journald[954]: Journal started Aug 13 00:51:05.799499 systemd-journald[954]: Runtime Journal (/run/log/journal/e913d24d669b43758c299700ec58ab33) is 4.9M, max 39.5M, 34.5M free. Aug 13 00:51:05.799577 systemd[1]: Mounting sys-kernel-config.mount... Aug 13 00:51:01.613000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 13 00:51:01.690000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Aug 13 00:51:01.690000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Aug 13 00:51:01.690000 audit: BPF prog-id=10 op=LOAD Aug 13 00:51:01.690000 audit: BPF prog-id=10 op=UNLOAD Aug 13 00:51:01.690000 audit: BPF prog-id=11 op=LOAD Aug 13 00:51:01.690000 audit: BPF prog-id=11 op=UNLOAD Aug 13 00:51:01.849000 audit[885]: AVC avc: denied { associate } for pid=885 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Aug 13 00:51:01.849000 audit[885]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d8a2 a1=c0000cedf8 a2=c0000d70c0 a3=32 items=0 ppid=868 pid=885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:51:01.849000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Aug 13 00:51:01.851000 audit[885]: AVC avc: denied { associate } for pid=885 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Aug 13 00:51:01.851000 audit[885]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d989 a2=1ed a3=0 items=2 ppid=868 pid=885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:51:01.851000 audit: CWD cwd="/" Aug 13 00:51:01.851000 audit: PATH item=0 name=(null) inode=2 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:01.851000 audit: PATH item=1 name=(null) inode=3 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:01.851000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Aug 13 00:51:05.532000 
audit: BPF prog-id=12 op=LOAD Aug 13 00:51:05.533000 audit: BPF prog-id=3 op=UNLOAD Aug 13 00:51:05.533000 audit: BPF prog-id=13 op=LOAD Aug 13 00:51:05.533000 audit: BPF prog-id=14 op=LOAD Aug 13 00:51:05.533000 audit: BPF prog-id=4 op=UNLOAD Aug 13 00:51:05.533000 audit: BPF prog-id=5 op=UNLOAD Aug 13 00:51:05.535000 audit: BPF prog-id=15 op=LOAD Aug 13 00:51:05.535000 audit: BPF prog-id=12 op=UNLOAD Aug 13 00:51:05.536000 audit: BPF prog-id=16 op=LOAD Aug 13 00:51:05.536000 audit: BPF prog-id=17 op=LOAD Aug 13 00:51:05.536000 audit: BPF prog-id=13 op=UNLOAD Aug 13 00:51:05.536000 audit: BPF prog-id=14 op=UNLOAD Aug 13 00:51:05.537000 audit: BPF prog-id=18 op=LOAD Aug 13 00:51:05.537000 audit: BPF prog-id=15 op=UNLOAD Aug 13 00:51:05.537000 audit: BPF prog-id=19 op=LOAD Aug 13 00:51:05.537000 audit: BPF prog-id=20 op=LOAD Aug 13 00:51:05.537000 audit: BPF prog-id=16 op=UNLOAD Aug 13 00:51:05.537000 audit: BPF prog-id=17 op=UNLOAD Aug 13 00:51:05.538000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:05.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:05.542000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:05.548000 audit: BPF prog-id=18 op=UNLOAD Aug 13 00:51:05.673000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:05.676000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:05.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:05.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:05.679000 audit: BPF prog-id=21 op=LOAD Aug 13 00:51:05.679000 audit: BPF prog-id=22 op=LOAD Aug 13 00:51:05.680000 audit: BPF prog-id=23 op=LOAD Aug 13 00:51:05.680000 audit: BPF prog-id=19 op=UNLOAD Aug 13 00:51:05.680000 audit: BPF prog-id=20 op=UNLOAD Aug 13 00:51:05.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:05.806566 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 13 00:51:05.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:51:05.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:05.753000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:05.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:05.760000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:05.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:05.764000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:05.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:05.768000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:05.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:05.774000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:05.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:05.777000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:05.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:05.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:51:05.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:05.810340 systemd[1]: Starting systemd-hwdb-update.service... Aug 13 00:51:05.787000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Aug 13 00:51:05.787000 audit[954]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffcbefc2b90 a2=4000 a3=7ffcbefc2c2c items=0 ppid=1 pid=954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:51:05.787000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Aug 13 00:51:05.530059 systemd[1]: Queued start job for default target multi-user.target. Aug 13 00:51:01.845123 /usr/lib/systemd/system-generators/torcx-generator[885]: time="2025-08-13T00:51:01Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 00:51:05.530084 systemd[1]: Unnecessary job was removed for dev-vda6.device. Aug 13 00:51:05.819385 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:51:05.819478 systemd[1]: Starting systemd-random-seed.service... Aug 13 00:51:05.819512 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 00:51:01.845829 /usr/lib/systemd/system-generators/torcx-generator[885]: time="2025-08-13T00:51:01Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Aug 13 00:51:05.538802 systemd[1]: systemd-journald.service: Deactivated successfully. Aug 13 00:51:01.845955 /usr/lib/systemd/system-generators/torcx-generator[885]: time="2025-08-13T00:51:01Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Aug 13 00:51:01.846021 /usr/lib/systemd/system-generators/torcx-generator[885]: time="2025-08-13T00:51:01Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Aug 13 00:51:01.846038 /usr/lib/systemd/system-generators/torcx-generator[885]: time="2025-08-13T00:51:01Z" level=debug msg="skipped missing lower profile" missing profile=oem Aug 13 00:51:01.846096 /usr/lib/systemd/system-generators/torcx-generator[885]: time="2025-08-13T00:51:01Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Aug 13 00:51:01.846120 /usr/lib/systemd/system-generators/torcx-generator[885]: time="2025-08-13T00:51:01Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Aug 13 00:51:05.832320 systemd[1]: Starting systemd-sysctl.service... Aug 13 00:51:05.832417 systemd[1]: Started systemd-journald.service. Aug 13 00:51:05.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:51:01.846435 /usr/lib/systemd/system-generators/torcx-generator[885]: time="2025-08-13T00:51:01Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Aug 13 00:51:05.830974 systemd[1]: Mounted sys-fs-fuse-connections.mount. Aug 13 00:51:01.846504 /usr/lib/systemd/system-generators/torcx-generator[885]: time="2025-08-13T00:51:01Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Aug 13 00:51:05.831666 systemd[1]: Mounted sys-kernel-config.mount. Aug 13 00:51:01.846526 /usr/lib/systemd/system-generators/torcx-generator[885]: time="2025-08-13T00:51:01Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Aug 13 00:51:01.848495 /usr/lib/systemd/system-generators/torcx-generator[885]: time="2025-08-13T00:51:01Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Aug 13 00:51:01.848555 /usr/lib/systemd/system-generators/torcx-generator[885]: time="2025-08-13T00:51:01Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Aug 13 00:51:01.848584 /usr/lib/systemd/system-generators/torcx-generator[885]: time="2025-08-13T00:51:01Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8 Aug 13 00:51:01.848611 /usr/lib/systemd/system-generators/torcx-generator[885]: time="2025-08-13T00:51:01Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Aug 13 00:51:01.848643 /usr/lib/systemd/system-generators/torcx-generator[885]: time="2025-08-13T00:51:01Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8 Aug 13 00:51:01.848665 /usr/lib/systemd/system-generators/torcx-generator[885]: time="2025-08-13T00:51:01Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Aug 13 00:51:05.837221 systemd[1]: Starting systemd-journal-flush.service... 
Aug 13 00:51:04.947709 /usr/lib/systemd/system-generators/torcx-generator[885]: time="2025-08-13T00:51:04Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Aug 13 00:51:04.948326 /usr/lib/systemd/system-generators/torcx-generator[885]: time="2025-08-13T00:51:04Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Aug 13 00:51:04.948563 /usr/lib/systemd/system-generators/torcx-generator[885]: time="2025-08-13T00:51:04Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Aug 13 00:51:04.949013 /usr/lib/systemd/system-generators/torcx-generator[885]: time="2025-08-13T00:51:04Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Aug 13 00:51:04.949136 /usr/lib/systemd/system-generators/torcx-generator[885]: time="2025-08-13T00:51:04Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Aug 13 00:51:04.949266 /usr/lib/systemd/system-generators/torcx-generator[885]: time="2025-08-13T00:51:04Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Aug 13 00:51:05.851919 systemd[1]: Finished systemd-random-seed.service. Aug 13 00:51:05.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:05.852760 systemd[1]: Reached target first-boot-complete.target. Aug 13 00:51:05.860728 systemd-journald[954]: Time spent on flushing to /var/log/journal/e913d24d669b43758c299700ec58ab33 is 123.618ms for 1158 entries. Aug 13 00:51:05.860728 systemd-journald[954]: System Journal (/var/log/journal/e913d24d669b43758c299700ec58ab33) is 8.0M, max 195.6M, 187.6M free. Aug 13 00:51:05.993231 systemd-journald[954]: Received client request to flush runtime journal. Aug 13 00:51:05.889000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:05.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:05.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:51:05.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:05.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:05.888915 systemd[1]: Finished systemd-sysctl.service. Aug 13 00:51:05.914337 systemd[1]: Finished flatcar-tmpfiles.service. Aug 13 00:51:05.917035 systemd[1]: Starting systemd-sysusers.service... Aug 13 00:51:05.973762 systemd[1]: Finished systemd-sysusers.service. Aug 13 00:51:05.977259 systemd[1]: Finished systemd-udev-trigger.service. Aug 13 00:51:05.980520 systemd[1]: Starting systemd-udev-settle.service... Aug 13 00:51:05.994672 systemd[1]: Finished systemd-journal-flush.service. Aug 13 00:51:06.007633 udevadm[993]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Aug 13 00:51:06.720761 systemd[1]: Finished systemd-hwdb-update.service. Aug 13 00:51:06.725092 kernel: kauditd_printk_skb: 107 callbacks suppressed Aug 13 00:51:06.725295 kernel: audit: type=1130 audit(1755046266.721:148): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:06.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:06.728826 kernel: audit: type=1334 audit(1755046266.724:149): prog-id=24 op=LOAD Aug 13 00:51:06.728979 kernel: audit: type=1334 audit(1755046266.724:150): prog-id=25 op=LOAD Aug 13 00:51:06.724000 audit: BPF prog-id=24 op=LOAD Aug 13 00:51:06.724000 audit: BPF prog-id=25 op=LOAD Aug 13 00:51:06.727013 systemd[1]: Starting systemd-udevd.service... Aug 13 00:51:06.724000 audit: BPF prog-id=7 op=UNLOAD Aug 13 00:51:06.730521 kernel: audit: type=1334 audit(1755046266.724:151): prog-id=7 op=UNLOAD Aug 13 00:51:06.724000 audit: BPF prog-id=8 op=UNLOAD Aug 13 00:51:06.731699 kernel: audit: type=1334 audit(1755046266.724:152): prog-id=8 op=UNLOAD Aug 13 00:51:06.754863 systemd-udevd[995]: Using default interface naming scheme 'v252'. Aug 13 00:51:06.793767 systemd[1]: Started systemd-udevd.service. Aug 13 00:51:06.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:06.801409 kernel: audit: type=1130 audit(1755046266.794:153): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:06.800825 systemd[1]: Starting systemd-networkd.service... 
Aug 13 00:51:06.796000 audit: BPF prog-id=26 op=LOAD Aug 13 00:51:06.804304 kernel: audit: type=1334 audit(1755046266.796:154): prog-id=26 op=LOAD Aug 13 00:51:06.815022 kernel: audit: type=1334 audit(1755046266.809:155): prog-id=27 op=LOAD Aug 13 00:51:06.815166 kernel: audit: type=1334 audit(1755046266.810:156): prog-id=28 op=LOAD Aug 13 00:51:06.815205 kernel: audit: type=1334 audit(1755046266.810:157): prog-id=29 op=LOAD Aug 13 00:51:06.809000 audit: BPF prog-id=27 op=LOAD Aug 13 00:51:06.810000 audit: BPF prog-id=28 op=LOAD Aug 13 00:51:06.810000 audit: BPF prog-id=29 op=LOAD Aug 13 00:51:06.813571 systemd[1]: Starting systemd-userdbd.service... Aug 13 00:51:06.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:06.873921 systemd[1]: Started systemd-userdbd.service. Aug 13 00:51:06.898095 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:51:06.898558 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:51:06.900864 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:51:06.905633 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:51:06.908921 systemd[1]: Starting modprobe@loop.service... Aug 13 00:51:06.910450 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 13 00:51:06.910601 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 00:51:06.910769 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:51:06.911729 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:51:06.912093 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 00:51:06.913000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:06.913000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:06.917620 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:51:06.917896 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 00:51:06.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:06.919000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:06.919810 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:51:06.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:51:06.925000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:06.924487 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:51:06.924717 systemd[1]: Finished modprobe@loop.service. Aug 13 00:51:06.925597 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 00:51:06.987982 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Aug 13 00:51:07.039734 systemd-networkd[1002]: lo: Link UP Aug 13 00:51:07.039750 systemd-networkd[1002]: lo: Gained carrier Aug 13 00:51:07.041418 systemd-networkd[1002]: Enumeration completed Aug 13 00:51:07.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:07.041618 systemd-networkd[1002]: eth1: Configuring with /run/systemd/network/10-f6:26:51:5a:8d:8a.network. Aug 13 00:51:07.041655 systemd[1]: Started systemd-networkd.service. Aug 13 00:51:07.044644 systemd-networkd[1002]: eth0: Configuring with /run/systemd/network/10-4e:3b:d7:d0:3c:3d.network. Aug 13 00:51:07.046968 systemd-networkd[1002]: eth1: Link UP Aug 13 00:51:07.046984 systemd-networkd[1002]: eth1: Gained carrier Aug 13 00:51:07.052857 systemd-networkd[1002]: eth0: Link UP Aug 13 00:51:07.052874 systemd-networkd[1002]: eth0: Gained carrier Aug 13 00:51:07.068359 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Aug 13 00:51:07.086313 kernel: ACPI: button: Power Button [PWRF] Aug 13 00:51:07.077000 audit[1001]: AVC avc: denied { confidentiality } for pid=1001 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Aug 13 00:51:07.077000 audit[1001]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=559222c05810 a1=338ac a2=7f56c7fd4bc5 a3=5 items=110 ppid=995 pid=1001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:51:07.077000 audit: CWD cwd="/" Aug 13 00:51:07.077000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=1 name=(null) inode=14344 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=2 name=(null) inode=14344 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=3 name=(null) inode=14345 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=4 name=(null) inode=14344 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=5 name=(null) 
inode=14346 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=6 name=(null) inode=14344 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=7 name=(null) inode=14347 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=8 name=(null) inode=14347 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=9 name=(null) inode=14348 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=10 name=(null) inode=14347 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=11 name=(null) inode=14349 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=12 name=(null) inode=14347 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=13 name=(null) inode=14350 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=14 name=(null) inode=14347 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=15 name=(null) inode=14351 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=16 name=(null) inode=14347 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=17 name=(null) inode=14352 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=18 name=(null) inode=14344 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=19 name=(null) inode=14353 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=20 name=(null) inode=14353 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=21 name=(null) inode=14354 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=22 name=(null) inode=14353 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=23 name=(null) inode=14355 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=24 name=(null) inode=14353 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=25 name=(null) inode=14356 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=26 name=(null) inode=14353 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=27 name=(null) inode=14357 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=28 name=(null) inode=14353 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=29 name=(null) inode=14358 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=30 name=(null) inode=14344 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=31 name=(null) inode=14359 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=32 name=(null) inode=14359 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=33 name=(null) inode=14360 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=34 name=(null) inode=14359 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=35 name=(null) inode=14361 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=36 name=(null) inode=14359 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=37 name=(null) inode=14362 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=38 name=(null) inode=14359 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=39 name=(null) inode=14363 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=40 name=(null) inode=14359 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=41 name=(null) inode=14364 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=42 name=(null) inode=14344 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=43 name=(null) inode=14365 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=44 name=(null) inode=14365 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=45 name=(null) inode=14366 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=46 name=(null) inode=14365 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=47 name=(null) inode=14367 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=48 name=(null) inode=14365 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=49 name=(null) inode=14368 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=50 name=(null) inode=14365 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=51 name=(null) inode=14369 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=52 name=(null) inode=14365 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=53 name=(null) inode=14370 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH 
item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=55 name=(null) inode=14371 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=56 name=(null) inode=14371 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=57 name=(null) inode=14372 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=58 name=(null) inode=14371 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=59 name=(null) inode=14373 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=60 name=(null) inode=14371 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=61 name=(null) inode=14374 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=62 name=(null) inode=14374 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=63 name=(null) inode=14375 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=64 name=(null) inode=14374 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=65 name=(null) inode=14376 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=66 name=(null) inode=14374 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=67 name=(null) inode=14377 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=68 name=(null) inode=14374 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=69 name=(null) inode=14378 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=70 name=(null) inode=14374 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=71 name=(null) inode=14379 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=72 name=(null) inode=14371 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=73 name=(null) inode=14380 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=74 name=(null) inode=14380 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=75 name=(null) inode=14381 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=76 name=(null) inode=14380 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=77 name=(null) inode=14382 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=78 name=(null) inode=14380 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=79 name=(null) inode=14383 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=80 name=(null) inode=14380 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=81 name=(null) inode=14384 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=82 name=(null) inode=14380 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=83 name=(null) inode=14385 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=84 name=(null) inode=14371 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=85 name=(null) inode=14386 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=86 name=(null) inode=14386 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=87 name=(null) inode=14387 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=88 name=(null) inode=14386 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=89 name=(null) inode=14388 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=90 name=(null) inode=14386 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=91 name=(null) inode=14389 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=92 name=(null) inode=14386 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=93 name=(null) inode=14390 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=94 name=(null) inode=14386 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=95 name=(null) inode=14391 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=96 name=(null) inode=14371 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=97 name=(null) inode=14392 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=98 name=(null) inode=14392 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=99 name=(null) inode=14393 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=100 name=(null) inode=14392 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=101 name=(null) inode=14394 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=102 name=(null) inode=14392 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH 
item=103 name=(null) inode=14395 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=104 name=(null) inode=14392 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=105 name=(null) inode=14396 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=106 name=(null) inode=14392 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=107 name=(null) inode=14397 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PATH item=109 name=(null) inode=14398 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:07.077000 audit: PROCTITLE proctitle="(udev-worker)" Aug 13 00:51:07.116340 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Aug 13 00:51:07.134317 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Aug 13 00:51:07.144830 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Aug 13 00:51:07.198313 kernel: mousedev: PS/2 mouse device common for all mice Aug 13 00:51:07.345349 kernel: EDAC MC: Ver: 3.0.0 Aug 13 00:51:07.373074 systemd[1]: Finished systemd-udev-settle.service. Aug 13 00:51:07.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:07.375771 systemd[1]: Starting lvm2-activation-early.service... Aug 13 00:51:07.402646 lvm[1033]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 00:51:07.433313 systemd[1]: Finished lvm2-activation-early.service. Aug 13 00:51:07.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:07.434253 systemd[1]: Reached target cryptsetup.target. Aug 13 00:51:07.436683 systemd[1]: Starting lvm2-activation.service... Aug 13 00:51:07.444257 lvm[1034]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 00:51:07.473573 systemd[1]: Finished lvm2-activation.service. Aug 13 00:51:07.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:07.474521 systemd[1]: Reached target local-fs-pre.target. Aug 13 00:51:07.477140 systemd[1]: Mounting media-configdrive.mount... 
Aug 13 00:51:07.477639 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 13 00:51:07.477699 systemd[1]: Reached target machines.target. Aug 13 00:51:07.479923 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Aug 13 00:51:07.497308 kernel: ISO 9660 Extensions: RRIP_1991A Aug 13 00:51:07.499321 systemd[1]: Mounted media-configdrive.mount. Aug 13 00:51:07.499893 systemd[1]: Reached target local-fs.target. Aug 13 00:51:07.502314 systemd[1]: Starting ldconfig.service... Aug 13 00:51:07.503789 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:51:07.503900 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:51:07.508528 systemd[1]: Starting systemd-boot-update.service... Aug 13 00:51:07.513002 systemd[1]: Starting systemd-machine-id-commit.service... Aug 13 00:51:07.521389 systemd[1]: Starting systemd-sysext.service... Aug 13 00:51:07.526000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:07.525351 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Aug 13 00:51:07.526720 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1040 (bootctl) Aug 13 00:51:07.530084 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Aug 13 00:51:07.569532 systemd[1]: Unmounting usr-share-oem.mount... Aug 13 00:51:07.578532 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 13 00:51:07.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:07.579503 systemd[1]: Finished systemd-machine-id-commit.service. Aug 13 00:51:07.583459 systemd[1]: usr-share-oem.mount: Deactivated successfully. Aug 13 00:51:07.583719 systemd[1]: Unmounted usr-share-oem.mount. Aug 13 00:51:07.606390 kernel: loop0: detected capacity change from 0 to 229808 Aug 13 00:51:07.644525 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 00:51:07.668313 kernel: loop1: detected capacity change from 0 to 229808 Aug 13 00:51:07.691454 (sd-sysext)[1050]: Using extensions 'kubernetes'. Aug 13 00:51:07.695400 (sd-sysext)[1050]: Merged extensions into '/usr'. Aug 13 00:51:07.698597 systemd-fsck[1047]: fsck.fat 4.2 (2021-01-31) Aug 13 00:51:07.698597 systemd-fsck[1047]: /dev/vda1: 789 files, 119324/258078 clusters Aug 13 00:51:07.704084 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Aug 13 00:51:07.707177 systemd[1]: Mounting boot.mount... Aug 13 00:51:07.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:07.738455 systemd[1]: Mounted boot.mount. Aug 13 00:51:07.751378 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Aug 13 00:51:07.754152 systemd[1]: Mounting usr-share-oem.mount... Aug 13 00:51:07.757043 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:51:07.760006 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:51:07.764862 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:51:07.770076 systemd[1]: Starting modprobe@loop.service... Aug 13 00:51:07.770929 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:51:07.771214 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:51:07.771519 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:51:07.775942 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:51:07.776204 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 00:51:07.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:07.776000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:07.778215 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:51:07.778487 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 00:51:07.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:07.778000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:07.779749 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:51:07.784385 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:51:07.784634 systemd[1]: Finished modprobe@loop.service. Aug 13 00:51:07.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:07.785000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:07.785724 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 00:51:07.796054 systemd[1]: Mounted usr-share-oem.mount. Aug 13 00:51:07.799804 systemd[1]: Finished systemd-boot-update.service. Aug 13 00:51:07.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:07.801530 systemd[1]: Finished systemd-sysext.service. 
Aug 13 00:51:07.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:07.808131 systemd[1]: Starting ensure-sysext.service... Aug 13 00:51:07.810826 systemd[1]: Starting systemd-tmpfiles-setup.service... Aug 13 00:51:07.827329 systemd[1]: Reloading. Aug 13 00:51:07.861007 systemd-tmpfiles[1058]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Aug 13 00:51:07.876505 systemd-tmpfiles[1058]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 00:51:07.893768 systemd-tmpfiles[1058]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 13 00:51:08.028191 ldconfig[1039]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 00:51:08.044112 /usr/lib/systemd/system-generators/torcx-generator[1077]: time="2025-08-13T00:51:08Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 00:51:08.044146 /usr/lib/systemd/system-generators/torcx-generator[1077]: time="2025-08-13T00:51:08Z" level=info msg="torcx already run" Aug 13 00:51:08.215095 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 00:51:08.215411 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 00:51:08.249400 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:51:08.291603 systemd-networkd[1002]: eth0: Gained IPv6LL Aug 13 00:51:08.355522 systemd-networkd[1002]: eth1: Gained IPv6LL Aug 13 00:51:08.357000 audit: BPF prog-id=30 op=LOAD Aug 13 00:51:08.358000 audit: BPF prog-id=26 op=UNLOAD Aug 13 00:51:08.360000 audit: BPF prog-id=31 op=LOAD Aug 13 00:51:08.360000 audit: BPF prog-id=27 op=UNLOAD Aug 13 00:51:08.360000 audit: BPF prog-id=32 op=LOAD Aug 13 00:51:08.361000 audit: BPF prog-id=33 op=LOAD Aug 13 00:51:08.361000 audit: BPF prog-id=28 op=UNLOAD Aug 13 00:51:08.361000 audit: BPF prog-id=29 op=UNLOAD Aug 13 00:51:08.364000 audit: BPF prog-id=34 op=LOAD Aug 13 00:51:08.364000 audit: BPF prog-id=21 op=UNLOAD Aug 13 00:51:08.364000 audit: BPF prog-id=35 op=LOAD Aug 13 00:51:08.364000 audit: BPF prog-id=36 op=LOAD Aug 13 00:51:08.365000 audit: BPF prog-id=22 op=UNLOAD Aug 13 00:51:08.365000 audit: BPF prog-id=23 op=UNLOAD Aug 13 00:51:08.366000 audit: BPF prog-id=37 op=LOAD Aug 13 00:51:08.367000 audit: BPF prog-id=38 op=LOAD Aug 13 00:51:08.367000 audit: BPF prog-id=24 op=UNLOAD Aug 13 00:51:08.367000 audit: BPF prog-id=25 op=UNLOAD Aug 13 00:51:08.373158 systemd[1]: Finished ldconfig.service. Aug 13 00:51:08.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:51:08.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:08.376140 systemd[1]: Finished systemd-tmpfiles-setup.service. Aug 13 00:51:08.382636 systemd[1]: Starting audit-rules.service... Aug 13 00:51:08.385722 systemd[1]: Starting clean-ca-certificates.service... Aug 13 00:51:08.391309 systemd[1]: Starting systemd-journal-catalog-update.service... Aug 13 00:51:08.394000 audit: BPF prog-id=39 op=LOAD Aug 13 00:51:08.400000 audit: BPF prog-id=40 op=LOAD Aug 13 00:51:08.397422 systemd[1]: Starting systemd-resolved.service... Aug 13 00:51:08.402674 systemd[1]: Starting systemd-timesyncd.service... Aug 13 00:51:08.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:08.405896 systemd[1]: Starting systemd-update-utmp.service... Aug 13 00:51:08.408391 systemd[1]: Finished clean-ca-certificates.service. Aug 13 00:51:08.414698 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 00:51:08.423049 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:51:08.425120 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:51:08.429882 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:51:08.432607 systemd[1]: Starting modprobe@loop.service... Aug 13 00:51:08.433500 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:51:08.433727 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:51:08.434014 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 00:51:08.438459 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:51:08.438675 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:51:08.438813 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:51:08.438953 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 00:51:08.440729 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:51:08.440970 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 00:51:08.441000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:08.441000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:51:08.442245 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:51:08.443571 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 00:51:08.443000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:08.443000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:08.444703 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:51:08.448792 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:51:08.452946 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:51:08.456885 systemd[1]: Starting modprobe@drm.service... Aug 13 00:51:08.461515 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:51:08.462252 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:51:08.462567 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:51:08.466607 systemd[1]: Starting systemd-networkd-wait-online.service... Aug 13 00:51:08.472000 audit[1131]: SYSTEM_BOOT pid=1131 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Aug 13 00:51:08.477622 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 00:51:08.479567 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:51:08.479813 systemd[1]: Finished modprobe@loop.service. Aug 13 00:51:08.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:08.482000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:08.483465 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:51:08.483700 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 00:51:08.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:08.484000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:08.488040 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:51:08.490143 systemd[1]: Finished ensure-sysext.service. 
Aug 13 00:51:08.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:08.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:08.492577 systemd[1]: Finished systemd-update-utmp.service. Aug 13 00:51:08.499485 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:51:08.499510 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:51:08.506113 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:51:08.506400 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 00:51:08.507151 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 00:51:08.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:08.506000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:08.508829 systemd[1]: Finished systemd-networkd-wait-online.service. Aug 13 00:51:08.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:08.514123 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:51:08.514445 systemd[1]: Finished modprobe@drm.service. Aug 13 00:51:08.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:08.514000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:08.530155 systemd[1]: Finished systemd-journal-catalog-update.service. Aug 13 00:51:08.530000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:08.533023 systemd[1]: Starting systemd-update-done.service... Aug 13 00:51:08.554032 systemd[1]: Finished systemd-update-done.service. Aug 13 00:51:08.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:51:08.568000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Aug 13 00:51:08.568000 audit[1154]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff92a067f0 a2=420 a3=0 items=0 ppid=1125 pid=1154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:51:08.568000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Aug 13 00:51:08.568862 augenrules[1154]: No rules Aug 13 00:51:08.569896 systemd[1]: Finished audit-rules.service. Aug 13 00:51:08.604753 systemd[1]: Started systemd-timesyncd.service. Aug 13 00:51:08.605553 systemd[1]: Reached target time-set.target. Aug 13 00:51:08.607827 systemd-resolved[1128]: Positive Trust Anchors: Aug 13 00:51:08.607878 systemd-resolved[1128]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 00:51:08.607930 systemd-resolved[1128]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Aug 13 00:51:08.618003 systemd-resolved[1128]: Using system hostname 'ci-3510.3.8-a-e4f4484119'. Aug 13 00:51:08.621446 systemd[1]: Started systemd-resolved.service. Aug 13 00:51:08.622138 systemd[1]: Reached target network.target. Aug 13 00:51:08.622730 systemd[1]: Reached target network-online.target. Aug 13 00:51:08.623235 systemd[1]: Reached target nss-lookup.target. Aug 13 00:51:08.623801 systemd[1]: Reached target sysinit.target. Aug 13 00:51:08.624549 systemd[1]: Started motdgen.path. Aug 13 00:51:08.625064 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Aug 13 00:51:08.625944 systemd[1]: Started logrotate.timer. Aug 13 00:51:08.626621 systemd[1]: Started mdadm.timer. Aug 13 00:51:08.627063 systemd[1]: Started systemd-tmpfiles-clean.timer. Aug 13 00:51:08.627643 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 13 00:51:08.627690 systemd[1]: Reached target paths.target. Aug 13 00:51:08.628141 systemd[1]: Reached target timers.target. Aug 13 00:51:08.629156 systemd[1]: Listening on dbus.socket. Aug 13 00:51:08.632049 systemd[1]: Starting docker.socket... Aug 13 00:51:08.635711 systemd-timesyncd[1130]: Contacted time server 23.186.168.123:123 (0.flatcar.pool.ntp.org). Aug 13 00:51:08.635793 systemd-timesyncd[1130]: Initial clock synchronization to Wed 2025-08-13 00:51:08.416634 UTC. Aug 13 00:51:08.639806 systemd[1]: Listening on sshd.socket. Aug 13 00:51:08.640675 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:51:08.641649 systemd[1]: Listening on docker.socket. Aug 13 00:51:08.642481 systemd[1]: Reached target sockets.target. Aug 13 00:51:08.642949 systemd[1]: Reached target basic.target. 
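The auditctl record above carries its command line in the hex-encoded PROCTITLE field, with NUL bytes separating the argv entries. A minimal Python sketch (not part of the log) that decodes it, using the hex string copied verbatim from the record:

# Decode the hex-encoded PROCTITLE field from the audit record above.
proctitle_hex = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
argv = bytes.fromhex(proctitle_hex).split(b"\x00")
print([a.decode() for a in argv])  # ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']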
Aug 13 00:51:08.643492 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Aug 13 00:51:08.643538 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Aug 13 00:51:08.645623 systemd[1]: Starting containerd.service... Aug 13 00:51:08.649018 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Aug 13 00:51:08.651388 systemd[1]: Starting dbus.service... Aug 13 00:51:08.653813 systemd[1]: Starting enable-oem-cloudinit.service... Aug 13 00:51:08.657037 systemd[1]: Starting extend-filesystems.service... Aug 13 00:51:08.657557 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Aug 13 00:51:08.662794 systemd[1]: Starting kubelet.service... Aug 13 00:51:08.668151 systemd[1]: Starting motdgen.service... Aug 13 00:51:08.671688 systemd[1]: Starting prepare-helm.service... Aug 13 00:51:08.675751 systemd[1]: Starting ssh-key-proc-cmdline.service... Aug 13 00:51:08.681017 systemd[1]: Starting sshd-keygen.service... Aug 13 00:51:08.687415 systemd[1]: Starting systemd-logind.service... Aug 13 00:51:08.688276 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:51:08.688433 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 13 00:51:08.689093 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 13 00:51:08.691644 systemd[1]: Starting update-engine.service... Aug 13 00:51:08.705126 jq[1166]: false Aug 13 00:51:08.712088 systemd[1]: Starting update-ssh-keys-after-ignition.service... Aug 13 00:51:08.730455 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 00:51:08.730812 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Aug 13 00:51:08.734730 jq[1182]: true Aug 13 00:51:08.739750 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 13 00:51:08.740015 systemd[1]: Finished ssh-key-proc-cmdline.service. Aug 13 00:51:08.770692 jq[1193]: true Aug 13 00:51:08.780525 tar[1189]: linux-amd64/LICENSE Aug 13 00:51:08.785186 tar[1189]: linux-amd64/helm Aug 13 00:51:08.830681 extend-filesystems[1168]: Found loop1 Aug 13 00:51:08.833925 dbus-daemon[1164]: [system] SELinux support is enabled Aug 13 00:51:08.842721 systemd[1]: Started dbus.service. Aug 13 00:51:08.845503 extend-filesystems[1168]: Found vda Aug 13 00:51:08.846071 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 13 00:51:08.846117 systemd[1]: Reached target system-config.target. Aug 13 00:51:08.846678 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 00:51:08.846705 systemd[1]: Reached target user-config.target. 
Aug 13 00:51:08.847085 extend-filesystems[1168]: Found vda1 Aug 13 00:51:08.850542 extend-filesystems[1168]: Found vda2 Aug 13 00:51:08.851398 extend-filesystems[1168]: Found vda3 Aug 13 00:51:08.853470 extend-filesystems[1168]: Found usr Aug 13 00:51:08.855713 extend-filesystems[1168]: Found vda4 Aug 13 00:51:08.855713 extend-filesystems[1168]: Found vda6 Aug 13 00:51:08.855713 extend-filesystems[1168]: Found vda7 Aug 13 00:51:08.855713 extend-filesystems[1168]: Found vda9 Aug 13 00:51:08.855713 extend-filesystems[1168]: Checking size of /dev/vda9 Aug 13 00:51:08.868635 systemd[1]: motdgen.service: Deactivated successfully. Aug 13 00:51:08.873411 bash[1210]: Updated "/home/core/.ssh/authorized_keys" Aug 13 00:51:08.868909 systemd[1]: Finished motdgen.service. Aug 13 00:51:08.873690 systemd[1]: Finished update-ssh-keys-after-ignition.service. Aug 13 00:51:08.927513 update_engine[1176]: I0813 00:51:08.926906 1176 main.cc:92] Flatcar Update Engine starting Aug 13 00:51:08.933062 systemd[1]: Started update-engine.service. Aug 13 00:51:08.933497 update_engine[1176]: I0813 00:51:08.933454 1176 update_check_scheduler.cc:74] Next update check in 6m48s Aug 13 00:51:08.936351 systemd[1]: Started locksmithd.service. Aug 13 00:51:08.942477 extend-filesystems[1168]: Resized partition /dev/vda9 Aug 13 00:51:08.967638 extend-filesystems[1221]: resize2fs 1.46.5 (30-Dec-2021) Aug 13 00:51:08.979970 systemd-logind[1175]: Watching system buttons on /dev/input/event1 (Power Button) Aug 13 00:51:08.980002 systemd-logind[1175]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 13 00:51:08.980247 systemd-logind[1175]: New seat seat0. Aug 13 00:51:08.983661 systemd[1]: Started systemd-logind.service. Aug 13 00:51:08.984306 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Aug 13 00:51:09.013465 coreos-metadata[1163]: Aug 13 00:51:09.013 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Aug 13 00:51:09.036338 coreos-metadata[1163]: Aug 13 00:51:09.034 INFO Fetch successful Aug 13 00:51:09.064604 env[1192]: time="2025-08-13T00:51:09.064499552Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Aug 13 00:51:09.065311 unknown[1163]: wrote ssh authorized keys file for user: core Aug 13 00:51:09.099415 update-ssh-keys[1224]: Updated "/home/core/.ssh/authorized_keys" Aug 13 00:51:09.100105 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Aug 13 00:51:09.126858 systemd[1]: Created slice system-sshd.slice. Aug 13 00:51:09.139044 env[1192]: time="2025-08-13T00:51:09.138970353Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Aug 13 00:51:09.139236 env[1192]: time="2025-08-13T00:51:09.139172127Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:51:09.144396 env[1192]: time="2025-08-13T00:51:09.144303828Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.189-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:51:09.144396 env[1192]: time="2025-08-13T00:51:09.144383354Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Aug 13 00:51:09.144855 env[1192]: time="2025-08-13T00:51:09.144809629Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:51:09.144931 env[1192]: time="2025-08-13T00:51:09.144854064Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Aug 13 00:51:09.144931 env[1192]: time="2025-08-13T00:51:09.144875320Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Aug 13 00:51:09.144931 env[1192]: time="2025-08-13T00:51:09.144891244Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Aug 13 00:51:09.145198 env[1192]: time="2025-08-13T00:51:09.145026435Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:51:09.145472 env[1192]: time="2025-08-13T00:51:09.145441453Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:51:09.147702 env[1192]: time="2025-08-13T00:51:09.147642710Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:51:09.147702 env[1192]: time="2025-08-13T00:51:09.147689741Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Aug 13 00:51:09.147868 env[1192]: time="2025-08-13T00:51:09.147837313Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Aug 13 00:51:09.147928 env[1192]: time="2025-08-13T00:51:09.147865368Z" level=info msg="metadata content store policy set" policy=shared Aug 13 00:51:09.155306 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Aug 13 00:51:09.182176 extend-filesystems[1221]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Aug 13 00:51:09.182176 extend-filesystems[1221]: old_desc_blocks = 1, new_desc_blocks = 8 Aug 13 00:51:09.182176 extend-filesystems[1221]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Aug 13 00:51:09.185314 extend-filesystems[1168]: Resized filesystem in /dev/vda9 Aug 13 00:51:09.185314 extend-filesystems[1168]: Found vdb Aug 13 00:51:09.186424 env[1192]: time="2025-08-13T00:51:09.184508179Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Aug 13 00:51:09.186424 env[1192]: time="2025-08-13T00:51:09.184560295Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Aug 13 00:51:09.186424 env[1192]: time="2025-08-13T00:51:09.184575559Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Aug 13 00:51:09.186424 env[1192]: time="2025-08-13T00:51:09.184645109Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Aug 13 00:51:09.186424 env[1192]: time="2025-08-13T00:51:09.184666337Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Aug 13 00:51:09.186424 env[1192]: time="2025-08-13T00:51:09.184681585Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Aug 13 00:51:09.186424 env[1192]: time="2025-08-13T00:51:09.184754568Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Aug 13 00:51:09.186424 env[1192]: time="2025-08-13T00:51:09.184774861Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Aug 13 00:51:09.186424 env[1192]: time="2025-08-13T00:51:09.184792272Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Aug 13 00:51:09.186424 env[1192]: time="2025-08-13T00:51:09.184812546Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Aug 13 00:51:09.186424 env[1192]: time="2025-08-13T00:51:09.184830969Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 13 00:51:09.186424 env[1192]: time="2025-08-13T00:51:09.184854281Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 13 00:51:09.186424 env[1192]: time="2025-08-13T00:51:09.185044320Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Aug 13 00:51:09.186424 env[1192]: time="2025-08-13T00:51:09.185130492Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Aug 13 00:51:09.183539 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 13 00:51:09.187084 env[1192]: time="2025-08-13T00:51:09.185482419Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 13 00:51:09.187084 env[1192]: time="2025-08-13T00:51:09.185528316Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 13 00:51:09.187084 env[1192]: time="2025-08-13T00:51:09.185544667Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 13 00:51:09.187084 env[1192]: time="2025-08-13T00:51:09.185598480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Aug 13 00:51:09.187084 env[1192]: time="2025-08-13T00:51:09.185611599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 13 00:51:09.187084 env[1192]: time="2025-08-13T00:51:09.185623584Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 13 00:51:09.187084 env[1192]: time="2025-08-13T00:51:09.185635894Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 13 00:51:09.187084 env[1192]: time="2025-08-13T00:51:09.185701405Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 13 00:51:09.187084 env[1192]: time="2025-08-13T00:51:09.185714434Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 13 00:51:09.187084 env[1192]: time="2025-08-13T00:51:09.185727004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Aug 13 00:51:09.187084 env[1192]: time="2025-08-13T00:51:09.185739240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 13 00:51:09.187084 env[1192]: time="2025-08-13T00:51:09.185752770Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Aug 13 00:51:09.187084 env[1192]: time="2025-08-13T00:51:09.185899262Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Aug 13 00:51:09.187084 env[1192]: time="2025-08-13T00:51:09.185915189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 13 00:51:09.187084 env[1192]: time="2025-08-13T00:51:09.185926980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 13 00:51:09.183734 systemd[1]: Finished extend-filesystems.service. Aug 13 00:51:09.187873 env[1192]: time="2025-08-13T00:51:09.185938623Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 13 00:51:09.187873 env[1192]: time="2025-08-13T00:51:09.185954955Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Aug 13 00:51:09.187873 env[1192]: time="2025-08-13T00:51:09.185967454Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 13 00:51:09.187873 env[1192]: time="2025-08-13T00:51:09.185984882Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Aug 13 00:51:09.187873 env[1192]: time="2025-08-13T00:51:09.186021437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Aug 13 00:51:09.187985 systemd[1]: Started containerd.service. 
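The extend-filesystems entries above show resize2fs growing /dev/vda9 online from 553472 to 15121403 blocks of 4 KiB. A small Python sketch of the underlying arithmetic (illustrative only; the block counts and block size are taken from the log):

# Size of /dev/vda9 before and after the online resize reported above.
old_blocks, new_blocks, block_size = 553472, 15121403, 4096
print(f"before: {old_blocks * block_size / 2**30:.2f} GiB")  # ~2.11 GiB
print(f"after:  {new_blocks * block_size / 2**30:.2f} GiB")  # ~57.68 GiB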
Aug 13 00:51:09.188143 env[1192]: time="2025-08-13T00:51:09.186207601Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 13 00:51:09.188143 env[1192]: time="2025-08-13T00:51:09.186268903Z" level=info msg="Connect containerd service" Aug 13 00:51:09.188143 env[1192]: time="2025-08-13T00:51:09.186325808Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 13 00:51:09.188143 env[1192]: time="2025-08-13T00:51:09.187150771Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 00:51:09.188143 env[1192]: time="2025-08-13T00:51:09.187684227Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 00:51:09.188143 env[1192]: time="2025-08-13T00:51:09.187756277Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Aug 13 00:51:09.192196 env[1192]: time="2025-08-13T00:51:09.192134465Z" level=info msg="Start subscribing containerd event" Aug 13 00:51:09.192355 env[1192]: time="2025-08-13T00:51:09.192208288Z" level=info msg="Start recovering state" Aug 13 00:51:09.192412 env[1192]: time="2025-08-13T00:51:09.192349233Z" level=info msg="Start event monitor" Aug 13 00:51:09.192453 env[1192]: time="2025-08-13T00:51:09.192413991Z" level=info msg="Start snapshots syncer" Aug 13 00:51:09.192453 env[1192]: time="2025-08-13T00:51:09.192436321Z" level=info msg="Start cni network conf syncer for default" Aug 13 00:51:09.192453 env[1192]: time="2025-08-13T00:51:09.192450046Z" level=info msg="Start streaming server" Aug 13 00:51:09.218565 env[1192]: time="2025-08-13T00:51:09.218510088Z" level=info msg="containerd successfully booted in 0.156855s" Aug 13 00:51:09.940937 locksmithd[1219]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 00:51:10.343348 tar[1189]: linux-amd64/README.md Aug 13 00:51:10.350204 systemd[1]: Finished prepare-helm.service. Aug 13 00:51:10.654991 sshd_keygen[1195]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 00:51:10.709031 systemd[1]: Finished sshd-keygen.service. Aug 13 00:51:10.712537 systemd[1]: Starting issuegen.service... Aug 13 00:51:10.715021 systemd[1]: Started sshd@0-143.198.60.143:22-139.178.68.195:44180.service. Aug 13 00:51:10.730223 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 00:51:10.730553 systemd[1]: Finished issuegen.service. Aug 13 00:51:10.733521 systemd[1]: Starting systemd-user-sessions.service... Aug 13 00:51:10.754866 systemd[1]: Finished systemd-user-sessions.service. Aug 13 00:51:10.758028 systemd[1]: Started getty@tty1.service. Aug 13 00:51:10.760863 systemd[1]: Started serial-getty@ttyS0.service. Aug 13 00:51:10.762841 systemd[1]: Reached target getty.target. Aug 13 00:51:10.766171 systemd[1]: Started kubelet.service. Aug 13 00:51:10.771239 systemd[1]: Reached target multi-user.target. Aug 13 00:51:10.775834 systemd[1]: Starting systemd-update-utmp-runlevel.service... Aug 13 00:51:10.791700 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Aug 13 00:51:10.791939 systemd[1]: Finished systemd-update-utmp-runlevel.service. Aug 13 00:51:10.792536 systemd[1]: Startup finished in 1.018s (kernel) + 5.762s (initrd) + 9.241s (userspace) = 16.023s. Aug 13 00:51:10.845176 sshd[1244]: Accepted publickey for core from 139.178.68.195 port 44180 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78 Aug 13 00:51:10.848774 sshd[1244]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:51:10.869220 systemd[1]: Created slice user-500.slice. Aug 13 00:51:10.873523 systemd[1]: Starting user-runtime-dir@500.service... Aug 13 00:51:10.881320 systemd-logind[1175]: New session 1 of user core. Aug 13 00:51:10.894541 systemd[1]: Finished user-runtime-dir@500.service. Aug 13 00:51:10.898041 systemd[1]: Starting user@500.service... Aug 13 00:51:10.905400 (systemd)[1257]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:51:11.010471 systemd[1257]: Queued start job for default target default.target. Aug 13 00:51:11.011521 systemd[1257]: Reached target paths.target. Aug 13 00:51:11.011729 systemd[1257]: Reached target sockets.target. Aug 13 00:51:11.011817 systemd[1257]: Reached target timers.target. Aug 13 00:51:11.012007 systemd[1257]: Reached target basic.target. 
Aug 13 00:51:11.012227 systemd[1257]: Reached target default.target. Aug 13 00:51:11.012404 systemd[1257]: Startup finished in 95ms. Aug 13 00:51:11.013329 systemd[1]: Started user@500.service. Aug 13 00:51:11.015197 systemd[1]: Started session-1.scope. Aug 13 00:51:11.086517 systemd[1]: Started sshd@1-143.198.60.143:22-139.178.68.195:52358.service. Aug 13 00:51:11.178724 sshd[1271]: Accepted publickey for core from 139.178.68.195 port 52358 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78 Aug 13 00:51:11.182845 sshd[1271]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:51:11.189976 systemd[1]: Started session-2.scope. Aug 13 00:51:11.191458 systemd-logind[1175]: New session 2 of user core. Aug 13 00:51:11.260603 sshd[1271]: pam_unix(sshd:session): session closed for user core Aug 13 00:51:11.268847 systemd[1]: Started sshd@2-143.198.60.143:22-139.178.68.195:52364.service. Aug 13 00:51:11.269680 systemd[1]: sshd@1-143.198.60.143:22-139.178.68.195:52358.service: Deactivated successfully. Aug 13 00:51:11.270625 systemd[1]: session-2.scope: Deactivated successfully. Aug 13 00:51:11.276636 systemd-logind[1175]: Session 2 logged out. Waiting for processes to exit. Aug 13 00:51:11.280526 systemd-logind[1175]: Removed session 2. Aug 13 00:51:11.321344 sshd[1276]: Accepted publickey for core from 139.178.68.195 port 52364 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78 Aug 13 00:51:11.324581 sshd[1276]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:51:11.332893 systemd[1]: Started session-3.scope. Aug 13 00:51:11.333884 systemd-logind[1175]: New session 3 of user core. Aug 13 00:51:11.413957 sshd[1276]: pam_unix(sshd:session): session closed for user core Aug 13 00:51:11.420950 systemd[1]: Started sshd@3-143.198.60.143:22-139.178.68.195:52370.service. Aug 13 00:51:11.422674 systemd[1]: sshd@2-143.198.60.143:22-139.178.68.195:52364.service: Deactivated successfully. Aug 13 00:51:11.424167 systemd[1]: session-3.scope: Deactivated successfully. Aug 13 00:51:11.426933 systemd-logind[1175]: Session 3 logged out. Waiting for processes to exit. Aug 13 00:51:11.428960 systemd-logind[1175]: Removed session 3. Aug 13 00:51:11.485329 sshd[1282]: Accepted publickey for core from 139.178.68.195 port 52370 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78 Aug 13 00:51:11.488408 sshd[1282]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:51:11.497435 systemd[1]: Started session-4.scope. Aug 13 00:51:11.498734 systemd-logind[1175]: New session 4 of user core. Aug 13 00:51:11.574045 sshd[1282]: pam_unix(sshd:session): session closed for user core Aug 13 00:51:11.582750 systemd[1]: Started sshd@4-143.198.60.143:22-139.178.68.195:52378.service. Aug 13 00:51:11.586369 systemd[1]: sshd@3-143.198.60.143:22-139.178.68.195:52370.service: Deactivated successfully. Aug 13 00:51:11.587698 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 00:51:11.590119 systemd-logind[1175]: Session 4 logged out. Waiting for processes to exit. Aug 13 00:51:11.591901 systemd-logind[1175]: Removed session 4. Aug 13 00:51:11.640979 sshd[1288]: Accepted publickey for core from 139.178.68.195 port 52378 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78 Aug 13 00:51:11.643834 sshd[1288]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:51:11.652686 systemd[1]: Started session-5.scope. 
Aug 13 00:51:11.653338 systemd-logind[1175]: New session 5 of user core. Aug 13 00:51:11.735739 kubelet[1254]: E0813 00:51:11.735580 1254 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:51:11.739260 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:51:11.739696 sudo[1292]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 00:51:11.739503 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:51:11.740008 sudo[1292]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 13 00:51:11.739879 systemd[1]: kubelet.service: Consumed 1.526s CPU time. Aug 13 00:51:11.781951 systemd[1]: Starting docker.service... Aug 13 00:51:11.871558 env[1302]: time="2025-08-13T00:51:11.871479327Z" level=info msg="Starting up" Aug 13 00:51:11.874985 env[1302]: time="2025-08-13T00:51:11.874258951Z" level=info msg="parsed scheme: \"unix\"" module=grpc Aug 13 00:51:11.874985 env[1302]: time="2025-08-13T00:51:11.874779224Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Aug 13 00:51:11.874985 env[1302]: time="2025-08-13T00:51:11.874817017Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Aug 13 00:51:11.874985 env[1302]: time="2025-08-13T00:51:11.874838434Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Aug 13 00:51:11.877883 env[1302]: time="2025-08-13T00:51:11.877813629Z" level=info msg="parsed scheme: \"unix\"" module=grpc Aug 13 00:51:11.877883 env[1302]: time="2025-08-13T00:51:11.877851714Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Aug 13 00:51:11.877883 env[1302]: time="2025-08-13T00:51:11.877878344Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Aug 13 00:51:11.877883 env[1302]: time="2025-08-13T00:51:11.877894298Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Aug 13 00:51:11.890519 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport4118973588-merged.mount: Deactivated successfully. Aug 13 00:51:11.940088 env[1302]: time="2025-08-13T00:51:11.939313814Z" level=info msg="Loading containers: start." Aug 13 00:51:12.132352 kernel: Initializing XFRM netlink socket Aug 13 00:51:12.180934 env[1302]: time="2025-08-13T00:51:12.180874233Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Aug 13 00:51:12.280687 systemd-networkd[1002]: docker0: Link UP Aug 13 00:51:12.301643 env[1302]: time="2025-08-13T00:51:12.301492415Z" level=info msg="Loading containers: done." Aug 13 00:51:12.327455 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3104143904-merged.mount: Deactivated successfully. 
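The kubelet exit above is caused by the missing /var/lib/kubelet/config.yaml. A minimal, hypothetical Python check mirroring that condition (only the path is taken from the error message; nothing else is assumed):

# Check for the kubelet config file whose absence triggers the exit-code failure above.
from pathlib import Path

cfg = Path("/var/lib/kubelet/config.yaml")
print(cfg.exists())  # expected False at this point in the boot, matching the error above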
Aug 13 00:51:12.331541 env[1302]: time="2025-08-13T00:51:12.331471042Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 13 00:51:12.332183 env[1302]: time="2025-08-13T00:51:12.332135946Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Aug 13 00:51:12.332672 env[1302]: time="2025-08-13T00:51:12.332636071Z" level=info msg="Daemon has completed initialization" Aug 13 00:51:12.353998 systemd[1]: Started docker.service. Aug 13 00:51:12.367668 env[1302]: time="2025-08-13T00:51:12.367569193Z" level=info msg="API listen on /run/docker.sock" Aug 13 00:51:12.402744 systemd[1]: Starting coreos-metadata.service... Aug 13 00:51:12.453760 coreos-metadata[1419]: Aug 13 00:51:12.453 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Aug 13 00:51:12.466477 coreos-metadata[1419]: Aug 13 00:51:12.466 INFO Fetch successful Aug 13 00:51:12.481823 systemd[1]: Finished coreos-metadata.service. Aug 13 00:51:13.502431 env[1192]: time="2025-08-13T00:51:13.502343026Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.3\"" Aug 13 00:51:14.093993 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount19189830.mount: Deactivated successfully. Aug 13 00:51:15.834423 env[1192]: time="2025-08-13T00:51:15.834327017Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.33.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:51:15.836361 env[1192]: time="2025-08-13T00:51:15.836298374Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a92b4b92a991677d355596cc4aa9b0b12cbc38e8cbdc1e476548518ae045bc4a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:51:15.838980 env[1192]: time="2025-08-13T00:51:15.838908365Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.33.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:51:15.841916 env[1192]: time="2025-08-13T00:51:15.841838940Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:125a8b488def5ea24e2de5682ab1abf063163aae4d89ce21811a45f3ecf23816,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:51:15.843324 env[1192]: time="2025-08-13T00:51:15.843235664Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.3\" returns image reference \"sha256:a92b4b92a991677d355596cc4aa9b0b12cbc38e8cbdc1e476548518ae045bc4a\"" Aug 13 00:51:15.845396 env[1192]: time="2025-08-13T00:51:15.845345122Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.3\"" Aug 13 00:51:17.943028 env[1192]: time="2025-08-13T00:51:17.942956206Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.33.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:51:17.945006 env[1192]: time="2025-08-13T00:51:17.944946572Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:bf97fadcef43049604abcf0caf4f35229fbee25bd0cdb6fdc1d2bbb4f03d9660,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:51:17.947045 env[1192]: time="2025-08-13T00:51:17.946998740Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.33.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:51:17.949189 env[1192]: time="2025-08-13T00:51:17.949138579Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:96091626e37c5d5920ee6c3203b783cc01a08f287ec0713aeb7809bb62ccea90,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:51:17.950545 env[1192]: time="2025-08-13T00:51:17.950493566Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.3\" returns image reference \"sha256:bf97fadcef43049604abcf0caf4f35229fbee25bd0cdb6fdc1d2bbb4f03d9660\"" Aug 13 00:51:17.951427 env[1192]: time="2025-08-13T00:51:17.951382986Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.3\"" Aug 13 00:51:19.665770 env[1192]: time="2025-08-13T00:51:19.665684348Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.33.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:51:19.668553 env[1192]: time="2025-08-13T00:51:19.668368567Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:41376797d5122e388663ab6d0ad583e58cff63e1a0f1eebfb31d615d8f1c1c87,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:51:19.672190 env[1192]: time="2025-08-13T00:51:19.672119603Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.33.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:51:19.674968 env[1192]: time="2025-08-13T00:51:19.674893539Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:f3a2ffdd7483168205236f7762e9a1933f17dd733bc0188b52bddab9c0762868,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:51:19.677088 env[1192]: time="2025-08-13T00:51:19.677010836Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.3\" returns image reference \"sha256:41376797d5122e388663ab6d0ad583e58cff63e1a0f1eebfb31d615d8f1c1c87\"" Aug 13 00:51:19.678506 env[1192]: time="2025-08-13T00:51:19.678453003Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.3\"" Aug 13 00:51:21.107488 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount710682406.mount: Deactivated successfully. Aug 13 00:51:21.990896 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 13 00:51:21.991174 systemd[1]: Stopped kubelet.service. Aug 13 00:51:21.991245 systemd[1]: kubelet.service: Consumed 1.526s CPU time. Aug 13 00:51:21.993386 systemd[1]: Starting kubelet.service... 
Aug 13 00:51:22.025517 env[1192]: time="2025-08-13T00:51:22.025440220Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.33.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:51:22.031072 env[1192]: time="2025-08-13T00:51:22.031007654Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:af855adae796077ff822e22c0102f686b2ca7b7c51948889b1825388eaac9234,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:51:22.033479 env[1192]: time="2025-08-13T00:51:22.033420522Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.33.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:51:22.036018 env[1192]: time="2025-08-13T00:51:22.035958778Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:c69929cfba9e38305eb1e20ca859aeb90e0d2a7326eab9bb1e8298882fe626cd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:51:22.037723 env[1192]: time="2025-08-13T00:51:22.036919933Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.3\" returns image reference \"sha256:af855adae796077ff822e22c0102f686b2ca7b7c51948889b1825388eaac9234\"" Aug 13 00:51:22.038921 env[1192]: time="2025-08-13T00:51:22.038828800Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Aug 13 00:51:22.156289 systemd[1]: Started kubelet.service. Aug 13 00:51:22.234179 kubelet[1441]: E0813 00:51:22.234108 1441 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:51:22.238980 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:51:22.239181 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:51:22.573195 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount886139524.mount: Deactivated successfully. 
Aug 13 00:51:23.984737 env[1192]: time="2025-08-13T00:51:23.984651646Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.12.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:51:23.989237 env[1192]: time="2025-08-13T00:51:23.989016138Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:51:23.992583 env[1192]: time="2025-08-13T00:51:23.992504445Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.12.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:51:23.998076 env[1192]: time="2025-08-13T00:51:23.997936789Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Aug 13 00:51:23.998538 env[1192]: time="2025-08-13T00:51:23.998484992Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:51:23.999096 env[1192]: time="2025-08-13T00:51:23.999041604Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 13 00:51:24.490013 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3628176942.mount: Deactivated successfully. Aug 13 00:51:24.496444 env[1192]: time="2025-08-13T00:51:24.496349380Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:51:24.499108 env[1192]: time="2025-08-13T00:51:24.499036137Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:51:24.501340 env[1192]: time="2025-08-13T00:51:24.501258285Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:51:24.503513 env[1192]: time="2025-08-13T00:51:24.503452884Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:51:24.504840 env[1192]: time="2025-08-13T00:51:24.504776588Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 13 00:51:24.505855 env[1192]: time="2025-08-13T00:51:24.505808074Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Aug 13 00:51:25.008990 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount327219477.mount: Deactivated successfully. 
Aug 13 00:51:27.679122 env[1192]: time="2025-08-13T00:51:27.679011585Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.21-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:51:27.681256 env[1192]: time="2025-08-13T00:51:27.681195636Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:51:27.684111 env[1192]: time="2025-08-13T00:51:27.684056404Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.21-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:51:27.687384 env[1192]: time="2025-08-13T00:51:27.687333449Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:51:27.688447 env[1192]: time="2025-08-13T00:51:27.688402258Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Aug 13 00:51:32.143760 systemd[1]: Stopped kubelet.service. Aug 13 00:51:32.146831 systemd[1]: Starting kubelet.service... Aug 13 00:51:32.193470 systemd[1]: Reloading. Aug 13 00:51:32.327032 /usr/lib/systemd/system-generators/torcx-generator[1492]: time="2025-08-13T00:51:32Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 00:51:32.338395 /usr/lib/systemd/system-generators/torcx-generator[1492]: time="2025-08-13T00:51:32Z" level=info msg="torcx already run" Aug 13 00:51:32.485840 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 00:51:32.485861 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 00:51:32.516871 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:51:32.646776 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 13 00:51:32.646879 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 13 00:51:32.647185 systemd[1]: Stopped kubelet.service. Aug 13 00:51:32.649792 systemd[1]: Starting kubelet.service... Aug 13 00:51:32.814257 systemd[1]: Started kubelet.service. Aug 13 00:51:32.903021 kubelet[1544]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:51:32.903021 kubelet[1544]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Aug 13 00:51:32.903021 kubelet[1544]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:51:32.903629 kubelet[1544]: I0813 00:51:32.903163 1544 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:51:34.696413 kubelet[1544]: I0813 00:51:34.696333 1544 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Aug 13 00:51:34.696944 kubelet[1544]: I0813 00:51:34.696920 1544 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:51:34.697350 kubelet[1544]: I0813 00:51:34.697331 1544 server.go:956] "Client rotation is on, will bootstrap in background" Aug 13 00:51:34.738019 kubelet[1544]: E0813 00:51:34.737963 1544 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://143.198.60.143:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 143.198.60.143:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Aug 13 00:51:34.745571 kubelet[1544]: I0813 00:51:34.745521 1544 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:51:34.760067 kubelet[1544]: E0813 00:51:34.760011 1544 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 00:51:34.760322 kubelet[1544]: I0813 00:51:34.760302 1544 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 00:51:34.765344 kubelet[1544]: I0813 00:51:34.765304 1544 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 00:51:34.765955 kubelet[1544]: I0813 00:51:34.765911 1544 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:51:34.766452 kubelet[1544]: I0813 00:51:34.766115 1544 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-a-e4f4484119","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 00:51:34.766662 kubelet[1544]: I0813 00:51:34.766645 1544 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:51:34.766761 kubelet[1544]: I0813 00:51:34.766748 1544 container_manager_linux.go:303] "Creating device plugin manager" Aug 13 00:51:34.766976 kubelet[1544]: I0813 00:51:34.766961 1544 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:51:34.770011 kubelet[1544]: I0813 00:51:34.769932 1544 kubelet.go:480] "Attempting to sync node with API server" Aug 13 00:51:34.770463 kubelet[1544]: I0813 00:51:34.770420 1544 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:51:34.770734 kubelet[1544]: I0813 00:51:34.770701 1544 kubelet.go:386] "Adding apiserver pod source" Aug 13 00:51:34.777163 kubelet[1544]: E0813 00:51:34.777113 1544 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://143.198.60.143:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-a-e4f4484119&limit=500&resourceVersion=0\": dial tcp 143.198.60.143:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Aug 13 00:51:34.781426 kubelet[1544]: I0813 00:51:34.781368 1544 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:51:34.798817 kubelet[1544]: E0813 00:51:34.798774 1544 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://143.198.60.143:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 143.198.60.143:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Aug 13 00:51:34.799180 kubelet[1544]: I0813 00:51:34.799152 1544 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Aug 13 00:51:34.799895 kubelet[1544]: I0813 00:51:34.799873 1544 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Aug 13 00:51:34.804777 kubelet[1544]: W0813 00:51:34.804731 1544 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 13 00:51:34.811678 kubelet[1544]: I0813 00:51:34.811631 1544 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 00:51:34.811991 kubelet[1544]: I0813 00:51:34.811975 1544 server.go:1289] "Started kubelet" Aug 13 00:51:34.816961 kubelet[1544]: I0813 00:51:34.816589 1544 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:51:34.817328 kubelet[1544]: I0813 00:51:34.817300 1544 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:51:34.817513 kubelet[1544]: I0813 00:51:34.817280 1544 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:51:34.819006 kubelet[1544]: I0813 00:51:34.818969 1544 server.go:317] "Adding debug handlers to kubelet server" Aug 13 00:51:34.824133 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Aug 13 00:51:34.824336 kubelet[1544]: I0813 00:51:34.821921 1544 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:51:34.825095 kubelet[1544]: I0813 00:51:34.822089 1544 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:51:34.826356 kubelet[1544]: I0813 00:51:34.826323 1544 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 00:51:34.827170 kubelet[1544]: E0813 00:51:34.827109 1544 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-a-e4f4484119\" not found" Aug 13 00:51:34.828517 kubelet[1544]: I0813 00:51:34.828488 1544 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 00:51:34.828805 kubelet[1544]: I0813 00:51:34.828790 1544 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:51:34.831634 kubelet[1544]: E0813 00:51:34.829869 1544 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://143.198.60.143:6443/api/v1/namespaces/default/events\": dial tcp 143.198.60.143:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.8-a-e4f4484119.185b2d4e76a13077 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.8-a-e4f4484119,UID:ci-3510.3.8-a-e4f4484119,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.8-a-e4f4484119,},FirstTimestamp:2025-08-13 00:51:34.811914359 +0000 UTC m=+1.986346064,LastTimestamp:2025-08-13 00:51:34.811914359 +0000 UTC m=+1.986346064,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.8-a-e4f4484119,}" Aug 13 00:51:34.832169 kubelet[1544]: I0813 00:51:34.832143 1544 
factory.go:223] Registration of the systemd container factory successfully Aug 13 00:51:34.833016 kubelet[1544]: I0813 00:51:34.832989 1544 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:51:34.833333 kubelet[1544]: E0813 00:51:34.832480 1544 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.60.143:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-a-e4f4484119?timeout=10s\": dial tcp 143.198.60.143:6443: connect: connection refused" interval="200ms" Aug 13 00:51:34.834037 kubelet[1544]: E0813 00:51:34.833992 1544 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://143.198.60.143:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 143.198.60.143:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Aug 13 00:51:34.834911 kubelet[1544]: I0813 00:51:34.834881 1544 factory.go:223] Registration of the containerd container factory successfully Aug 13 00:51:34.838474 kubelet[1544]: E0813 00:51:34.838408 1544 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:51:34.855756 kubelet[1544]: I0813 00:51:34.855719 1544 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 00:51:34.855756 kubelet[1544]: I0813 00:51:34.855740 1544 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 00:51:34.855756 kubelet[1544]: I0813 00:51:34.855765 1544 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:51:34.857977 kubelet[1544]: I0813 00:51:34.857915 1544 policy_none.go:49] "None policy: Start" Aug 13 00:51:34.857977 kubelet[1544]: I0813 00:51:34.857951 1544 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 00:51:34.857977 kubelet[1544]: I0813 00:51:34.857969 1544 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:51:34.864225 systemd[1]: Created slice kubepods.slice. Aug 13 00:51:34.876333 systemd[1]: Created slice kubepods-burstable.slice. Aug 13 00:51:34.880740 systemd[1]: Created slice kubepods-besteffort.slice. Aug 13 00:51:34.887902 kubelet[1544]: E0813 00:51:34.887827 1544 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Aug 13 00:51:34.888116 kubelet[1544]: I0813 00:51:34.888091 1544 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:51:34.888163 kubelet[1544]: I0813 00:51:34.888121 1544 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:51:34.889196 kubelet[1544]: I0813 00:51:34.889101 1544 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:51:34.891652 kubelet[1544]: E0813 00:51:34.891550 1544 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Aug 13 00:51:34.891652 kubelet[1544]: E0813 00:51:34.891621 1544 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.8-a-e4f4484119\" not found" Aug 13 00:51:34.905231 kubelet[1544]: I0813 00:51:34.905164 1544 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Aug 13 00:51:34.908793 kubelet[1544]: I0813 00:51:34.908751 1544 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Aug 13 00:51:34.909028 kubelet[1544]: I0813 00:51:34.909009 1544 status_manager.go:230] "Starting to sync pod status with apiserver" Aug 13 00:51:34.909173 kubelet[1544]: I0813 00:51:34.909157 1544 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Aug 13 00:51:34.909261 kubelet[1544]: I0813 00:51:34.909250 1544 kubelet.go:2436] "Starting kubelet main sync loop" Aug 13 00:51:34.909497 kubelet[1544]: E0813 00:51:34.909474 1544 kubelet.go:2460] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Aug 13 00:51:34.910575 kubelet[1544]: E0813 00:51:34.910535 1544 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://143.198.60.143:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 143.198.60.143:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Aug 13 00:51:34.990873 kubelet[1544]: I0813 00:51:34.990698 1544 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-a-e4f4484119" Aug 13 00:51:34.994449 kubelet[1544]: E0813 00:51:34.994398 1544 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://143.198.60.143:6443/api/v1/nodes\": dial tcp 143.198.60.143:6443: connect: connection refused" node="ci-3510.3.8-a-e4f4484119" Aug 13 00:51:35.021815 systemd[1]: Created slice kubepods-burstable-poddbaeea74e777a7b7976bfa0196643f0d.slice. Aug 13 00:51:35.032242 kubelet[1544]: E0813 00:51:35.031940 1544 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-a-e4f4484119\" not found" node="ci-3510.3.8-a-e4f4484119" Aug 13 00:51:35.035628 kubelet[1544]: E0813 00:51:35.035586 1544 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.60.143:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-a-e4f4484119?timeout=10s\": dial tcp 143.198.60.143:6443: connect: connection refused" interval="400ms" Aug 13 00:51:35.039922 systemd[1]: Created slice kubepods-burstable-pod58a86ba44b38df75aaa678c7fc389126.slice. Aug 13 00:51:35.049201 kubelet[1544]: E0813 00:51:35.049135 1544 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-a-e4f4484119\" not found" node="ci-3510.3.8-a-e4f4484119" Aug 13 00:51:35.053513 systemd[1]: Created slice kubepods-burstable-pod3d0312ed039e610470ca3a1361f5ce37.slice. 
Aug 13 00:51:35.057536 kubelet[1544]: E0813 00:51:35.057484 1544 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-a-e4f4484119\" not found" node="ci-3510.3.8-a-e4f4484119" Aug 13 00:51:35.130132 kubelet[1544]: I0813 00:51:35.129976 1544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dbaeea74e777a7b7976bfa0196643f0d-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-a-e4f4484119\" (UID: \"dbaeea74e777a7b7976bfa0196643f0d\") " pod="kube-system/kube-apiserver-ci-3510.3.8-a-e4f4484119" Aug 13 00:51:35.130506 kubelet[1544]: I0813 00:51:35.130462 1544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dbaeea74e777a7b7976bfa0196643f0d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-a-e4f4484119\" (UID: \"dbaeea74e777a7b7976bfa0196643f0d\") " pod="kube-system/kube-apiserver-ci-3510.3.8-a-e4f4484119" Aug 13 00:51:35.130731 kubelet[1544]: I0813 00:51:35.130711 1544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3d0312ed039e610470ca3a1361f5ce37-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-a-e4f4484119\" (UID: \"3d0312ed039e610470ca3a1361f5ce37\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-a-e4f4484119" Aug 13 00:51:35.131010 kubelet[1544]: I0813 00:51:35.130931 1544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3d0312ed039e610470ca3a1361f5ce37-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-a-e4f4484119\" (UID: \"3d0312ed039e610470ca3a1361f5ce37\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-a-e4f4484119" Aug 13 00:51:35.131152 kubelet[1544]: I0813 00:51:35.131126 1544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3d0312ed039e610470ca3a1361f5ce37-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-a-e4f4484119\" (UID: \"3d0312ed039e610470ca3a1361f5ce37\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-a-e4f4484119" Aug 13 00:51:35.131383 kubelet[1544]: I0813 00:51:35.131346 1544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3d0312ed039e610470ca3a1361f5ce37-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-a-e4f4484119\" (UID: \"3d0312ed039e610470ca3a1361f5ce37\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-a-e4f4484119" Aug 13 00:51:35.131605 kubelet[1544]: I0813 00:51:35.131586 1544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/58a86ba44b38df75aaa678c7fc389126-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-a-e4f4484119\" (UID: \"58a86ba44b38df75aaa678c7fc389126\") " pod="kube-system/kube-scheduler-ci-3510.3.8-a-e4f4484119" Aug 13 00:51:35.131741 kubelet[1544]: I0813 00:51:35.131725 1544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dbaeea74e777a7b7976bfa0196643f0d-ca-certs\") pod 
\"kube-apiserver-ci-3510.3.8-a-e4f4484119\" (UID: \"dbaeea74e777a7b7976bfa0196643f0d\") " pod="kube-system/kube-apiserver-ci-3510.3.8-a-e4f4484119" Aug 13 00:51:35.131867 kubelet[1544]: I0813 00:51:35.131853 1544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3d0312ed039e610470ca3a1361f5ce37-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-a-e4f4484119\" (UID: \"3d0312ed039e610470ca3a1361f5ce37\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-a-e4f4484119" Aug 13 00:51:35.196441 kubelet[1544]: I0813 00:51:35.196396 1544 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-a-e4f4484119" Aug 13 00:51:35.197419 kubelet[1544]: E0813 00:51:35.197373 1544 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://143.198.60.143:6443/api/v1/nodes\": dial tcp 143.198.60.143:6443: connect: connection refused" node="ci-3510.3.8-a-e4f4484119" Aug 13 00:51:35.332587 kubelet[1544]: E0813 00:51:35.332464 1544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:51:35.334124 env[1192]: time="2025-08-13T00:51:35.333637186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-a-e4f4484119,Uid:dbaeea74e777a7b7976bfa0196643f0d,Namespace:kube-system,Attempt:0,}" Aug 13 00:51:35.350787 kubelet[1544]: E0813 00:51:35.350743 1544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:51:35.351653 env[1192]: time="2025-08-13T00:51:35.351595935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-a-e4f4484119,Uid:58a86ba44b38df75aaa678c7fc389126,Namespace:kube-system,Attempt:0,}" Aug 13 00:51:35.358592 kubelet[1544]: E0813 00:51:35.358551 1544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:51:35.359315 env[1192]: time="2025-08-13T00:51:35.359256185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-a-e4f4484119,Uid:3d0312ed039e610470ca3a1361f5ce37,Namespace:kube-system,Attempt:0,}" Aug 13 00:51:35.456617 kubelet[1544]: E0813 00:51:35.456548 1544 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.60.143:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-a-e4f4484119?timeout=10s\": dial tcp 143.198.60.143:6443: connect: connection refused" interval="800ms" Aug 13 00:51:35.599820 kubelet[1544]: I0813 00:51:35.599670 1544 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-a-e4f4484119" Aug 13 00:51:35.601086 kubelet[1544]: E0813 00:51:35.601004 1544 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://143.198.60.143:6443/api/v1/nodes\": dial tcp 143.198.60.143:6443: connect: connection refused" node="ci-3510.3.8-a-e4f4484119" Aug 13 00:51:35.756808 kubelet[1544]: E0813 00:51:35.756535 1544 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://143.198.60.143:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 143.198.60.143:6443: connect: 
connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Aug 13 00:51:35.843590 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount138706920.mount: Deactivated successfully. Aug 13 00:51:35.850356 env[1192]: time="2025-08-13T00:51:35.850104936Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:51:35.853563 env[1192]: time="2025-08-13T00:51:35.853497298Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:51:35.854844 env[1192]: time="2025-08-13T00:51:35.854787023Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:51:35.857215 env[1192]: time="2025-08-13T00:51:35.857154770Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:51:35.861110 env[1192]: time="2025-08-13T00:51:35.861060078Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:51:35.865468 env[1192]: time="2025-08-13T00:51:35.865248929Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:51:35.867081 env[1192]: time="2025-08-13T00:51:35.867020479Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:51:35.868748 env[1192]: time="2025-08-13T00:51:35.868699960Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:51:35.870045 env[1192]: time="2025-08-13T00:51:35.869987633Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:51:35.871560 env[1192]: time="2025-08-13T00:51:35.871511123Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:51:35.872675 env[1192]: time="2025-08-13T00:51:35.872638241Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:51:35.873958 env[1192]: time="2025-08-13T00:51:35.873907646Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:51:35.913721 env[1192]: time="2025-08-13T00:51:35.913618626Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:51:35.915437 env[1192]: time="2025-08-13T00:51:35.915341587Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:51:35.915437 env[1192]: time="2025-08-13T00:51:35.915372782Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:51:35.916087 env[1192]: time="2025-08-13T00:51:35.915948738Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/06de5fb96e7f8b7b2ff513ed2c530e90ef27f25eca9c5e1f50b6f77c9ca28450 pid=1587 runtime=io.containerd.runc.v2 Aug 13 00:51:35.922360 env[1192]: time="2025-08-13T00:51:35.922224462Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:51:35.922569 env[1192]: time="2025-08-13T00:51:35.922390310Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:51:35.922569 env[1192]: time="2025-08-13T00:51:35.922447789Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:51:35.922953 env[1192]: time="2025-08-13T00:51:35.922887481Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c79cfb065fee7da9362f40ba06a62845c03a958cd71a787ea53a8677a191f2c9 pid=1608 runtime=io.containerd.runc.v2 Aug 13 00:51:35.940413 env[1192]: time="2025-08-13T00:51:35.940248560Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:51:35.940413 env[1192]: time="2025-08-13T00:51:35.940340500Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:51:35.940413 env[1192]: time="2025-08-13T00:51:35.940356004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:51:35.941960 env[1192]: time="2025-08-13T00:51:35.941833481Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8a28a3446efd304df5e6504b70912f72918c05a93ca75a29ee7c46f94b889f31 pid=1622 runtime=io.containerd.runc.v2 Aug 13 00:51:35.962315 systemd[1]: Started cri-containerd-06de5fb96e7f8b7b2ff513ed2c530e90ef27f25eca9c5e1f50b6f77c9ca28450.scope. Aug 13 00:51:35.991015 systemd[1]: Started cri-containerd-c79cfb065fee7da9362f40ba06a62845c03a958cd71a787ea53a8677a191f2c9.scope. Aug 13 00:51:36.016461 systemd[1]: Started cri-containerd-8a28a3446efd304df5e6504b70912f72918c05a93ca75a29ee7c46f94b889f31.scope. 
Aug 13 00:51:36.109266 env[1192]: time="2025-08-13T00:51:36.109078208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-a-e4f4484119,Uid:58a86ba44b38df75aaa678c7fc389126,Namespace:kube-system,Attempt:0,} returns sandbox id \"06de5fb96e7f8b7b2ff513ed2c530e90ef27f25eca9c5e1f50b6f77c9ca28450\"" Aug 13 00:51:36.115911 kubelet[1544]: E0813 00:51:36.114513 1544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:51:36.122985 kubelet[1544]: E0813 00:51:36.122912 1544 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://143.198.60.143:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-a-e4f4484119&limit=500&resourceVersion=0\": dial tcp 143.198.60.143:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Aug 13 00:51:36.123378 env[1192]: time="2025-08-13T00:51:36.123319192Z" level=info msg="CreateContainer within sandbox \"06de5fb96e7f8b7b2ff513ed2c530e90ef27f25eca9c5e1f50b6f77c9ca28450\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 00:51:36.136420 env[1192]: time="2025-08-13T00:51:36.136336668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-a-e4f4484119,Uid:3d0312ed039e610470ca3a1361f5ce37,Namespace:kube-system,Attempt:0,} returns sandbox id \"c79cfb065fee7da9362f40ba06a62845c03a958cd71a787ea53a8677a191f2c9\"" Aug 13 00:51:36.138180 kubelet[1544]: E0813 00:51:36.137904 1544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:51:36.142656 env[1192]: time="2025-08-13T00:51:36.142574824Z" level=info msg="CreateContainer within sandbox \"c79cfb065fee7da9362f40ba06a62845c03a958cd71a787ea53a8677a191f2c9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 00:51:36.144867 env[1192]: time="2025-08-13T00:51:36.144788336Z" level=info msg="CreateContainer within sandbox \"06de5fb96e7f8b7b2ff513ed2c530e90ef27f25eca9c5e1f50b6f77c9ca28450\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"252328723f00b15ec2d06c241cda0a1434943c0434110053a55824df41fa0f31\"" Aug 13 00:51:36.146190 env[1192]: time="2025-08-13T00:51:36.146127993Z" level=info msg="StartContainer for \"252328723f00b15ec2d06c241cda0a1434943c0434110053a55824df41fa0f31\"" Aug 13 00:51:36.161622 env[1192]: time="2025-08-13T00:51:36.161531603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-a-e4f4484119,Uid:dbaeea74e777a7b7976bfa0196643f0d,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a28a3446efd304df5e6504b70912f72918c05a93ca75a29ee7c46f94b889f31\"" Aug 13 00:51:36.167046 kubelet[1544]: E0813 00:51:36.166749 1544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:51:36.170446 env[1192]: time="2025-08-13T00:51:36.170377371Z" level=info msg="CreateContainer within sandbox \"c79cfb065fee7da9362f40ba06a62845c03a958cd71a787ea53a8677a191f2c9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ad2c7d4fe15703fdb6791351b859edc5eec6d0e95e1311a603998029b431c1ad\"" Aug 13 
00:51:36.171510 env[1192]: time="2025-08-13T00:51:36.171461408Z" level=info msg="StartContainer for \"ad2c7d4fe15703fdb6791351b859edc5eec6d0e95e1311a603998029b431c1ad\"" Aug 13 00:51:36.174307 env[1192]: time="2025-08-13T00:51:36.174223533Z" level=info msg="CreateContainer within sandbox \"8a28a3446efd304df5e6504b70912f72918c05a93ca75a29ee7c46f94b889f31\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 00:51:36.175741 kubelet[1544]: E0813 00:51:36.175670 1544 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://143.198.60.143:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 143.198.60.143:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Aug 13 00:51:36.186283 systemd[1]: Started cri-containerd-252328723f00b15ec2d06c241cda0a1434943c0434110053a55824df41fa0f31.scope. Aug 13 00:51:36.202326 env[1192]: time="2025-08-13T00:51:36.202235984Z" level=info msg="CreateContainer within sandbox \"8a28a3446efd304df5e6504b70912f72918c05a93ca75a29ee7c46f94b889f31\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"471281d6a27c01af683a4c1fc0c11b48e9afe7f7dbbc5a7a75db4026269e936c\"" Aug 13 00:51:36.203522 env[1192]: time="2025-08-13T00:51:36.203469131Z" level=info msg="StartContainer for \"471281d6a27c01af683a4c1fc0c11b48e9afe7f7dbbc5a7a75db4026269e936c\"" Aug 13 00:51:36.234562 kubelet[1544]: E0813 00:51:36.234459 1544 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://143.198.60.143:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 143.198.60.143:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Aug 13 00:51:36.252331 systemd[1]: Started cri-containerd-ad2c7d4fe15703fdb6791351b859edc5eec6d0e95e1311a603998029b431c1ad.scope. Aug 13 00:51:36.257725 kubelet[1544]: E0813 00:51:36.257636 1544 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.60.143:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-a-e4f4484119?timeout=10s\": dial tcp 143.198.60.143:6443: connect: connection refused" interval="1.6s" Aug 13 00:51:36.262234 systemd[1]: Started cri-containerd-471281d6a27c01af683a4c1fc0c11b48e9afe7f7dbbc5a7a75db4026269e936c.scope. 
Aug 13 00:51:36.310017 env[1192]: time="2025-08-13T00:51:36.309943891Z" level=info msg="StartContainer for \"252328723f00b15ec2d06c241cda0a1434943c0434110053a55824df41fa0f31\" returns successfully" Aug 13 00:51:36.404542 env[1192]: time="2025-08-13T00:51:36.404367332Z" level=info msg="StartContainer for \"471281d6a27c01af683a4c1fc0c11b48e9afe7f7dbbc5a7a75db4026269e936c\" returns successfully" Aug 13 00:51:36.406583 kubelet[1544]: I0813 00:51:36.406416 1544 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-a-e4f4484119" Aug 13 00:51:36.407177 kubelet[1544]: E0813 00:51:36.407114 1544 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://143.198.60.143:6443/api/v1/nodes\": dial tcp 143.198.60.143:6443: connect: connection refused" node="ci-3510.3.8-a-e4f4484119" Aug 13 00:51:36.437584 env[1192]: time="2025-08-13T00:51:36.437482839Z" level=info msg="StartContainer for \"ad2c7d4fe15703fdb6791351b859edc5eec6d0e95e1311a603998029b431c1ad\" returns successfully" Aug 13 00:51:36.749960 kubelet[1544]: E0813 00:51:36.749886 1544 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://143.198.60.143:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 143.198.60.143:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Aug 13 00:51:36.919535 kubelet[1544]: E0813 00:51:36.919481 1544 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-a-e4f4484119\" not found" node="ci-3510.3.8-a-e4f4484119" Aug 13 00:51:36.920134 kubelet[1544]: E0813 00:51:36.919764 1544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:51:36.925225 kubelet[1544]: E0813 00:51:36.925169 1544 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-a-e4f4484119\" not found" node="ci-3510.3.8-a-e4f4484119" Aug 13 00:51:36.925749 kubelet[1544]: E0813 00:51:36.925719 1544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:51:36.931746 kubelet[1544]: E0813 00:51:36.931697 1544 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-a-e4f4484119\" not found" node="ci-3510.3.8-a-e4f4484119" Aug 13 00:51:36.932317 kubelet[1544]: E0813 00:51:36.932258 1544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:51:37.934509 kubelet[1544]: E0813 00:51:37.934456 1544 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-a-e4f4484119\" not found" node="ci-3510.3.8-a-e4f4484119" Aug 13 00:51:37.935198 kubelet[1544]: E0813 00:51:37.934700 1544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:51:37.935563 kubelet[1544]: E0813 00:51:37.935526 1544 kubelet.go:3305] "No need to create a mirror pod, 
since failed to get node info from the cluster" err="node \"ci-3510.3.8-a-e4f4484119\" not found" node="ci-3510.3.8-a-e4f4484119" Aug 13 00:51:37.935895 kubelet[1544]: E0813 00:51:37.935857 1544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:51:38.009088 kubelet[1544]: I0813 00:51:38.009049 1544 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-a-e4f4484119" Aug 13 00:51:38.936141 kubelet[1544]: E0813 00:51:38.936090 1544 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-a-e4f4484119\" not found" node="ci-3510.3.8-a-e4f4484119" Aug 13 00:51:38.936714 kubelet[1544]: E0813 00:51:38.936329 1544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:51:40.038831 kubelet[1544]: E0813 00:51:40.038778 1544 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.8-a-e4f4484119\" not found" node="ci-3510.3.8-a-e4f4484119" Aug 13 00:51:40.063575 kubelet[1544]: I0813 00:51:40.063500 1544 kubelet_node_status.go:78] "Successfully registered node" node="ci-3510.3.8-a-e4f4484119" Aug 13 00:51:40.063575 kubelet[1544]: E0813 00:51:40.063578 1544 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-3510.3.8-a-e4f4484119\": node \"ci-3510.3.8-a-e4f4484119\" not found" Aug 13 00:51:40.112739 kubelet[1544]: E0813 00:51:40.112583 1544 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-3510.3.8-a-e4f4484119.185b2d4e76a13077 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.8-a-e4f4484119,UID:ci-3510.3.8-a-e4f4484119,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.8-a-e4f4484119,},FirstTimestamp:2025-08-13 00:51:34.811914359 +0000 UTC m=+1.986346064,LastTimestamp:2025-08-13 00:51:34.811914359 +0000 UTC m=+1.986346064,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.8-a-e4f4484119,}" Aug 13 00:51:40.128856 kubelet[1544]: I0813 00:51:40.128797 1544 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-a-e4f4484119" Aug 13 00:51:40.159499 kubelet[1544]: E0813 00:51:40.159412 1544 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.8-a-e4f4484119\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510.3.8-a-e4f4484119" Aug 13 00:51:40.159499 kubelet[1544]: I0813 00:51:40.159480 1544 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-a-e4f4484119" Aug 13 00:51:40.163428 kubelet[1544]: E0813 00:51:40.163374 1544 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-3510.3.8-a-e4f4484119\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-3510.3.8-a-e4f4484119" Aug 13 00:51:40.163819 kubelet[1544]: I0813 00:51:40.163787 1544 
kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-a-e4f4484119" Aug 13 00:51:40.167823 kubelet[1544]: E0813 00:51:40.167758 1544 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.8-a-e4f4484119\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-3510.3.8-a-e4f4484119" Aug 13 00:51:40.801417 kubelet[1544]: I0813 00:51:40.801352 1544 apiserver.go:52] "Watching apiserver" Aug 13 00:51:40.829659 kubelet[1544]: I0813 00:51:40.829605 1544 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 00:51:42.315852 systemd[1]: Reloading. Aug 13 00:51:42.445878 /usr/lib/systemd/system-generators/torcx-generator[1849]: time="2025-08-13T00:51:42Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 00:51:42.445925 /usr/lib/systemd/system-generators/torcx-generator[1849]: time="2025-08-13T00:51:42Z" level=info msg="torcx already run" Aug 13 00:51:42.583795 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 00:51:42.584171 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 00:51:42.618968 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:51:42.817803 systemd[1]: Stopping kubelet.service... Aug 13 00:51:42.839582 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 00:51:42.839832 systemd[1]: Stopped kubelet.service. Aug 13 00:51:42.839902 systemd[1]: kubelet.service: Consumed 2.535s CPU time. Aug 13 00:51:42.842654 systemd[1]: Starting kubelet.service... Aug 13 00:51:44.084550 systemd[1]: Started kubelet.service. Aug 13 00:51:44.248330 sudo[1907]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Aug 13 00:51:44.248722 sudo[1907]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Aug 13 00:51:44.259256 kubelet[1897]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:51:44.260425 kubelet[1897]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 13 00:51:44.260591 kubelet[1897]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Aug 13 00:51:44.261250 kubelet[1897]: I0813 00:51:44.261171 1897 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:51:44.280602 kubelet[1897]: I0813 00:51:44.280463 1897 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Aug 13 00:51:44.280963 kubelet[1897]: I0813 00:51:44.280876 1897 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:51:44.281716 kubelet[1897]: I0813 00:51:44.281679 1897 server.go:956] "Client rotation is on, will bootstrap in background" Aug 13 00:51:44.297361 kubelet[1897]: I0813 00:51:44.297305 1897 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Aug 13 00:51:44.307401 kubelet[1897]: I0813 00:51:44.307346 1897 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:51:44.316367 kubelet[1897]: E0813 00:51:44.316290 1897 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 00:51:44.316767 kubelet[1897]: I0813 00:51:44.316741 1897 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 00:51:44.322265 kubelet[1897]: I0813 00:51:44.322219 1897 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 13 00:51:44.322913 kubelet[1897]: I0813 00:51:44.322808 1897 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:51:44.323557 kubelet[1897]: I0813 00:51:44.323073 1897 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-a-e4f4484119","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 00:51:44.323902 kubelet[1897]: I0813 00:51:44.323880 1897 topology_manager.go:138] 
"Creating topology manager with none policy" Aug 13 00:51:44.324208 kubelet[1897]: I0813 00:51:44.324192 1897 container_manager_linux.go:303] "Creating device plugin manager" Aug 13 00:51:44.324430 kubelet[1897]: I0813 00:51:44.324414 1897 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:51:44.324913 kubelet[1897]: I0813 00:51:44.324888 1897 kubelet.go:480] "Attempting to sync node with API server" Aug 13 00:51:44.327180 kubelet[1897]: I0813 00:51:44.327139 1897 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:51:44.327457 kubelet[1897]: I0813 00:51:44.327434 1897 kubelet.go:386] "Adding apiserver pod source" Aug 13 00:51:44.327607 kubelet[1897]: I0813 00:51:44.327589 1897 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:51:44.360869 kubelet[1897]: I0813 00:51:44.359954 1897 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Aug 13 00:51:44.363136 kubelet[1897]: I0813 00:51:44.363096 1897 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Aug 13 00:51:44.379373 kubelet[1897]: I0813 00:51:44.379201 1897 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 00:51:44.379931 kubelet[1897]: I0813 00:51:44.379906 1897 server.go:1289] "Started kubelet" Aug 13 00:51:44.387447 kubelet[1897]: I0813 00:51:44.387383 1897 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:51:44.387982 kubelet[1897]: I0813 00:51:44.387263 1897 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:51:44.390633 kubelet[1897]: I0813 00:51:44.390588 1897 server.go:317] "Adding debug handlers to kubelet server" Aug 13 00:51:44.402387 kubelet[1897]: I0813 00:51:44.400374 1897 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:51:44.404301 kubelet[1897]: I0813 00:51:44.404235 1897 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:51:44.412771 kubelet[1897]: I0813 00:51:44.404402 1897 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:51:44.413911 kubelet[1897]: I0813 00:51:44.407132 1897 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 00:51:44.415004 kubelet[1897]: I0813 00:51:44.407157 1897 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 00:51:44.421551 kubelet[1897]: I0813 00:51:44.412463 1897 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:51:44.422640 kubelet[1897]: I0813 00:51:44.415499 1897 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:51:44.428229 kubelet[1897]: E0813 00:51:44.428180 1897 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:51:44.432952 kubelet[1897]: I0813 00:51:44.432904 1897 factory.go:223] Registration of the containerd container factory successfully Aug 13 00:51:44.433485 kubelet[1897]: I0813 00:51:44.433429 1897 factory.go:223] Registration of the systemd container factory successfully Aug 13 00:51:44.552603 kubelet[1897]: I0813 00:51:44.552557 1897 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 00:51:44.552908 kubelet[1897]: I0813 00:51:44.552863 1897 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 00:51:44.554998 kubelet[1897]: I0813 00:51:44.554926 1897 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:51:44.556065 kubelet[1897]: I0813 00:51:44.556021 1897 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 00:51:44.556326 kubelet[1897]: I0813 00:51:44.556247 1897 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 00:51:44.556664 kubelet[1897]: I0813 00:51:44.556635 1897 policy_none.go:49] "None policy: Start" Aug 13 00:51:44.556828 kubelet[1897]: I0813 00:51:44.556812 1897 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 00:51:44.556939 kubelet[1897]: I0813 00:51:44.556926 1897 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:51:44.557726 kubelet[1897]: I0813 00:51:44.557704 1897 state_mem.go:75] "Updated machine memory state" Aug 13 00:51:44.562908 kubelet[1897]: E0813 00:51:44.562874 1897 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Aug 13 00:51:44.563353 kubelet[1897]: I0813 00:51:44.563330 1897 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:51:44.563538 kubelet[1897]: I0813 00:51:44.563490 1897 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:51:44.565083 kubelet[1897]: I0813 00:51:44.565056 1897 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:51:44.579454 kubelet[1897]: E0813 00:51:44.579414 1897 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Aug 13 00:51:44.584490 kubelet[1897]: I0813 00:51:44.584438 1897 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Aug 13 00:51:44.592538 kubelet[1897]: I0813 00:51:44.592486 1897 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Aug 13 00:51:44.592799 kubelet[1897]: I0813 00:51:44.592778 1897 status_manager.go:230] "Starting to sync pod status with apiserver" Aug 13 00:51:44.592936 kubelet[1897]: I0813 00:51:44.592917 1897 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Aug 13 00:51:44.593023 kubelet[1897]: I0813 00:51:44.593010 1897 kubelet.go:2436] "Starting kubelet main sync loop" Aug 13 00:51:44.593232 kubelet[1897]: E0813 00:51:44.593191 1897 kubelet.go:2460] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Aug 13 00:51:44.677521 kubelet[1897]: I0813 00:51:44.677319 1897 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-a-e4f4484119" Aug 13 00:51:44.687952 kubelet[1897]: I0813 00:51:44.687908 1897 kubelet_node_status.go:124] "Node was previously registered" node="ci-3510.3.8-a-e4f4484119" Aug 13 00:51:44.688353 kubelet[1897]: I0813 00:51:44.688326 1897 kubelet_node_status.go:78] "Successfully registered node" node="ci-3510.3.8-a-e4f4484119" Aug 13 00:51:44.695170 kubelet[1897]: I0813 00:51:44.694674 1897 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-a-e4f4484119" Aug 13 00:51:44.695407 kubelet[1897]: I0813 00:51:44.695195 1897 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-a-e4f4484119" Aug 13 00:51:44.695645 kubelet[1897]: I0813 00:51:44.695510 1897 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-a-e4f4484119" Aug 13 00:51:44.708390 kubelet[1897]: I0813 00:51:44.708348 1897 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Aug 13 00:51:44.708988 kubelet[1897]: I0813 00:51:44.708955 1897 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Aug 13 00:51:44.714709 kubelet[1897]: I0813 00:51:44.714671 1897 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Aug 13 00:51:44.728352 kubelet[1897]: I0813 00:51:44.728297 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dbaeea74e777a7b7976bfa0196643f0d-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-a-e4f4484119\" (UID: \"dbaeea74e777a7b7976bfa0196643f0d\") " pod="kube-system/kube-apiserver-ci-3510.3.8-a-e4f4484119" Aug 13 00:51:44.728676 kubelet[1897]: I0813 00:51:44.728635 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dbaeea74e777a7b7976bfa0196643f0d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-a-e4f4484119\" (UID: \"dbaeea74e777a7b7976bfa0196643f0d\") " pod="kube-system/kube-apiserver-ci-3510.3.8-a-e4f4484119" Aug 13 00:51:44.728868 kubelet[1897]: I0813 00:51:44.728847 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3d0312ed039e610470ca3a1361f5ce37-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-a-e4f4484119\" (UID: \"3d0312ed039e610470ca3a1361f5ce37\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-a-e4f4484119" Aug 13 00:51:44.729064 kubelet[1897]: I0813 00:51:44.729015 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/3d0312ed039e610470ca3a1361f5ce37-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-a-e4f4484119\" (UID: \"3d0312ed039e610470ca3a1361f5ce37\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-a-e4f4484119" Aug 13 00:51:44.729351 kubelet[1897]: I0813 00:51:44.729262 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3d0312ed039e610470ca3a1361f5ce37-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-a-e4f4484119\" (UID: \"3d0312ed039e610470ca3a1361f5ce37\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-a-e4f4484119" Aug 13 00:51:44.729598 kubelet[1897]: I0813 00:51:44.729541 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3d0312ed039e610470ca3a1361f5ce37-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-a-e4f4484119\" (UID: \"3d0312ed039e610470ca3a1361f5ce37\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-a-e4f4484119" Aug 13 00:51:44.729874 kubelet[1897]: I0813 00:51:44.729836 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/58a86ba44b38df75aaa678c7fc389126-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-a-e4f4484119\" (UID: \"58a86ba44b38df75aaa678c7fc389126\") " pod="kube-system/kube-scheduler-ci-3510.3.8-a-e4f4484119" Aug 13 00:51:44.730097 kubelet[1897]: I0813 00:51:44.730042 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dbaeea74e777a7b7976bfa0196643f0d-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-a-e4f4484119\" (UID: \"dbaeea74e777a7b7976bfa0196643f0d\") " pod="kube-system/kube-apiserver-ci-3510.3.8-a-e4f4484119" Aug 13 00:51:44.730589 kubelet[1897]: I0813 00:51:44.730532 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3d0312ed039e610470ca3a1361f5ce37-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-a-e4f4484119\" (UID: \"3d0312ed039e610470ca3a1361f5ce37\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-a-e4f4484119" Aug 13 00:51:45.010209 kubelet[1897]: E0813 00:51:45.010128 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:51:45.010522 kubelet[1897]: E0813 00:51:45.010493 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:51:45.015617 kubelet[1897]: E0813 00:51:45.015575 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:51:45.187022 sudo[1907]: pam_unix(sudo:session): session closed for user root Aug 13 00:51:45.341893 kubelet[1897]: I0813 00:51:45.341730 1897 apiserver.go:52] "Watching apiserver" Aug 13 00:51:45.422015 kubelet[1897]: I0813 00:51:45.421928 1897 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 00:51:45.638888 kubelet[1897]: E0813 00:51:45.638730 1897 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:51:45.639862 kubelet[1897]: I0813 00:51:45.639817 1897 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-a-e4f4484119" Aug 13 00:51:45.640580 kubelet[1897]: E0813 00:51:45.640546 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:51:45.655983 kubelet[1897]: I0813 00:51:45.655934 1897 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Aug 13 00:51:45.656237 kubelet[1897]: E0813 00:51:45.656022 1897 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.8-a-e4f4484119\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.8-a-e4f4484119" Aug 13 00:51:45.656485 kubelet[1897]: E0813 00:51:45.656441 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:51:45.708044 kubelet[1897]: I0813 00:51:45.707930 1897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.8-a-e4f4484119" podStartSLOduration=1.707904375 podStartE2EDuration="1.707904375s" podCreationTimestamp="2025-08-13 00:51:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:51:45.692546786 +0000 UTC m=+1.576453823" watchObservedRunningTime="2025-08-13 00:51:45.707904375 +0000 UTC m=+1.591811410" Aug 13 00:51:45.728310 kubelet[1897]: I0813 00:51:45.728217 1897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.8-a-e4f4484119" podStartSLOduration=1.728189663 podStartE2EDuration="1.728189663s" podCreationTimestamp="2025-08-13 00:51:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:51:45.710291588 +0000 UTC m=+1.594198622" watchObservedRunningTime="2025-08-13 00:51:45.728189663 +0000 UTC m=+1.612096734" Aug 13 00:51:46.641513 kubelet[1897]: E0813 00:51:46.641474 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:51:46.642752 kubelet[1897]: E0813 00:51:46.641975 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:51:47.368491 sudo[1292]: pam_unix(sudo:session): session closed for user root Aug 13 00:51:47.373146 sshd[1288]: pam_unix(sshd:session): session closed for user core Aug 13 00:51:47.377013 systemd[1]: sshd@4-143.198.60.143:22-139.178.68.195:52378.service: Deactivated successfully. Aug 13 00:51:47.378396 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 00:51:47.378636 systemd[1]: session-5.scope: Consumed 7.263s CPU time. Aug 13 00:51:47.379471 systemd-logind[1175]: Session 5 logged out. Waiting for processes to exit. 
Aug 13 00:51:47.380977 systemd-logind[1175]: Removed session 5. Aug 13 00:51:47.390038 kubelet[1897]: I0813 00:51:47.389992 1897 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 00:51:47.391049 env[1192]: time="2025-08-13T00:51:47.390984203Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 13 00:51:47.391689 kubelet[1897]: I0813 00:51:47.391479 1897 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 00:51:48.142178 kubelet[1897]: I0813 00:51:48.142040 1897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.8-a-e4f4484119" podStartSLOduration=4.142012461 podStartE2EDuration="4.142012461s" podCreationTimestamp="2025-08-13 00:51:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:51:45.729335756 +0000 UTC m=+1.613242792" watchObservedRunningTime="2025-08-13 00:51:48.142012461 +0000 UTC m=+4.025919476" Aug 13 00:51:48.154650 systemd[1]: Created slice kubepods-besteffort-pod7e7cdf2f_089d_406a_8e8e_91d28f9ab1a9.slice. Aug 13 00:51:48.179462 systemd[1]: Created slice kubepods-burstable-podc7209e85_586c_43c8_99f2_e24879211658.slice. Aug 13 00:51:48.258722 kubelet[1897]: I0813 00:51:48.258646 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c7209e85-586c-43c8-99f2-e24879211658-etc-cni-netd\") pod \"cilium-whmd7\" (UID: \"c7209e85-586c-43c8-99f2-e24879211658\") " pod="kube-system/cilium-whmd7" Aug 13 00:51:48.258942 kubelet[1897]: I0813 00:51:48.258738 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c7209e85-586c-43c8-99f2-e24879211658-host-proc-sys-net\") pod \"cilium-whmd7\" (UID: \"c7209e85-586c-43c8-99f2-e24879211658\") " pod="kube-system/cilium-whmd7" Aug 13 00:51:48.258942 kubelet[1897]: I0813 00:51:48.258770 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c7209e85-586c-43c8-99f2-e24879211658-hubble-tls\") pod \"cilium-whmd7\" (UID: \"c7209e85-586c-43c8-99f2-e24879211658\") " pod="kube-system/cilium-whmd7" Aug 13 00:51:48.258942 kubelet[1897]: I0813 00:51:48.258824 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59xm7\" (UniqueName: \"kubernetes.io/projected/c7209e85-586c-43c8-99f2-e24879211658-kube-api-access-59xm7\") pod \"cilium-whmd7\" (UID: \"c7209e85-586c-43c8-99f2-e24879211658\") " pod="kube-system/cilium-whmd7" Aug 13 00:51:48.258942 kubelet[1897]: I0813 00:51:48.258882 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7e7cdf2f-089d-406a-8e8e-91d28f9ab1a9-xtables-lock\") pod \"kube-proxy-c5v2t\" (UID: \"7e7cdf2f-089d-406a-8e8e-91d28f9ab1a9\") " pod="kube-system/kube-proxy-c5v2t" Aug 13 00:51:48.258942 kubelet[1897]: I0813 00:51:48.258904 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7e7cdf2f-089d-406a-8e8e-91d28f9ab1a9-lib-modules\") pod \"kube-proxy-c5v2t\" (UID: 
\"7e7cdf2f-089d-406a-8e8e-91d28f9ab1a9\") " pod="kube-system/kube-proxy-c5v2t" Aug 13 00:51:48.258942 kubelet[1897]: I0813 00:51:48.258926 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c7209e85-586c-43c8-99f2-e24879211658-hostproc\") pod \"cilium-whmd7\" (UID: \"c7209e85-586c-43c8-99f2-e24879211658\") " pod="kube-system/cilium-whmd7" Aug 13 00:51:48.259171 kubelet[1897]: I0813 00:51:48.258969 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c7209e85-586c-43c8-99f2-e24879211658-lib-modules\") pod \"cilium-whmd7\" (UID: \"c7209e85-586c-43c8-99f2-e24879211658\") " pod="kube-system/cilium-whmd7" Aug 13 00:51:48.259171 kubelet[1897]: I0813 00:51:48.258993 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c7209e85-586c-43c8-99f2-e24879211658-host-proc-sys-kernel\") pod \"cilium-whmd7\" (UID: \"c7209e85-586c-43c8-99f2-e24879211658\") " pod="kube-system/cilium-whmd7" Aug 13 00:51:48.259171 kubelet[1897]: I0813 00:51:48.259043 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkn7h\" (UniqueName: \"kubernetes.io/projected/7e7cdf2f-089d-406a-8e8e-91d28f9ab1a9-kube-api-access-zkn7h\") pod \"kube-proxy-c5v2t\" (UID: \"7e7cdf2f-089d-406a-8e8e-91d28f9ab1a9\") " pod="kube-system/kube-proxy-c5v2t" Aug 13 00:51:48.259171 kubelet[1897]: I0813 00:51:48.259070 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c7209e85-586c-43c8-99f2-e24879211658-cilium-run\") pod \"cilium-whmd7\" (UID: \"c7209e85-586c-43c8-99f2-e24879211658\") " pod="kube-system/cilium-whmd7" Aug 13 00:51:48.259171 kubelet[1897]: I0813 00:51:48.259111 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c7209e85-586c-43c8-99f2-e24879211658-cni-path\") pod \"cilium-whmd7\" (UID: \"c7209e85-586c-43c8-99f2-e24879211658\") " pod="kube-system/cilium-whmd7" Aug 13 00:51:48.259171 kubelet[1897]: I0813 00:51:48.259137 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c7209e85-586c-43c8-99f2-e24879211658-clustermesh-secrets\") pod \"cilium-whmd7\" (UID: \"c7209e85-586c-43c8-99f2-e24879211658\") " pod="kube-system/cilium-whmd7" Aug 13 00:51:48.259376 kubelet[1897]: I0813 00:51:48.259160 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c7209e85-586c-43c8-99f2-e24879211658-cilium-config-path\") pod \"cilium-whmd7\" (UID: \"c7209e85-586c-43c8-99f2-e24879211658\") " pod="kube-system/cilium-whmd7" Aug 13 00:51:48.259376 kubelet[1897]: I0813 00:51:48.259202 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7e7cdf2f-089d-406a-8e8e-91d28f9ab1a9-kube-proxy\") pod \"kube-proxy-c5v2t\" (UID: \"7e7cdf2f-089d-406a-8e8e-91d28f9ab1a9\") " pod="kube-system/kube-proxy-c5v2t" Aug 13 00:51:48.259376 kubelet[1897]: I0813 00:51:48.259229 1897 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c7209e85-586c-43c8-99f2-e24879211658-bpf-maps\") pod \"cilium-whmd7\" (UID: \"c7209e85-586c-43c8-99f2-e24879211658\") " pod="kube-system/cilium-whmd7" Aug 13 00:51:48.259376 kubelet[1897]: I0813 00:51:48.259264 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c7209e85-586c-43c8-99f2-e24879211658-cilium-cgroup\") pod \"cilium-whmd7\" (UID: \"c7209e85-586c-43c8-99f2-e24879211658\") " pod="kube-system/cilium-whmd7" Aug 13 00:51:48.259497 kubelet[1897]: I0813 00:51:48.259467 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c7209e85-586c-43c8-99f2-e24879211658-xtables-lock\") pod \"cilium-whmd7\" (UID: \"c7209e85-586c-43c8-99f2-e24879211658\") " pod="kube-system/cilium-whmd7" Aug 13 00:51:48.361345 kubelet[1897]: I0813 00:51:48.361249 1897 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Aug 13 00:51:48.465711 kubelet[1897]: E0813 00:51:48.465612 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:51:48.467030 env[1192]: time="2025-08-13T00:51:48.466922440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c5v2t,Uid:7e7cdf2f-089d-406a-8e8e-91d28f9ab1a9,Namespace:kube-system,Attempt:0,}" Aug 13 00:51:48.488931 kubelet[1897]: E0813 00:51:48.488865 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:51:48.490657 env[1192]: time="2025-08-13T00:51:48.490569254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-whmd7,Uid:c7209e85-586c-43c8-99f2-e24879211658,Namespace:kube-system,Attempt:0,}" Aug 13 00:51:48.523973 env[1192]: time="2025-08-13T00:51:48.523860919Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:51:48.524216 env[1192]: time="2025-08-13T00:51:48.524005912Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:51:48.524216 env[1192]: time="2025-08-13T00:51:48.524073964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:51:48.524512 env[1192]: time="2025-08-13T00:51:48.524457569Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8a7793e5f232fa5b28419e0625cce0188aa57baa86431d9efab749688b57fdef pid=1983 runtime=io.containerd.runc.v2 Aug 13 00:51:48.538779 env[1192]: time="2025-08-13T00:51:48.538645519Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:51:48.538986 env[1192]: time="2025-08-13T00:51:48.538845400Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:51:48.539053 env[1192]: time="2025-08-13T00:51:48.538933248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:51:48.539443 env[1192]: time="2025-08-13T00:51:48.539374118Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9130c6dda3d861fd7fc9cb5da8f0be14a915a7f7e84fd21461084319a855d04b pid=1993 runtime=io.containerd.runc.v2 Aug 13 00:51:48.564757 systemd[1]: Started cri-containerd-8a7793e5f232fa5b28419e0625cce0188aa57baa86431d9efab749688b57fdef.scope. Aug 13 00:51:48.599839 systemd[1]: Started cri-containerd-9130c6dda3d861fd7fc9cb5da8f0be14a915a7f7e84fd21461084319a855d04b.scope. Aug 13 00:51:48.636690 systemd[1]: Created slice kubepods-besteffort-podb9e56dd0_3118_4ddd_a4c4_488d58549fcf.slice. Aug 13 00:51:48.662894 kubelet[1897]: I0813 00:51:48.662842 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b9e56dd0-3118-4ddd-a4c4-488d58549fcf-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-prwxn\" (UID: \"b9e56dd0-3118-4ddd-a4c4-488d58549fcf\") " pod="kube-system/cilium-operator-6c4d7847fc-prwxn" Aug 13 00:51:48.662894 kubelet[1897]: I0813 00:51:48.662891 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crslq\" (UniqueName: \"kubernetes.io/projected/b9e56dd0-3118-4ddd-a4c4-488d58549fcf-kube-api-access-crslq\") pod \"cilium-operator-6c4d7847fc-prwxn\" (UID: \"b9e56dd0-3118-4ddd-a4c4-488d58549fcf\") " pod="kube-system/cilium-operator-6c4d7847fc-prwxn" Aug 13 00:51:48.687517 env[1192]: time="2025-08-13T00:51:48.687454281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-whmd7,Uid:c7209e85-586c-43c8-99f2-e24879211658,Namespace:kube-system,Attempt:0,} returns sandbox id \"9130c6dda3d861fd7fc9cb5da8f0be14a915a7f7e84fd21461084319a855d04b\"" Aug 13 00:51:48.689631 kubelet[1897]: E0813 00:51:48.689053 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:51:48.692708 env[1192]: time="2025-08-13T00:51:48.692647398Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Aug 13 00:51:48.736921 env[1192]: time="2025-08-13T00:51:48.736478740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c5v2t,Uid:7e7cdf2f-089d-406a-8e8e-91d28f9ab1a9,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a7793e5f232fa5b28419e0625cce0188aa57baa86431d9efab749688b57fdef\"" Aug 13 00:51:48.739144 kubelet[1897]: E0813 00:51:48.739072 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:51:48.748314 env[1192]: time="2025-08-13T00:51:48.748094742Z" level=info msg="CreateContainer within sandbox \"8a7793e5f232fa5b28419e0625cce0188aa57baa86431d9efab749688b57fdef\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 00:51:48.774019 env[1192]: time="2025-08-13T00:51:48.773921918Z" level=info msg="CreateContainer within sandbox \"8a7793e5f232fa5b28419e0625cce0188aa57baa86431d9efab749688b57fdef\" for 
&ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d4d55a03d62df4f25d8a9812b61c7ed6ac8eae2891fe65ac813c8002dc4eebeb\"" Aug 13 00:51:48.775304 env[1192]: time="2025-08-13T00:51:48.775195639Z" level=info msg="StartContainer for \"d4d55a03d62df4f25d8a9812b61c7ed6ac8eae2891fe65ac813c8002dc4eebeb\"" Aug 13 00:51:48.808562 systemd[1]: Started cri-containerd-d4d55a03d62df4f25d8a9812b61c7ed6ac8eae2891fe65ac813c8002dc4eebeb.scope. Aug 13 00:51:48.868147 env[1192]: time="2025-08-13T00:51:48.868073268Z" level=info msg="StartContainer for \"d4d55a03d62df4f25d8a9812b61c7ed6ac8eae2891fe65ac813c8002dc4eebeb\" returns successfully" Aug 13 00:51:48.941783 kubelet[1897]: E0813 00:51:48.941353 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:51:48.942496 env[1192]: time="2025-08-13T00:51:48.942419156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-prwxn,Uid:b9e56dd0-3118-4ddd-a4c4-488d58549fcf,Namespace:kube-system,Attempt:0,}" Aug 13 00:51:48.965093 env[1192]: time="2025-08-13T00:51:48.964877636Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:51:48.965093 env[1192]: time="2025-08-13T00:51:48.964958916Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:51:48.965093 env[1192]: time="2025-08-13T00:51:48.964970575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:51:48.966337 env[1192]: time="2025-08-13T00:51:48.966186384Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fe51e9d8c27238f37015dd6f19ab9f3b5473f6c3f85492fcfd09aa5bf5ca754c pid=2099 runtime=io.containerd.runc.v2 Aug 13 00:51:48.997682 systemd[1]: Started cri-containerd-fe51e9d8c27238f37015dd6f19ab9f3b5473f6c3f85492fcfd09aa5bf5ca754c.scope. 
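
Each RunPodSandbox above ends with containerd's runc v2 shim announcing itself with a "starting signal loop" record that carries the 64-character sandbox ID in its task path and the shim's PID (1983 and 1993 for the kube-proxy and cilium sandboxes, 2099 for the cilium-operator sandbox). A minimal illustrative sketch, not part of any tooling shown in this log, for recovering that sandbox-to-shim mapping from journal text in this format:

import re

# Matches the shim start-up records above, e.g.
#   msg="starting signal loop" namespace=k8s.io
#   path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/<sandbox-id> pid=1983 runtime=io.containerd.runc.v2
SHIM_RE = re.compile(
    r'msg="starting signal loop" namespace=(?P<ns>\S+) '
    r'path=\S+/k8s\.io/(?P<sandbox>[0-9a-f]{64}) pid=(?P<pid>\d+)'
)

def shim_pids(journal_text):
    """Return {sandbox_id: shim_pid} for every shim start found in the text."""
    return {m.group("sandbox"): int(m.group("pid"))
            for m in SHIM_RE.finditer(journal_text)}

# Against the entries above this yields 8a7793e5... -> 1983, 9130c6dd... -> 1993
# and fe51e9d8... -> 2099.
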
Aug 13 00:51:49.096427 env[1192]: time="2025-08-13T00:51:49.096362987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-prwxn,Uid:b9e56dd0-3118-4ddd-a4c4-488d58549fcf,Namespace:kube-system,Attempt:0,} returns sandbox id \"fe51e9d8c27238f37015dd6f19ab9f3b5473f6c3f85492fcfd09aa5bf5ca754c\"" Aug 13 00:51:49.098541 kubelet[1897]: E0813 00:51:49.097709 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:51:49.653972 kubelet[1897]: E0813 00:51:49.653915 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:51:50.264720 kubelet[1897]: E0813 00:51:50.261831 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:51:50.300954 kubelet[1897]: I0813 00:51:50.300828 1897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-c5v2t" podStartSLOduration=2.300802871 podStartE2EDuration="2.300802871s" podCreationTimestamp="2025-08-13 00:51:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:51:49.703810007 +0000 UTC m=+5.587717041" watchObservedRunningTime="2025-08-13 00:51:50.300802871 +0000 UTC m=+6.184709913" Aug 13 00:51:50.658587 kubelet[1897]: E0813 00:51:50.657665 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:51:51.321867 kubelet[1897]: E0813 00:51:51.321433 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:51:51.662530 kubelet[1897]: E0813 00:51:51.661643 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:51:52.091731 kubelet[1897]: E0813 00:51:52.091669 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:51:52.664989 kubelet[1897]: E0813 00:51:52.664940 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:51:53.665742 kubelet[1897]: E0813 00:51:53.665544 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:51:53.778900 update_engine[1176]: I0813 00:51:53.777776 1176 update_attempter.cc:509] Updating boot flags... Aug 13 00:51:55.095793 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1156223341.mount: Deactivated successfully. 
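
The kubelet dns.go:153 error that repeats throughout this log is a configuration symptom rather than a pod failure: kubelet caps the nameserver list it hands to pods at three entries, and the node's resolv.conf evidently holds more than that, so the surplus is dropped (the applied line it keeps, 67.207.67.2 67.207.67.3 67.207.67.2, even carries a duplicate of the first resolver). A minimal sketch of the same cap, assuming the conventional /etc/resolv.conf path (kubelet's default --resolv-conf) since the actual file is not shown in this log:

# Minimal sketch of the cap behind kubelet's "Nameserver limits exceeded"
# errors above. /etc/resolv.conf is assumed; MAX_NAMESERVERS mirrors the
# three-entry limit kubelet applies.
MAX_NAMESERVERS = 3

def applied_nameservers(resolv_conf="/etc/resolv.conf"):
    nameservers = []
    with open(resolv_conf) as f:
        for line in f:
            fields = line.split()
            if len(fields) >= 2 and fields[0] == "nameserver":
                nameservers.append(fields[1])
    applied = nameservers[:MAX_NAMESERVERS]
    if len(nameservers) > MAX_NAMESERVERS:
        # kubelet logs essentially this message, listing only what it kept:
        print("Nameserver limits were exceeded, some nameservers have been "
              "omitted, the applied nameserver line is: " + " ".join(applied))
    return applied
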
Aug 13 00:51:59.096395 env[1192]: time="2025-08-13T00:51:59.096306815Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:51:59.098942 env[1192]: time="2025-08-13T00:51:59.098885802Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:51:59.101044 env[1192]: time="2025-08-13T00:51:59.100987373Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:51:59.102059 env[1192]: time="2025-08-13T00:51:59.102009590Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Aug 13 00:51:59.106062 env[1192]: time="2025-08-13T00:51:59.105027515Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Aug 13 00:51:59.114466 env[1192]: time="2025-08-13T00:51:59.114415468Z" level=info msg="CreateContainer within sandbox \"9130c6dda3d861fd7fc9cb5da8f0be14a915a7f7e84fd21461084319a855d04b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 13 00:51:59.136052 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2047819079.mount: Deactivated successfully. Aug 13 00:51:59.150057 env[1192]: time="2025-08-13T00:51:59.149942791Z" level=info msg="CreateContainer within sandbox \"9130c6dda3d861fd7fc9cb5da8f0be14a915a7f7e84fd21461084319a855d04b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b13e687e0d4b7377ee576c4f74ffc93d55515ef987ab5bd68dceb443010d67f9\"" Aug 13 00:51:59.154330 env[1192]: time="2025-08-13T00:51:59.151369469Z" level=info msg="StartContainer for \"b13e687e0d4b7377ee576c4f74ffc93d55515ef987ab5bd68dceb443010d67f9\"" Aug 13 00:51:59.188533 systemd[1]: Started cri-containerd-b13e687e0d4b7377ee576c4f74ffc93d55515ef987ab5bd68dceb443010d67f9.scope. Aug 13 00:51:59.268689 env[1192]: time="2025-08-13T00:51:59.268428341Z" level=info msg="StartContainer for \"b13e687e0d4b7377ee576c4f74ffc93d55515ef987ab5bd68dceb443010d67f9\" returns successfully" Aug 13 00:51:59.285754 systemd[1]: cri-containerd-b13e687e0d4b7377ee576c4f74ffc93d55515ef987ab5bd68dceb443010d67f9.scope: Deactivated successfully. 
Aug 13 00:51:59.320950 env[1192]: time="2025-08-13T00:51:59.320875150Z" level=info msg="shim disconnected" id=b13e687e0d4b7377ee576c4f74ffc93d55515ef987ab5bd68dceb443010d67f9 Aug 13 00:51:59.320950 env[1192]: time="2025-08-13T00:51:59.320952249Z" level=warning msg="cleaning up after shim disconnected" id=b13e687e0d4b7377ee576c4f74ffc93d55515ef987ab5bd68dceb443010d67f9 namespace=k8s.io Aug 13 00:51:59.321387 env[1192]: time="2025-08-13T00:51:59.320968998Z" level=info msg="cleaning up dead shim" Aug 13 00:51:59.334982 env[1192]: time="2025-08-13T00:51:59.334917503Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:51:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2329 runtime=io.containerd.runc.v2\n" Aug 13 00:51:59.697598 kubelet[1897]: E0813 00:51:59.696672 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:51:59.711606 env[1192]: time="2025-08-13T00:51:59.711530469Z" level=info msg="CreateContainer within sandbox \"9130c6dda3d861fd7fc9cb5da8f0be14a915a7f7e84fd21461084319a855d04b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 13 00:51:59.739134 env[1192]: time="2025-08-13T00:51:59.736975345Z" level=info msg="CreateContainer within sandbox \"9130c6dda3d861fd7fc9cb5da8f0be14a915a7f7e84fd21461084319a855d04b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"07b0729277eeec91ce2368566da6b95eb76381a3c11376656556fe576d35c53a\"" Aug 13 00:51:59.741672 env[1192]: time="2025-08-13T00:51:59.740780764Z" level=info msg="StartContainer for \"07b0729277eeec91ce2368566da6b95eb76381a3c11376656556fe576d35c53a\"" Aug 13 00:51:59.772393 systemd[1]: Started cri-containerd-07b0729277eeec91ce2368566da6b95eb76381a3c11376656556fe576d35c53a.scope. Aug 13 00:51:59.834006 env[1192]: time="2025-08-13T00:51:59.833919784Z" level=info msg="StartContainer for \"07b0729277eeec91ce2368566da6b95eb76381a3c11376656556fe576d35c53a\" returns successfully" Aug 13 00:51:59.852285 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 00:51:59.853912 systemd[1]: Stopped systemd-sysctl.service. Aug 13 00:51:59.854644 systemd[1]: Stopping systemd-sysctl.service... Aug 13 00:51:59.860937 systemd[1]: Starting systemd-sysctl.service... Aug 13 00:51:59.861685 systemd[1]: cri-containerd-07b0729277eeec91ce2368566da6b95eb76381a3c11376656556fe576d35c53a.scope: Deactivated successfully. Aug 13 00:51:59.879729 systemd[1]: Finished systemd-sysctl.service. Aug 13 00:51:59.906992 env[1192]: time="2025-08-13T00:51:59.906909678Z" level=info msg="shim disconnected" id=07b0729277eeec91ce2368566da6b95eb76381a3c11376656556fe576d35c53a Aug 13 00:51:59.906992 env[1192]: time="2025-08-13T00:51:59.906988926Z" level=warning msg="cleaning up after shim disconnected" id=07b0729277eeec91ce2368566da6b95eb76381a3c11376656556fe576d35c53a namespace=k8s.io Aug 13 00:51:59.906992 env[1192]: time="2025-08-13T00:51:59.907002451Z" level=info msg="cleaning up dead shim" Aug 13 00:51:59.920854 env[1192]: time="2025-08-13T00:51:59.920780597Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:51:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2390 runtime=io.containerd.runc.v2\n" Aug 13 00:52:00.131576 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b13e687e0d4b7377ee576c4f74ffc93d55515ef987ab5bd68dceb443010d67f9-rootfs.mount: Deactivated successfully. 
Aug 13 00:52:00.655542 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1313795674.mount: Deactivated successfully. Aug 13 00:52:00.700132 kubelet[1897]: E0813 00:52:00.700089 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:52:00.726655 env[1192]: time="2025-08-13T00:52:00.726551776Z" level=info msg="CreateContainer within sandbox \"9130c6dda3d861fd7fc9cb5da8f0be14a915a7f7e84fd21461084319a855d04b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 13 00:52:00.776619 env[1192]: time="2025-08-13T00:52:00.776549016Z" level=info msg="CreateContainer within sandbox \"9130c6dda3d861fd7fc9cb5da8f0be14a915a7f7e84fd21461084319a855d04b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1ce38a4a8d996d27f0dc1549b1cca4eb4f42f8c4b07274ab0d740a81fa1e52c0\"" Aug 13 00:52:00.779837 env[1192]: time="2025-08-13T00:52:00.779775153Z" level=info msg="StartContainer for \"1ce38a4a8d996d27f0dc1549b1cca4eb4f42f8c4b07274ab0d740a81fa1e52c0\"" Aug 13 00:52:00.827526 systemd[1]: Started cri-containerd-1ce38a4a8d996d27f0dc1549b1cca4eb4f42f8c4b07274ab0d740a81fa1e52c0.scope. Aug 13 00:52:00.888059 env[1192]: time="2025-08-13T00:52:00.887999015Z" level=info msg="StartContainer for \"1ce38a4a8d996d27f0dc1549b1cca4eb4f42f8c4b07274ab0d740a81fa1e52c0\" returns successfully" Aug 13 00:52:00.898880 systemd[1]: cri-containerd-1ce38a4a8d996d27f0dc1549b1cca4eb4f42f8c4b07274ab0d740a81fa1e52c0.scope: Deactivated successfully. Aug 13 00:52:00.948206 env[1192]: time="2025-08-13T00:52:00.948134336Z" level=info msg="shim disconnected" id=1ce38a4a8d996d27f0dc1549b1cca4eb4f42f8c4b07274ab0d740a81fa1e52c0 Aug 13 00:52:00.949851 env[1192]: time="2025-08-13T00:52:00.949699123Z" level=warning msg="cleaning up after shim disconnected" id=1ce38a4a8d996d27f0dc1549b1cca4eb4f42f8c4b07274ab0d740a81fa1e52c0 namespace=k8s.io Aug 13 00:52:00.950047 env[1192]: time="2025-08-13T00:52:00.950020606Z" level=info msg="cleaning up dead shim" Aug 13 00:52:00.966892 env[1192]: time="2025-08-13T00:52:00.966835071Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:52:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2451 runtime=io.containerd.runc.v2\n" Aug 13 00:52:01.711706 kubelet[1897]: E0813 00:52:01.707804 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:52:01.727710 env[1192]: time="2025-08-13T00:52:01.727619985Z" level=info msg="CreateContainer within sandbox \"9130c6dda3d861fd7fc9cb5da8f0be14a915a7f7e84fd21461084319a855d04b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 13 00:52:01.759900 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2113131820.mount: Deactivated successfully. 
Aug 13 00:52:01.788181 env[1192]: time="2025-08-13T00:52:01.788109961Z" level=info msg="CreateContainer within sandbox \"9130c6dda3d861fd7fc9cb5da8f0be14a915a7f7e84fd21461084319a855d04b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"526403d00cacb829cb44d1ff0ed7f9bdfd405d9152544d23e771cd33594d6ccb\"" Aug 13 00:52:01.791685 env[1192]: time="2025-08-13T00:52:01.789749227Z" level=info msg="StartContainer for \"526403d00cacb829cb44d1ff0ed7f9bdfd405d9152544d23e771cd33594d6ccb\"" Aug 13 00:52:01.840795 systemd[1]: Started cri-containerd-526403d00cacb829cb44d1ff0ed7f9bdfd405d9152544d23e771cd33594d6ccb.scope. Aug 13 00:52:01.925622 env[1192]: time="2025-08-13T00:52:01.925543452Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:52:01.931485 env[1192]: time="2025-08-13T00:52:01.931342082Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:52:01.933521 systemd[1]: cri-containerd-526403d00cacb829cb44d1ff0ed7f9bdfd405d9152544d23e771cd33594d6ccb.scope: Deactivated successfully. Aug 13 00:52:01.936791 env[1192]: time="2025-08-13T00:52:01.936658784Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:52:01.937096 env[1192]: time="2025-08-13T00:52:01.936475623Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc7209e85_586c_43c8_99f2_e24879211658.slice/cri-containerd-526403d00cacb829cb44d1ff0ed7f9bdfd405d9152544d23e771cd33594d6ccb.scope/memory.events\": no such file or directory" Aug 13 00:52:01.938895 env[1192]: time="2025-08-13T00:52:01.938822459Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Aug 13 00:52:01.941711 env[1192]: time="2025-08-13T00:52:01.941649917Z" level=info msg="StartContainer for \"526403d00cacb829cb44d1ff0ed7f9bdfd405d9152544d23e771cd33594d6ccb\" returns successfully" Aug 13 00:52:01.950571 env[1192]: time="2025-08-13T00:52:01.950487259Z" level=info msg="CreateContainer within sandbox \"fe51e9d8c27238f37015dd6f19ab9f3b5473f6c3f85492fcfd09aa5bf5ca754c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Aug 13 00:52:01.986091 env[1192]: time="2025-08-13T00:52:01.985902870Z" level=info msg="CreateContainer within sandbox \"fe51e9d8c27238f37015dd6f19ab9f3b5473f6c3f85492fcfd09aa5bf5ca754c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"147c8eb39316e6b9c3399e152147e90887dca53c4b2de4855ccb7acdab561578\"" Aug 13 00:52:01.988617 env[1192]: time="2025-08-13T00:52:01.988545328Z" level=info msg="StartContainer for \"147c8eb39316e6b9c3399e152147e90887dca53c4b2de4855ccb7acdab561578\"" Aug 13 00:52:02.046249 env[1192]: time="2025-08-13T00:52:02.046187800Z" level=info msg="shim disconnected" 
id=526403d00cacb829cb44d1ff0ed7f9bdfd405d9152544d23e771cd33594d6ccb Aug 13 00:52:02.046885 env[1192]: time="2025-08-13T00:52:02.046840744Z" level=warning msg="cleaning up after shim disconnected" id=526403d00cacb829cb44d1ff0ed7f9bdfd405d9152544d23e771cd33594d6ccb namespace=k8s.io Aug 13 00:52:02.047083 env[1192]: time="2025-08-13T00:52:02.047057167Z" level=info msg="cleaning up dead shim" Aug 13 00:52:02.072828 systemd[1]: Started cri-containerd-147c8eb39316e6b9c3399e152147e90887dca53c4b2de4855ccb7acdab561578.scope. Aug 13 00:52:02.082196 env[1192]: time="2025-08-13T00:52:02.082133016Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:52:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2515 runtime=io.containerd.runc.v2\n" Aug 13 00:52:02.136860 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-526403d00cacb829cb44d1ff0ed7f9bdfd405d9152544d23e771cd33594d6ccb-rootfs.mount: Deactivated successfully. Aug 13 00:52:02.153658 env[1192]: time="2025-08-13T00:52:02.153519401Z" level=info msg="StartContainer for \"147c8eb39316e6b9c3399e152147e90887dca53c4b2de4855ccb7acdab561578\" returns successfully" Aug 13 00:52:02.718440 kubelet[1897]: E0813 00:52:02.718385 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:52:02.728398 env[1192]: time="2025-08-13T00:52:02.728322123Z" level=info msg="CreateContainer within sandbox \"9130c6dda3d861fd7fc9cb5da8f0be14a915a7f7e84fd21461084319a855d04b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 13 00:52:02.729032 kubelet[1897]: E0813 00:52:02.728491 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:52:02.750318 env[1192]: time="2025-08-13T00:52:02.748346405Z" level=info msg="CreateContainer within sandbox \"9130c6dda3d861fd7fc9cb5da8f0be14a915a7f7e84fd21461084319a855d04b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"164f6f08d15b990270940a607ae161857257af89313ee35f332ac16cd0fcb1ea\"" Aug 13 00:52:02.751496 env[1192]: time="2025-08-13T00:52:02.751248180Z" level=info msg="StartContainer for \"164f6f08d15b990270940a607ae161857257af89313ee35f332ac16cd0fcb1ea\"" Aug 13 00:52:02.761503 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1936481847.mount: Deactivated successfully. Aug 13 00:52:02.825982 systemd[1]: Started cri-containerd-164f6f08d15b990270940a607ae161857257af89313ee35f332ac16cd0fcb1ea.scope. Aug 13 00:52:02.998009 env[1192]: time="2025-08-13T00:52:02.997851024Z" level=info msg="StartContainer for \"164f6f08d15b990270940a607ae161857257af89313ee35f332ac16cd0fcb1ea\" returns successfully" Aug 13 00:52:03.134072 systemd[1]: run-containerd-runc-k8s.io-164f6f08d15b990270940a607ae161857257af89313ee35f332ac16cd0fcb1ea-runc.NkPr20.mount: Deactivated successfully. 
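
The stretch of entries above is the cilium-whmd7 pod stepping through its init containers in order (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state), each short-lived container exiting and leaving the usual "shim disconnected" / "cleaning up dead shim" warnings behind, before the long-running cilium-agent container starts; the cilium-operator container comes up in its own sandbox alongside. A small illustrative sketch (not tooling referenced by this log) that recovers that per-sandbox container order from the CreateContainer requests:

import re
from collections import defaultdict

# Matches containerd's CreateContainer requests above, e.g.
#   msg="CreateContainer within sandbox \"9130c6dd...\" for container
#        &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
# (the matching "... returns container id ..." entries omit the word
# "container" and are deliberately not matched, so each request counts once).
CREATE_RE = re.compile(
    r'CreateContainer within sandbox \\?"(?P<sandbox>[0-9a-f]{64})\\?" '
    r'for container &ContainerMetadata\{Name:(?P<name>[^,]+),'
)

def containers_per_sandbox(journal_text):
    """Return {sandbox_id: [container names in creation order]}."""
    order = defaultdict(list)
    for m in CREATE_RE.finditer(journal_text):
        order[m.group("sandbox")].append(m.group("name"))
    return dict(order)

# For sandbox 9130c6dd... the entries above give:
#   ['mount-cgroup', 'apply-sysctl-overwrites', 'mount-bpf-fs',
#    'clean-cilium-state', 'cilium-agent']
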
Aug 13 00:52:03.537133 kubelet[1897]: I0813 00:52:03.535880 1897 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Aug 13 00:52:03.735477 kubelet[1897]: E0813 00:52:03.735428 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:52:03.737541 kubelet[1897]: E0813 00:52:03.737117 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:52:03.790411 kubelet[1897]: I0813 00:52:03.790203 1897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-prwxn" podStartSLOduration=2.9509214740000003 podStartE2EDuration="15.790164931s" podCreationTimestamp="2025-08-13 00:51:48 +0000 UTC" firstStartedPulling="2025-08-13 00:51:49.101327618 +0000 UTC m=+4.985234628" lastFinishedPulling="2025-08-13 00:52:01.940571071 +0000 UTC m=+17.824478085" observedRunningTime="2025-08-13 00:52:03.121673773 +0000 UTC m=+19.005580825" watchObservedRunningTime="2025-08-13 00:52:03.790164931 +0000 UTC m=+19.674071968" Aug 13 00:52:03.815055 systemd[1]: Created slice kubepods-burstable-podc1a4f449_32be_4ebf_9081_c86ab3294d58.slice. Aug 13 00:52:03.829092 systemd[1]: Created slice kubepods-burstable-pod29daabea_6e54_4075_8ca5_9765118deae5.slice. Aug 13 00:52:03.893507 kubelet[1897]: I0813 00:52:03.893336 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wd2x6\" (UniqueName: \"kubernetes.io/projected/29daabea-6e54-4075-8ca5-9765118deae5-kube-api-access-wd2x6\") pod \"coredns-674b8bbfcf-cq5ss\" (UID: \"29daabea-6e54-4075-8ca5-9765118deae5\") " pod="kube-system/coredns-674b8bbfcf-cq5ss" Aug 13 00:52:03.894020 kubelet[1897]: I0813 00:52:03.893982 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c1a4f449-32be-4ebf-9081-c86ab3294d58-config-volume\") pod \"coredns-674b8bbfcf-4m2zc\" (UID: \"c1a4f449-32be-4ebf-9081-c86ab3294d58\") " pod="kube-system/coredns-674b8bbfcf-4m2zc" Aug 13 00:52:03.894154 kubelet[1897]: I0813 00:52:03.894046 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmvjk\" (UniqueName: \"kubernetes.io/projected/c1a4f449-32be-4ebf-9081-c86ab3294d58-kube-api-access-pmvjk\") pod \"coredns-674b8bbfcf-4m2zc\" (UID: \"c1a4f449-32be-4ebf-9081-c86ab3294d58\") " pod="kube-system/coredns-674b8bbfcf-4m2zc" Aug 13 00:52:03.894154 kubelet[1897]: I0813 00:52:03.894078 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/29daabea-6e54-4075-8ca5-9765118deae5-config-volume\") pod \"coredns-674b8bbfcf-cq5ss\" (UID: \"29daabea-6e54-4075-8ca5-9765118deae5\") " pod="kube-system/coredns-674b8bbfcf-cq5ss" Aug 13 00:52:03.894935 kubelet[1897]: I0813 00:52:03.894718 1897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-whmd7" podStartSLOduration=5.482904844 podStartE2EDuration="15.894682434s" podCreationTimestamp="2025-08-13 00:51:48 +0000 UTC" firstStartedPulling="2025-08-13 00:51:48.69191376 +0000 UTC m=+4.575820790" lastFinishedPulling="2025-08-13 00:51:59.103691371 +0000 UTC 
m=+14.987598380" observedRunningTime="2025-08-13 00:52:03.893731132 +0000 UTC m=+19.777638167" watchObservedRunningTime="2025-08-13 00:52:03.894682434 +0000 UTC m=+19.778589466" Aug 13 00:52:04.122149 kubelet[1897]: E0813 00:52:04.122016 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:52:04.124471 env[1192]: time="2025-08-13T00:52:04.123589758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4m2zc,Uid:c1a4f449-32be-4ebf-9081-c86ab3294d58,Namespace:kube-system,Attempt:0,}" Aug 13 00:52:04.147216 kubelet[1897]: E0813 00:52:04.147177 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:52:04.148675 env[1192]: time="2025-08-13T00:52:04.148622720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cq5ss,Uid:29daabea-6e54-4075-8ca5-9765118deae5,Namespace:kube-system,Attempt:0,}" Aug 13 00:52:04.737965 kubelet[1897]: E0813 00:52:04.737925 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:52:05.740791 kubelet[1897]: E0813 00:52:05.740735 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:52:06.443546 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Aug 13 00:52:06.443932 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Aug 13 00:52:06.441723 systemd-networkd[1002]: cilium_host: Link UP Aug 13 00:52:06.441998 systemd-networkd[1002]: cilium_net: Link UP Aug 13 00:52:06.444458 systemd-networkd[1002]: cilium_net: Gained carrier Aug 13 00:52:06.445818 systemd-networkd[1002]: cilium_host: Gained carrier Aug 13 00:52:06.669730 systemd-networkd[1002]: cilium_vxlan: Link UP Aug 13 00:52:06.669743 systemd-networkd[1002]: cilium_vxlan: Gained carrier Aug 13 00:52:06.744216 kubelet[1897]: E0813 00:52:06.744043 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:52:06.835549 systemd-networkd[1002]: cilium_host: Gained IPv6LL Aug 13 00:52:06.907561 systemd-networkd[1002]: cilium_net: Gained IPv6LL Aug 13 00:52:07.192317 kernel: NET: Registered PF_ALG protocol family Aug 13 00:52:08.003507 systemd-networkd[1002]: cilium_vxlan: Gained IPv6LL Aug 13 00:52:08.378503 systemd-networkd[1002]: lxc_health: Link UP Aug 13 00:52:08.413548 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Aug 13 00:52:08.413162 systemd-networkd[1002]: lxc_health: Gained carrier Aug 13 00:52:08.492493 kubelet[1897]: E0813 00:52:08.492235 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:52:08.728713 systemd-networkd[1002]: lxceb1879e09e3d: Link UP Aug 13 00:52:08.739541 kernel: eth0: renamed from tmp20adc Aug 13 00:52:08.743681 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxceb1879e09e3d: link becomes ready Aug 13 00:52:08.743211 systemd-networkd[1002]: 
lxceb1879e09e3d: Gained carrier Aug 13 00:52:08.760351 kubelet[1897]: E0813 00:52:08.759706 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:52:08.772353 systemd-networkd[1002]: lxc011a6ee2693a: Link UP Aug 13 00:52:08.785729 kernel: eth0: renamed from tmp75a59 Aug 13 00:52:08.790834 systemd-networkd[1002]: lxc011a6ee2693a: Gained carrier Aug 13 00:52:08.791364 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc011a6ee2693a: link becomes ready Aug 13 00:52:10.115713 systemd-networkd[1002]: lxceb1879e09e3d: Gained IPv6LL Aug 13 00:52:10.116171 systemd-networkd[1002]: lxc_health: Gained IPv6LL Aug 13 00:52:10.563597 systemd-networkd[1002]: lxc011a6ee2693a: Gained IPv6LL Aug 13 00:52:14.401401 env[1192]: time="2025-08-13T00:52:14.399649513Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:52:14.401401 env[1192]: time="2025-08-13T00:52:14.399751367Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:52:14.401401 env[1192]: time="2025-08-13T00:52:14.399767008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:52:14.401401 env[1192]: time="2025-08-13T00:52:14.400077906Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/20adc89a946824691290b1d0e6b61d386640cf247855d688f918f369e8af1887 pid=3100 runtime=io.containerd.runc.v2 Aug 13 00:52:14.436480 systemd[1]: Started cri-containerd-20adc89a946824691290b1d0e6b61d386640cf247855d688f918f369e8af1887.scope. Aug 13 00:52:14.475935 env[1192]: time="2025-08-13T00:52:14.475813485Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:52:14.476239 env[1192]: time="2025-08-13T00:52:14.476200880Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:52:14.476435 env[1192]: time="2025-08-13T00:52:14.476400240Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:52:14.476867 env[1192]: time="2025-08-13T00:52:14.476796062Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/75a59562bfc9e3ec727760d1f36197526788c7cb1f8fcf7642f0aa86675869e5 pid=3131 runtime=io.containerd.runc.v2 Aug 13 00:52:14.534416 systemd[1]: Started cri-containerd-75a59562bfc9e3ec727760d1f36197526788c7cb1f8fcf7642f0aa86675869e5.scope. Aug 13 00:52:14.540097 systemd[1]: run-containerd-runc-k8s.io-75a59562bfc9e3ec727760d1f36197526788c7cb1f8fcf7642f0aa86675869e5-runc.bfRiRC.mount: Deactivated successfully. 
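
The pod_startup_latency_tracker entries a little earlier (for cilium-whmd7 and cilium-operator-6c4d7847fc-prwxn) can be reproduced from the timestamps they carry: the E2E duration is the watch-observed running time minus the pod creation time, and the SLO duration additionally subtracts the image-pull window. A small worked sketch using the cilium-operator numbers copied from that entry; the few-nanosecond mismatch against the logged SLO value appears to come from kubelet taking the pull window from the monotonic (m=+...) readings rather than the wall-clock ones:

# All values are seconds past 00:51:00 UTC, copied from the cilium-operator
# startup-latency entry above.
created        = 48.000000000   # podCreationTimestamp     2025-08-13 00:51:48
first_pulling  = 49.101327618   # firstStartedPulling      00:51:49.101327618
last_pulling   = 61.940571071   # lastFinishedPulling      00:52:01.940571071
watch_observed = 63.790164931   # watchObservedRunningTime 00:52:03.790164931

e2e = watch_observed - created              # 15.790164931  ("podStartE2EDuration")
slo = e2e - (last_pulling - first_pulling)  # ~2.950921478  (logged podStartSLOduration
                                            #  is 2.9509214740000003)
print(f"E2E={e2e:.9f}s  SLO={slo:.9f}s")
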
Aug 13 00:52:14.590797 env[1192]: time="2025-08-13T00:52:14.590747383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4m2zc,Uid:c1a4f449-32be-4ebf-9081-c86ab3294d58,Namespace:kube-system,Attempt:0,} returns sandbox id \"20adc89a946824691290b1d0e6b61d386640cf247855d688f918f369e8af1887\"" Aug 13 00:52:14.592848 kubelet[1897]: E0813 00:52:14.592183 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:52:14.612788 env[1192]: time="2025-08-13T00:52:14.612700092Z" level=info msg="CreateContainer within sandbox \"20adc89a946824691290b1d0e6b61d386640cf247855d688f918f369e8af1887\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:52:14.650527 env[1192]: time="2025-08-13T00:52:14.650444834Z" level=info msg="CreateContainer within sandbox \"20adc89a946824691290b1d0e6b61d386640cf247855d688f918f369e8af1887\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8a8dd682830f0f5ddd17bb30ef9379028b4c3982f531ee4f53ddcde1594b1f9a\"" Aug 13 00:52:14.652169 env[1192]: time="2025-08-13T00:52:14.652010062Z" level=info msg="StartContainer for \"8a8dd682830f0f5ddd17bb30ef9379028b4c3982f531ee4f53ddcde1594b1f9a\"" Aug 13 00:52:14.701257 systemd[1]: Started cri-containerd-8a8dd682830f0f5ddd17bb30ef9379028b4c3982f531ee4f53ddcde1594b1f9a.scope. Aug 13 00:52:14.770043 env[1192]: time="2025-08-13T00:52:14.769983635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cq5ss,Uid:29daabea-6e54-4075-8ca5-9765118deae5,Namespace:kube-system,Attempt:0,} returns sandbox id \"75a59562bfc9e3ec727760d1f36197526788c7cb1f8fcf7642f0aa86675869e5\"" Aug 13 00:52:14.771590 kubelet[1897]: E0813 00:52:14.771549 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:52:14.788940 env[1192]: time="2025-08-13T00:52:14.788871396Z" level=info msg="CreateContainer within sandbox \"75a59562bfc9e3ec727760d1f36197526788c7cb1f8fcf7642f0aa86675869e5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:52:14.816331 env[1192]: time="2025-08-13T00:52:14.816229166Z" level=info msg="CreateContainer within sandbox \"75a59562bfc9e3ec727760d1f36197526788c7cb1f8fcf7642f0aa86675869e5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"66b30c04c719eea353e87b8185c0db240d355035d30c2096c49d30b5dbe03567\"" Aug 13 00:52:14.817595 env[1192]: time="2025-08-13T00:52:14.817378899Z" level=info msg="StartContainer for \"66b30c04c719eea353e87b8185c0db240d355035d30c2096c49d30b5dbe03567\"" Aug 13 00:52:14.850752 env[1192]: time="2025-08-13T00:52:14.850694515Z" level=info msg="StartContainer for \"8a8dd682830f0f5ddd17bb30ef9379028b4c3982f531ee4f53ddcde1594b1f9a\" returns successfully" Aug 13 00:52:14.893706 systemd[1]: Started cri-containerd-66b30c04c719eea353e87b8185c0db240d355035d30c2096c49d30b5dbe03567.scope. 
Aug 13 00:52:14.948518 env[1192]: time="2025-08-13T00:52:14.948425557Z" level=info msg="StartContainer for \"66b30c04c719eea353e87b8185c0db240d355035d30c2096c49d30b5dbe03567\" returns successfully" Aug 13 00:52:15.807115 kubelet[1897]: E0813 00:52:15.807052 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:52:15.812555 kubelet[1897]: E0813 00:52:15.812476 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:52:15.844951 kubelet[1897]: I0813 00:52:15.844863 1897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-cq5ss" podStartSLOduration=27.844838416 podStartE2EDuration="27.844838416s" podCreationTimestamp="2025-08-13 00:51:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:52:15.828313001 +0000 UTC m=+31.712220032" watchObservedRunningTime="2025-08-13 00:52:15.844838416 +0000 UTC m=+31.728745448" Aug 13 00:52:16.815731 kubelet[1897]: E0813 00:52:16.815688 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:52:16.816666 kubelet[1897]: E0813 00:52:16.816559 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:52:17.818999 kubelet[1897]: E0813 00:52:17.818963 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:52:17.819813 kubelet[1897]: E0813 00:52:17.819782 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:52:23.952620 systemd[1]: Started sshd@5-143.198.60.143:22-139.178.68.195:33526.service. Aug 13 00:52:24.017633 sshd[3259]: Accepted publickey for core from 139.178.68.195 port 33526 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78 Aug 13 00:52:24.021922 sshd[3259]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:52:24.032060 systemd[1]: Started session-6.scope. Aug 13 00:52:24.032993 systemd-logind[1175]: New session 6 of user core. Aug 13 00:52:24.372769 sshd[3259]: pam_unix(sshd:session): session closed for user core Aug 13 00:52:24.383399 systemd[1]: sshd@5-143.198.60.143:22-139.178.68.195:33526.service: Deactivated successfully. Aug 13 00:52:24.383492 systemd-logind[1175]: Session 6 logged out. Waiting for processes to exit. Aug 13 00:52:24.385541 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 00:52:24.388147 systemd-logind[1175]: Removed session 6. Aug 13 00:52:29.382858 systemd[1]: Started sshd@6-143.198.60.143:22-139.178.68.195:33538.service. 
Aug 13 00:52:29.441583 sshd[3273]: Accepted publickey for core from 139.178.68.195 port 33538 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78 Aug 13 00:52:29.443511 sshd[3273]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:52:29.451448 systemd-logind[1175]: New session 7 of user core. Aug 13 00:52:29.452352 systemd[1]: Started session-7.scope. Aug 13 00:52:29.622121 sshd[3273]: pam_unix(sshd:session): session closed for user core Aug 13 00:52:29.627670 systemd[1]: sshd@6-143.198.60.143:22-139.178.68.195:33538.service: Deactivated successfully. Aug 13 00:52:29.629044 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 00:52:29.630709 systemd-logind[1175]: Session 7 logged out. Waiting for processes to exit. Aug 13 00:52:29.633370 systemd-logind[1175]: Removed session 7. Aug 13 00:52:34.645865 systemd[1]: Started sshd@7-143.198.60.143:22-139.178.68.195:55158.service. Aug 13 00:52:34.709632 sshd[3285]: Accepted publickey for core from 139.178.68.195 port 55158 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78 Aug 13 00:52:34.712425 sshd[3285]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:52:34.722752 systemd-logind[1175]: New session 8 of user core. Aug 13 00:52:34.723753 systemd[1]: Started session-8.scope. Aug 13 00:52:34.929293 sshd[3285]: pam_unix(sshd:session): session closed for user core Aug 13 00:52:34.934702 systemd-logind[1175]: Session 8 logged out. Waiting for processes to exit. Aug 13 00:52:34.936956 systemd[1]: sshd@7-143.198.60.143:22-139.178.68.195:55158.service: Deactivated successfully. Aug 13 00:52:34.938190 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 00:52:34.941115 systemd-logind[1175]: Removed session 8. Aug 13 00:52:39.942351 systemd[1]: Started sshd@8-143.198.60.143:22-139.178.68.195:33140.service. Aug 13 00:52:40.001952 sshd[3298]: Accepted publickey for core from 139.178.68.195 port 33140 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78 Aug 13 00:52:40.004266 sshd[3298]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:52:40.012442 systemd-logind[1175]: New session 9 of user core. Aug 13 00:52:40.012748 systemd[1]: Started session-9.scope. Aug 13 00:52:40.198099 sshd[3298]: pam_unix(sshd:session): session closed for user core Aug 13 00:52:40.212900 systemd[1]: Started sshd@9-143.198.60.143:22-139.178.68.195:33142.service. Aug 13 00:52:40.220863 systemd[1]: sshd@8-143.198.60.143:22-139.178.68.195:33140.service: Deactivated successfully. Aug 13 00:52:40.222643 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 00:52:40.225890 systemd-logind[1175]: Session 9 logged out. Waiting for processes to exit. Aug 13 00:52:40.228967 systemd-logind[1175]: Removed session 9. Aug 13 00:52:40.282614 sshd[3310]: Accepted publickey for core from 139.178.68.195 port 33142 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78 Aug 13 00:52:40.285264 sshd[3310]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:52:40.293699 systemd[1]: Started session-10.scope. Aug 13 00:52:40.295042 systemd-logind[1175]: New session 10 of user core. Aug 13 00:52:40.614091 sshd[3310]: pam_unix(sshd:session): session closed for user core Aug 13 00:52:40.626079 systemd[1]: Started sshd@10-143.198.60.143:22-139.178.68.195:33158.service. Aug 13 00:52:40.630411 systemd[1]: sshd@9-143.198.60.143:22-139.178.68.195:33142.service: Deactivated successfully. 
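
From roughly 00:52:23 to the end of the capture the journal settles into a steady series of short SSH sessions for user core from 139.178.68.195, each following the same shape: an sshd@N socket service starts, a publickey is accepted, pam_unix opens the session, systemd-logind registers it, and seconds later the session closes and the service is deactivated. A small illustrative sketch (not tooling used by this host) that pairs the pam_unix open/close events per sshd PID to measure session lengths; the year is assumed to be 2025 because the short journal prefix omits it:

import re
from datetime import datetime

# Matches the pam_unix session events above, keyed by the sshd PID, e.g.
#   Aug 13 00:52:24.021922 sshd[3259]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
#   Aug 13 00:52:24.372769 sshd[3259]: pam_unix(sshd:session): session closed for user core
EVENT_RE = re.compile(
    r'(?P<ts>\w{3} \d{1,2} \d{2}:\d{2}:\d{2}\.\d{6}) '
    r'sshd\[(?P<pid>\d+)\]: pam_unix\(sshd:session\): session (?P<event>opened|closed)'
)

def session_durations(journal_text):
    """Yield (sshd_pid, seconds) for each opened/closed pair in the text."""
    opened = {}
    for m in EVENT_RE.finditer(journal_text):
        ts = datetime.strptime("2025 " + m.group("ts"), "%Y %b %d %H:%M:%S.%f")
        pid = m.group("pid")
        if m.group("event") == "opened":
            opened[pid] = ts
        elif pid in opened:
            yield pid, (ts - opened.pop(pid)).total_seconds()

# sshd[3259] above opened at 00:52:24.021922 and closed at 00:52:24.372769,
# so this yields ('3259', 0.350847).
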
Aug 13 00:52:40.633879 systemd[1]: session-10.scope: Deactivated successfully. Aug 13 00:52:40.639035 systemd-logind[1175]: Session 10 logged out. Waiting for processes to exit. Aug 13 00:52:40.642240 systemd-logind[1175]: Removed session 10. Aug 13 00:52:40.698969 sshd[3320]: Accepted publickey for core from 139.178.68.195 port 33158 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78 Aug 13 00:52:40.701637 sshd[3320]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:52:40.709553 systemd-logind[1175]: New session 11 of user core. Aug 13 00:52:40.710547 systemd[1]: Started session-11.scope. Aug 13 00:52:40.912978 sshd[3320]: pam_unix(sshd:session): session closed for user core Aug 13 00:52:40.919206 systemd-logind[1175]: Session 11 logged out. Waiting for processes to exit. Aug 13 00:52:40.920097 systemd[1]: sshd@10-143.198.60.143:22-139.178.68.195:33158.service: Deactivated successfully. Aug 13 00:52:40.921309 systemd[1]: session-11.scope: Deactivated successfully. Aug 13 00:52:40.923063 systemd-logind[1175]: Removed session 11. Aug 13 00:52:45.924644 systemd[1]: Started sshd@11-143.198.60.143:22-139.178.68.195:33162.service. Aug 13 00:52:45.978233 sshd[3336]: Accepted publickey for core from 139.178.68.195 port 33162 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78 Aug 13 00:52:45.981714 sshd[3336]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:52:45.988382 systemd-logind[1175]: New session 12 of user core. Aug 13 00:52:45.989434 systemd[1]: Started session-12.scope. Aug 13 00:52:46.166404 sshd[3336]: pam_unix(sshd:session): session closed for user core Aug 13 00:52:46.171723 systemd[1]: sshd@11-143.198.60.143:22-139.178.68.195:33162.service: Deactivated successfully. Aug 13 00:52:46.173207 systemd[1]: session-12.scope: Deactivated successfully. Aug 13 00:52:46.175158 systemd-logind[1175]: Session 12 logged out. Waiting for processes to exit. Aug 13 00:52:46.177547 systemd-logind[1175]: Removed session 12. Aug 13 00:52:51.176836 systemd[1]: Started sshd@12-143.198.60.143:22-139.178.68.195:50964.service. Aug 13 00:52:51.233717 sshd[3350]: Accepted publickey for core from 139.178.68.195 port 50964 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78 Aug 13 00:52:51.236829 sshd[3350]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:52:51.245583 systemd-logind[1175]: New session 13 of user core. Aug 13 00:52:51.245819 systemd[1]: Started session-13.scope. Aug 13 00:52:51.421640 sshd[3350]: pam_unix(sshd:session): session closed for user core Aug 13 00:52:51.427506 systemd[1]: sshd@12-143.198.60.143:22-139.178.68.195:50964.service: Deactivated successfully. Aug 13 00:52:51.428767 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 00:52:51.430422 systemd-logind[1175]: Session 13 logged out. Waiting for processes to exit. Aug 13 00:52:51.432037 systemd-logind[1175]: Removed session 13. Aug 13 00:52:56.431750 systemd[1]: Started sshd@13-143.198.60.143:22-139.178.68.195:50970.service. Aug 13 00:52:56.484924 sshd[3362]: Accepted publickey for core from 139.178.68.195 port 50970 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78 Aug 13 00:52:56.488519 sshd[3362]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:52:56.500072 systemd-logind[1175]: New session 14 of user core. Aug 13 00:52:56.501862 systemd[1]: Started session-14.scope. 
Aug 13 00:52:56.699659 sshd[3362]: pam_unix(sshd:session): session closed for user core Aug 13 00:52:56.711445 systemd[1]: Started sshd@14-143.198.60.143:22-139.178.68.195:50984.service. Aug 13 00:52:56.712830 systemd[1]: sshd@13-143.198.60.143:22-139.178.68.195:50970.service: Deactivated successfully. Aug 13 00:52:56.715168 systemd[1]: session-14.scope: Deactivated successfully. Aug 13 00:52:56.718484 systemd-logind[1175]: Session 14 logged out. Waiting for processes to exit. Aug 13 00:52:56.721282 systemd-logind[1175]: Removed session 14. Aug 13 00:52:56.782939 sshd[3373]: Accepted publickey for core from 139.178.68.195 port 50984 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78 Aug 13 00:52:56.786603 sshd[3373]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:52:56.796301 systemd[1]: Started session-15.scope. Aug 13 00:52:56.796896 systemd-logind[1175]: New session 15 of user core. Aug 13 00:52:57.307106 sshd[3373]: pam_unix(sshd:session): session closed for user core Aug 13 00:52:57.314965 systemd[1]: sshd@14-143.198.60.143:22-139.178.68.195:50984.service: Deactivated successfully. Aug 13 00:52:57.317630 systemd[1]: session-15.scope: Deactivated successfully. Aug 13 00:52:57.321705 systemd-logind[1175]: Session 15 logged out. Waiting for processes to exit. Aug 13 00:52:57.323916 systemd[1]: Started sshd@15-143.198.60.143:22-139.178.68.195:50990.service. Aug 13 00:52:57.327693 systemd-logind[1175]: Removed session 15. Aug 13 00:52:57.388442 sshd[3384]: Accepted publickey for core from 139.178.68.195 port 50990 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78 Aug 13 00:52:57.391173 sshd[3384]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:52:57.401547 systemd-logind[1175]: New session 16 of user core. Aug 13 00:52:57.401858 systemd[1]: Started session-16.scope. Aug 13 00:52:58.405929 sshd[3384]: pam_unix(sshd:session): session closed for user core Aug 13 00:52:58.419891 systemd[1]: Started sshd@16-143.198.60.143:22-139.178.68.195:51000.service. Aug 13 00:52:58.421055 systemd[1]: sshd@15-143.198.60.143:22-139.178.68.195:50990.service: Deactivated successfully. Aug 13 00:52:58.423066 systemd[1]: session-16.scope: Deactivated successfully. Aug 13 00:52:58.434024 systemd-logind[1175]: Session 16 logged out. Waiting for processes to exit. Aug 13 00:52:58.436491 systemd-logind[1175]: Removed session 16. Aug 13 00:52:58.487656 sshd[3399]: Accepted publickey for core from 139.178.68.195 port 51000 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78 Aug 13 00:52:58.490452 sshd[3399]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:52:58.499218 systemd[1]: Started session-17.scope. Aug 13 00:52:58.500470 systemd-logind[1175]: New session 17 of user core. Aug 13 00:52:59.000235 sshd[3399]: pam_unix(sshd:session): session closed for user core Aug 13 00:52:59.014425 systemd[1]: Started sshd@17-143.198.60.143:22-139.178.68.195:51010.service. Aug 13 00:52:59.015487 systemd[1]: sshd@16-143.198.60.143:22-139.178.68.195:51000.service: Deactivated successfully. Aug 13 00:52:59.019115 systemd[1]: session-17.scope: Deactivated successfully. Aug 13 00:52:59.023763 systemd-logind[1175]: Session 17 logged out. Waiting for processes to exit. Aug 13 00:52:59.026740 systemd-logind[1175]: Removed session 17. 
Aug 13 00:52:59.080122 sshd[3411]: Accepted publickey for core from 139.178.68.195 port 51010 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78 Aug 13 00:52:59.083168 sshd[3411]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:52:59.092244 systemd[1]: Started session-18.scope. Aug 13 00:52:59.093598 systemd-logind[1175]: New session 18 of user core. Aug 13 00:52:59.310201 sshd[3411]: pam_unix(sshd:session): session closed for user core Aug 13 00:52:59.316207 systemd[1]: sshd@17-143.198.60.143:22-139.178.68.195:51010.service: Deactivated successfully. Aug 13 00:52:59.317668 systemd[1]: session-18.scope: Deactivated successfully. Aug 13 00:52:59.319948 systemd-logind[1175]: Session 18 logged out. Waiting for processes to exit. Aug 13 00:52:59.321202 systemd-logind[1175]: Removed session 18. Aug 13 00:53:02.606812 kubelet[1897]: E0813 00:53:02.606755 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:53:04.320015 systemd[1]: Started sshd@18-143.198.60.143:22-139.178.68.195:35512.service. Aug 13 00:53:04.373981 sshd[3423]: Accepted publickey for core from 139.178.68.195 port 35512 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78 Aug 13 00:53:04.376308 sshd[3423]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:53:04.384580 systemd[1]: Started session-19.scope. Aug 13 00:53:04.385564 systemd-logind[1175]: New session 19 of user core. Aug 13 00:53:04.575789 sshd[3423]: pam_unix(sshd:session): session closed for user core Aug 13 00:53:04.581616 systemd[1]: sshd@18-143.198.60.143:22-139.178.68.195:35512.service: Deactivated successfully. Aug 13 00:53:04.582662 systemd[1]: session-19.scope: Deactivated successfully. Aug 13 00:53:04.583935 systemd-logind[1175]: Session 19 logged out. Waiting for processes to exit. Aug 13 00:53:04.586363 systemd-logind[1175]: Removed session 19. Aug 13 00:53:09.586236 systemd[1]: Started sshd@19-143.198.60.143:22-139.178.68.195:35514.service. Aug 13 00:53:09.642935 sshd[3437]: Accepted publickey for core from 139.178.68.195 port 35514 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78 Aug 13 00:53:09.645368 sshd[3437]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:53:09.654563 systemd[1]: Started session-20.scope. Aug 13 00:53:09.656409 systemd-logind[1175]: New session 20 of user core. Aug 13 00:53:09.829325 sshd[3437]: pam_unix(sshd:session): session closed for user core Aug 13 00:53:09.834307 systemd[1]: sshd@19-143.198.60.143:22-139.178.68.195:35514.service: Deactivated successfully. Aug 13 00:53:09.835726 systemd[1]: session-20.scope: Deactivated successfully. Aug 13 00:53:09.837211 systemd-logind[1175]: Session 20 logged out. Waiting for processes to exit. Aug 13 00:53:09.839234 systemd-logind[1175]: Removed session 20. Aug 13 00:53:11.594945 kubelet[1897]: E0813 00:53:11.594890 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:53:14.839422 systemd[1]: Started sshd@20-143.198.60.143:22-139.178.68.195:52552.service. 
Aug 13 00:53:14.890909 sshd[3448]: Accepted publickey for core from 139.178.68.195 port 52552 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78 Aug 13 00:53:14.894554 sshd[3448]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:53:14.904016 systemd[1]: Started session-21.scope. Aug 13 00:53:14.906314 systemd-logind[1175]: New session 21 of user core. Aug 13 00:53:15.091297 sshd[3448]: pam_unix(sshd:session): session closed for user core Aug 13 00:53:15.096170 systemd-logind[1175]: Session 21 logged out. Waiting for processes to exit. Aug 13 00:53:15.097090 systemd[1]: sshd@20-143.198.60.143:22-139.178.68.195:52552.service: Deactivated successfully. Aug 13 00:53:15.098423 systemd[1]: session-21.scope: Deactivated successfully. Aug 13 00:53:15.100325 systemd-logind[1175]: Removed session 21. Aug 13 00:53:16.595401 kubelet[1897]: E0813 00:53:16.595288 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:53:20.103076 systemd[1]: Started sshd@21-143.198.60.143:22-139.178.68.195:60666.service. Aug 13 00:53:20.170623 sshd[3461]: Accepted publickey for core from 139.178.68.195 port 60666 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78 Aug 13 00:53:20.174352 sshd[3461]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:53:20.185111 systemd[1]: Started session-22.scope. Aug 13 00:53:20.186578 systemd-logind[1175]: New session 22 of user core. Aug 13 00:53:20.370964 sshd[3461]: pam_unix(sshd:session): session closed for user core Aug 13 00:53:20.377178 systemd[1]: sshd@21-143.198.60.143:22-139.178.68.195:60666.service: Deactivated successfully. Aug 13 00:53:20.378505 systemd[1]: session-22.scope: Deactivated successfully. Aug 13 00:53:20.379665 systemd-logind[1175]: Session 22 logged out. Waiting for processes to exit. Aug 13 00:53:20.381444 systemd-logind[1175]: Removed session 22. Aug 13 00:53:20.595083 kubelet[1897]: E0813 00:53:20.595026 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:53:21.594820 kubelet[1897]: E0813 00:53:21.594631 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:53:22.595004 kubelet[1897]: E0813 00:53:22.594941 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:53:24.595113 kubelet[1897]: E0813 00:53:24.595051 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:53:24.596392 kubelet[1897]: E0813 00:53:24.596331 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:53:25.381902 systemd[1]: Started sshd@22-143.198.60.143:22-139.178.68.195:60674.service. 
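
The recurring kubelet dns.go:153 warning in this stretch means the droplet's /etc/resolv.conf lists more nameserver entries than kubelet will pass through to pods: only the first three are applied (here 67.207.67.2, 67.207.67.3 and a second 67.207.67.2), and anything beyond that is dropped, which is what "some nameservers have been omitted" reports. A minimal Python sketch of that check follows; it is illustrative only and is not kubelet's implementation.

# Minimal sketch of the condition behind the dns.go warning, not kubelet's
# actual code: kubelet forwards at most three nameservers from the node's
# /etc/resolv.conf to pods and warns about the rest.
MAX_NAMESERVERS = 3

def applied_nameservers(path="/etc/resolv.conf"):
    servers = []
    with open(path) as f:
        for line in f:
            fields = line.split()
            if len(fields) >= 2 and fields[0] == "nameserver":
                servers.append(fields[1])
    applied = servers[:MAX_NAMESERVERS]
    if len(servers) > MAX_NAMESERVERS:
        print("Nameserver limits exceeded, applied nameserver line is:",
              " ".join(applied))
    return applied
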
Aug 13 00:53:25.439104 sshd[3473]: Accepted publickey for core from 139.178.68.195 port 60674 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78 Aug 13 00:53:25.442993 sshd[3473]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:53:25.452924 systemd[1]: Started session-23.scope. Aug 13 00:53:25.454165 systemd-logind[1175]: New session 23 of user core. Aug 13 00:53:25.615785 sshd[3473]: pam_unix(sshd:session): session closed for user core Aug 13 00:53:25.622028 systemd[1]: sshd@22-143.198.60.143:22-139.178.68.195:60674.service: Deactivated successfully. Aug 13 00:53:25.624410 systemd[1]: session-23.scope: Deactivated successfully. Aug 13 00:53:25.626845 systemd-logind[1175]: Session 23 logged out. Waiting for processes to exit. Aug 13 00:53:25.628630 systemd-logind[1175]: Removed session 23. Aug 13 00:53:25.632582 systemd[1]: Started sshd@23-143.198.60.143:22-139.178.68.195:60678.service. Aug 13 00:53:25.689126 sshd[3485]: Accepted publickey for core from 139.178.68.195 port 60678 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78 Aug 13 00:53:25.690989 sshd[3485]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:53:25.701731 systemd[1]: Started session-24.scope. Aug 13 00:53:25.702621 systemd-logind[1175]: New session 24 of user core. Aug 13 00:53:27.418330 kubelet[1897]: I0813 00:53:27.418213 1897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-4m2zc" podStartSLOduration=99.418156794 podStartE2EDuration="1m39.418156794s" podCreationTimestamp="2025-08-13 00:51:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:52:15.868418325 +0000 UTC m=+31.752325360" watchObservedRunningTime="2025-08-13 00:53:27.418156794 +0000 UTC m=+103.302063823" Aug 13 00:53:27.492466 env[1192]: time="2025-08-13T00:53:27.492399422Z" level=info msg="StopContainer for \"147c8eb39316e6b9c3399e152147e90887dca53c4b2de4855ccb7acdab561578\" with timeout 30 (s)" Aug 13 00:53:27.493362 env[1192]: time="2025-08-13T00:53:27.493313264Z" level=info msg="Stop container \"147c8eb39316e6b9c3399e152147e90887dca53c4b2de4855ccb7acdab561578\" with signal terminated" Aug 13 00:53:27.527761 systemd[1]: run-containerd-runc-k8s.io-164f6f08d15b990270940a607ae161857257af89313ee35f332ac16cd0fcb1ea-runc.OM0UL1.mount: Deactivated successfully. Aug 13 00:53:27.613989 env[1192]: time="2025-08-13T00:53:27.613890914Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 00:53:27.638325 env[1192]: time="2025-08-13T00:53:27.638249002Z" level=info msg="StopContainer for \"164f6f08d15b990270940a607ae161857257af89313ee35f332ac16cd0fcb1ea\" with timeout 2 (s)" Aug 13 00:53:27.639146 env[1192]: time="2025-08-13T00:53:27.639099898Z" level=info msg="Stop container \"164f6f08d15b990270940a607ae161857257af89313ee35f332ac16cd0fcb1ea\" with signal terminated" Aug 13 00:53:27.658165 systemd-networkd[1002]: lxc_health: Link DOWN Aug 13 00:53:27.658175 systemd-networkd[1002]: lxc_health: Lost carrier Aug 13 00:53:27.691243 systemd[1]: cri-containerd-147c8eb39316e6b9c3399e152147e90887dca53c4b2de4855ccb7acdab561578.scope: Deactivated successfully. 
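
The StopContainer entries just above show the runtime's two-stage stop: the request names a grace period ("with timeout 30 (s)" for one container, "with timeout 2 (s)" for the other) and the container is first asked to exit "with signal terminated", i.e. SIGTERM, with a hard kill only if the deadline passes. A generic sketch of that pattern for a local process is below; it is illustrative and is not containerd's code.

import os
import signal
import time

# Generic stop-with-grace-period pattern, shown for illustration only; it is
# not containerd's implementation. Deliver SIGTERM ("signal terminated"), wait
# out the grace period ("timeout 30 (s)" / "timeout 2 (s)" above), and only
# then escalate to SIGKILL.
def stop_with_grace(pid, grace_seconds):
    os.kill(pid, signal.SIGTERM)
    deadline = time.monotonic() + grace_seconds
    while time.monotonic() < deadline:
        try:
            os.kill(pid, 0)              # signal 0: existence check only
        except ProcessLookupError:
            return True                  # exited within the grace period
        time.sleep(0.1)
    try:
        os.kill(pid, signal.SIGKILL)     # grace period expired
    except ProcessLookupError:
        return True
    return False
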
Aug 13 00:53:27.715969 systemd[1]: cri-containerd-164f6f08d15b990270940a607ae161857257af89313ee35f332ac16cd0fcb1ea.scope: Deactivated successfully. Aug 13 00:53:27.716527 systemd[1]: cri-containerd-164f6f08d15b990270940a607ae161857257af89313ee35f332ac16cd0fcb1ea.scope: Consumed 10.202s CPU time. Aug 13 00:53:27.754056 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-147c8eb39316e6b9c3399e152147e90887dca53c4b2de4855ccb7acdab561578-rootfs.mount: Deactivated successfully. Aug 13 00:53:27.768602 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-164f6f08d15b990270940a607ae161857257af89313ee35f332ac16cd0fcb1ea-rootfs.mount: Deactivated successfully. Aug 13 00:53:27.773529 env[1192]: time="2025-08-13T00:53:27.773457943Z" level=info msg="shim disconnected" id=147c8eb39316e6b9c3399e152147e90887dca53c4b2de4855ccb7acdab561578 Aug 13 00:53:27.773529 env[1192]: time="2025-08-13T00:53:27.773528159Z" level=warning msg="cleaning up after shim disconnected" id=147c8eb39316e6b9c3399e152147e90887dca53c4b2de4855ccb7acdab561578 namespace=k8s.io Aug 13 00:53:27.773529 env[1192]: time="2025-08-13T00:53:27.773540498Z" level=info msg="cleaning up dead shim" Aug 13 00:53:27.774900 env[1192]: time="2025-08-13T00:53:27.774830901Z" level=info msg="shim disconnected" id=164f6f08d15b990270940a607ae161857257af89313ee35f332ac16cd0fcb1ea Aug 13 00:53:27.774900 env[1192]: time="2025-08-13T00:53:27.774895247Z" level=warning msg="cleaning up after shim disconnected" id=164f6f08d15b990270940a607ae161857257af89313ee35f332ac16cd0fcb1ea namespace=k8s.io Aug 13 00:53:27.774900 env[1192]: time="2025-08-13T00:53:27.774910232Z" level=info msg="cleaning up dead shim" Aug 13 00:53:27.788614 env[1192]: time="2025-08-13T00:53:27.788539652Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:53:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3551 runtime=io.containerd.runc.v2\n" Aug 13 00:53:27.791132 env[1192]: time="2025-08-13T00:53:27.791053479Z" level=info msg="StopContainer for \"147c8eb39316e6b9c3399e152147e90887dca53c4b2de4855ccb7acdab561578\" returns successfully" Aug 13 00:53:27.795788 env[1192]: time="2025-08-13T00:53:27.795704490Z" level=info msg="StopPodSandbox for \"fe51e9d8c27238f37015dd6f19ab9f3b5473f6c3f85492fcfd09aa5bf5ca754c\"" Aug 13 00:53:27.796192 env[1192]: time="2025-08-13T00:53:27.795814915Z" level=info msg="Container to stop \"147c8eb39316e6b9c3399e152147e90887dca53c4b2de4855ccb7acdab561578\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:53:27.797466 env[1192]: time="2025-08-13T00:53:27.797393780Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:53:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3552 runtime=io.containerd.runc.v2\n" Aug 13 00:53:27.800202 env[1192]: time="2025-08-13T00:53:27.800130602Z" level=info msg="StopContainer for \"164f6f08d15b990270940a607ae161857257af89313ee35f332ac16cd0fcb1ea\" returns successfully" Aug 13 00:53:27.801142 env[1192]: time="2025-08-13T00:53:27.801086363Z" level=info msg="StopPodSandbox for \"9130c6dda3d861fd7fc9cb5da8f0be14a915a7f7e84fd21461084319a855d04b\"" Aug 13 00:53:27.801378 env[1192]: time="2025-08-13T00:53:27.801189315Z" level=info msg="Container to stop \"07b0729277eeec91ce2368566da6b95eb76381a3c11376656556fe576d35c53a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:53:27.801378 env[1192]: time="2025-08-13T00:53:27.801215038Z" level=info msg="Container to stop 
\"164f6f08d15b990270940a607ae161857257af89313ee35f332ac16cd0fcb1ea\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:53:27.801378 env[1192]: time="2025-08-13T00:53:27.801251428Z" level=info msg="Container to stop \"b13e687e0d4b7377ee576c4f74ffc93d55515ef987ab5bd68dceb443010d67f9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:53:27.801378 env[1192]: time="2025-08-13T00:53:27.801270436Z" level=info msg="Container to stop \"1ce38a4a8d996d27f0dc1549b1cca4eb4f42f8c4b07274ab0d740a81fa1e52c0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:53:27.801703 env[1192]: time="2025-08-13T00:53:27.801379272Z" level=info msg="Container to stop \"526403d00cacb829cb44d1ff0ed7f9bdfd405d9152544d23e771cd33594d6ccb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:53:27.809751 systemd[1]: cri-containerd-fe51e9d8c27238f37015dd6f19ab9f3b5473f6c3f85492fcfd09aa5bf5ca754c.scope: Deactivated successfully. Aug 13 00:53:27.811901 systemd[1]: cri-containerd-9130c6dda3d861fd7fc9cb5da8f0be14a915a7f7e84fd21461084319a855d04b.scope: Deactivated successfully. Aug 13 00:53:27.858655 env[1192]: time="2025-08-13T00:53:27.858581636Z" level=info msg="shim disconnected" id=fe51e9d8c27238f37015dd6f19ab9f3b5473f6c3f85492fcfd09aa5bf5ca754c Aug 13 00:53:27.860696 env[1192]: time="2025-08-13T00:53:27.860637551Z" level=warning msg="cleaning up after shim disconnected" id=fe51e9d8c27238f37015dd6f19ab9f3b5473f6c3f85492fcfd09aa5bf5ca754c namespace=k8s.io Aug 13 00:53:27.861367 env[1192]: time="2025-08-13T00:53:27.860848750Z" level=info msg="cleaning up dead shim" Aug 13 00:53:27.865448 env[1192]: time="2025-08-13T00:53:27.865266694Z" level=info msg="shim disconnected" id=9130c6dda3d861fd7fc9cb5da8f0be14a915a7f7e84fd21461084319a855d04b Aug 13 00:53:27.865751 env[1192]: time="2025-08-13T00:53:27.865457265Z" level=warning msg="cleaning up after shim disconnected" id=9130c6dda3d861fd7fc9cb5da8f0be14a915a7f7e84fd21461084319a855d04b namespace=k8s.io Aug 13 00:53:27.865751 env[1192]: time="2025-08-13T00:53:27.865473395Z" level=info msg="cleaning up dead shim" Aug 13 00:53:27.885128 env[1192]: time="2025-08-13T00:53:27.885050002Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:53:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3615 runtime=io.containerd.runc.v2\n" Aug 13 00:53:27.885783 env[1192]: time="2025-08-13T00:53:27.885652643Z" level=info msg="TearDown network for sandbox \"fe51e9d8c27238f37015dd6f19ab9f3b5473f6c3f85492fcfd09aa5bf5ca754c\" successfully" Aug 13 00:53:27.885783 env[1192]: time="2025-08-13T00:53:27.885703304Z" level=info msg="StopPodSandbox for \"fe51e9d8c27238f37015dd6f19ab9f3b5473f6c3f85492fcfd09aa5bf5ca754c\" returns successfully" Aug 13 00:53:27.891713 env[1192]: time="2025-08-13T00:53:27.891647885Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:53:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3616 runtime=io.containerd.runc.v2\n" Aug 13 00:53:27.893375 env[1192]: time="2025-08-13T00:53:27.892792901Z" level=info msg="TearDown network for sandbox \"9130c6dda3d861fd7fc9cb5da8f0be14a915a7f7e84fd21461084319a855d04b\" successfully" Aug 13 00:53:27.893529 env[1192]: time="2025-08-13T00:53:27.893379289Z" level=info msg="StopPodSandbox for \"9130c6dda3d861fd7fc9cb5da8f0be14a915a7f7e84fd21461084319a855d04b\" returns successfully" Aug 13 00:53:27.999387 kubelet[1897]: I0813 00:53:27.999171 1897 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c7209e85-586c-43c8-99f2-e24879211658-cilium-run\") pod \"c7209e85-586c-43c8-99f2-e24879211658\" (UID: \"c7209e85-586c-43c8-99f2-e24879211658\") " Aug 13 00:53:27.999722 kubelet[1897]: I0813 00:53:27.999679 1897 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c7209e85-586c-43c8-99f2-e24879211658-cni-path\") pod \"c7209e85-586c-43c8-99f2-e24879211658\" (UID: \"c7209e85-586c-43c8-99f2-e24879211658\") " Aug 13 00:53:27.999865 kubelet[1897]: I0813 00:53:27.999849 1897 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c7209e85-586c-43c8-99f2-e24879211658-bpf-maps\") pod \"c7209e85-586c-43c8-99f2-e24879211658\" (UID: \"c7209e85-586c-43c8-99f2-e24879211658\") " Aug 13 00:53:27.999984 kubelet[1897]: I0813 00:53:27.999966 1897 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b9e56dd0-3118-4ddd-a4c4-488d58549fcf-cilium-config-path\") pod \"b9e56dd0-3118-4ddd-a4c4-488d58549fcf\" (UID: \"b9e56dd0-3118-4ddd-a4c4-488d58549fcf\") " Aug 13 00:53:28.000138 kubelet[1897]: I0813 00:53:28.000115 1897 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c7209e85-586c-43c8-99f2-e24879211658-lib-modules\") pod \"c7209e85-586c-43c8-99f2-e24879211658\" (UID: \"c7209e85-586c-43c8-99f2-e24879211658\") " Aug 13 00:53:28.000352 kubelet[1897]: I0813 00:53:28.000328 1897 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c7209e85-586c-43c8-99f2-e24879211658-host-proc-sys-net\") pod \"c7209e85-586c-43c8-99f2-e24879211658\" (UID: \"c7209e85-586c-43c8-99f2-e24879211658\") " Aug 13 00:53:28.000541 kubelet[1897]: I0813 00:53:28.000508 1897 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c7209e85-586c-43c8-99f2-e24879211658-cilium-config-path\") pod \"c7209e85-586c-43c8-99f2-e24879211658\" (UID: \"c7209e85-586c-43c8-99f2-e24879211658\") " Aug 13 00:53:28.000718 kubelet[1897]: I0813 00:53:28.000695 1897 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c7209e85-586c-43c8-99f2-e24879211658-cilium-cgroup\") pod \"c7209e85-586c-43c8-99f2-e24879211658\" (UID: \"c7209e85-586c-43c8-99f2-e24879211658\") " Aug 13 00:53:28.001231 kubelet[1897]: I0813 00:53:28.001206 1897 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c7209e85-586c-43c8-99f2-e24879211658-etc-cni-netd\") pod \"c7209e85-586c-43c8-99f2-e24879211658\" (UID: \"c7209e85-586c-43c8-99f2-e24879211658\") " Aug 13 00:53:28.001490 kubelet[1897]: I0813 00:53:28.001454 1897 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c7209e85-586c-43c8-99f2-e24879211658-clustermesh-secrets\") pod \"c7209e85-586c-43c8-99f2-e24879211658\" (UID: \"c7209e85-586c-43c8-99f2-e24879211658\") " Aug 13 00:53:28.001798 kubelet[1897]: I0813 00:53:28.001761 1897 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c7209e85-586c-43c8-99f2-e24879211658-xtables-lock\") pod \"c7209e85-586c-43c8-99f2-e24879211658\" (UID: \"c7209e85-586c-43c8-99f2-e24879211658\") " Aug 13 00:53:28.001963 kubelet[1897]: I0813 00:53:28.001943 1897 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c7209e85-586c-43c8-99f2-e24879211658-hostproc\") pod \"c7209e85-586c-43c8-99f2-e24879211658\" (UID: \"c7209e85-586c-43c8-99f2-e24879211658\") " Aug 13 00:53:28.002127 kubelet[1897]: I0813 00:53:28.002106 1897 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c7209e85-586c-43c8-99f2-e24879211658-hubble-tls\") pod \"c7209e85-586c-43c8-99f2-e24879211658\" (UID: \"c7209e85-586c-43c8-99f2-e24879211658\") " Aug 13 00:53:28.002399 kubelet[1897]: I0813 00:53:28.002343 1897 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59xm7\" (UniqueName: \"kubernetes.io/projected/c7209e85-586c-43c8-99f2-e24879211658-kube-api-access-59xm7\") pod \"c7209e85-586c-43c8-99f2-e24879211658\" (UID: \"c7209e85-586c-43c8-99f2-e24879211658\") " Aug 13 00:53:28.002558 kubelet[1897]: I0813 00:53:27.999358 1897 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7209e85-586c-43c8-99f2-e24879211658-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c7209e85-586c-43c8-99f2-e24879211658" (UID: "c7209e85-586c-43c8-99f2-e24879211658"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:53:28.002733 kubelet[1897]: I0813 00:53:28.002709 1897 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c7209e85-586c-43c8-99f2-e24879211658-host-proc-sys-kernel\") pod \"c7209e85-586c-43c8-99f2-e24879211658\" (UID: \"c7209e85-586c-43c8-99f2-e24879211658\") " Aug 13 00:53:28.002898 kubelet[1897]: I0813 00:53:28.002875 1897 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crslq\" (UniqueName: \"kubernetes.io/projected/b9e56dd0-3118-4ddd-a4c4-488d58549fcf-kube-api-access-crslq\") pod \"b9e56dd0-3118-4ddd-a4c4-488d58549fcf\" (UID: \"b9e56dd0-3118-4ddd-a4c4-488d58549fcf\") " Aug 13 00:53:28.010464 kubelet[1897]: I0813 00:53:28.009766 1897 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7209e85-586c-43c8-99f2-e24879211658-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c7209e85-586c-43c8-99f2-e24879211658" (UID: "c7209e85-586c-43c8-99f2-e24879211658"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 00:53:28.010464 kubelet[1897]: I0813 00:53:28.009947 1897 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7209e85-586c-43c8-99f2-e24879211658-cni-path" (OuterVolumeSpecName: "cni-path") pod "c7209e85-586c-43c8-99f2-e24879211658" (UID: "c7209e85-586c-43c8-99f2-e24879211658"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:53:28.010464 kubelet[1897]: I0813 00:53:28.009988 1897 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7209e85-586c-43c8-99f2-e24879211658-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c7209e85-586c-43c8-99f2-e24879211658" (UID: "c7209e85-586c-43c8-99f2-e24879211658"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:53:28.012251 kubelet[1897]: I0813 00:53:28.012058 1897 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7209e85-586c-43c8-99f2-e24879211658-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c7209e85-586c-43c8-99f2-e24879211658" (UID: "c7209e85-586c-43c8-99f2-e24879211658"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:53:28.012589 kubelet[1897]: I0813 00:53:28.012553 1897 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7209e85-586c-43c8-99f2-e24879211658-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c7209e85-586c-43c8-99f2-e24879211658" (UID: "c7209e85-586c-43c8-99f2-e24879211658"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:53:28.013255 kubelet[1897]: I0813 00:53:28.013202 1897 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9e56dd0-3118-4ddd-a4c4-488d58549fcf-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b9e56dd0-3118-4ddd-a4c4-488d58549fcf" (UID: "b9e56dd0-3118-4ddd-a4c4-488d58549fcf"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 00:53:28.013407 kubelet[1897]: I0813 00:53:28.013311 1897 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7209e85-586c-43c8-99f2-e24879211658-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c7209e85-586c-43c8-99f2-e24879211658" (UID: "c7209e85-586c-43c8-99f2-e24879211658"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:53:28.013407 kubelet[1897]: I0813 00:53:28.013342 1897 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7209e85-586c-43c8-99f2-e24879211658-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c7209e85-586c-43c8-99f2-e24879211658" (UID: "c7209e85-586c-43c8-99f2-e24879211658"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:53:28.021359 kubelet[1897]: I0813 00:53:28.021194 1897 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9e56dd0-3118-4ddd-a4c4-488d58549fcf-kube-api-access-crslq" (OuterVolumeSpecName: "kube-api-access-crslq") pod "b9e56dd0-3118-4ddd-a4c4-488d58549fcf" (UID: "b9e56dd0-3118-4ddd-a4c4-488d58549fcf"). InnerVolumeSpecName "kube-api-access-crslq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 00:53:28.021852 kubelet[1897]: I0813 00:53:28.021779 1897 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7209e85-586c-43c8-99f2-e24879211658-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c7209e85-586c-43c8-99f2-e24879211658" (UID: "c7209e85-586c-43c8-99f2-e24879211658"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 00:53:28.026678 kubelet[1897]: I0813 00:53:28.026606 1897 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7209e85-586c-43c8-99f2-e24879211658-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c7209e85-586c-43c8-99f2-e24879211658" (UID: "c7209e85-586c-43c8-99f2-e24879211658"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 00:53:28.026887 kubelet[1897]: I0813 00:53:28.026719 1897 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7209e85-586c-43c8-99f2-e24879211658-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c7209e85-586c-43c8-99f2-e24879211658" (UID: "c7209e85-586c-43c8-99f2-e24879211658"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:53:28.026887 kubelet[1897]: I0813 00:53:28.026758 1897 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7209e85-586c-43c8-99f2-e24879211658-hostproc" (OuterVolumeSpecName: "hostproc") pod "c7209e85-586c-43c8-99f2-e24879211658" (UID: "c7209e85-586c-43c8-99f2-e24879211658"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:53:28.026887 kubelet[1897]: I0813 00:53:28.026795 1897 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7209e85-586c-43c8-99f2-e24879211658-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c7209e85-586c-43c8-99f2-e24879211658" (UID: "c7209e85-586c-43c8-99f2-e24879211658"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:53:28.027674 kubelet[1897]: I0813 00:53:28.027620 1897 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7209e85-586c-43c8-99f2-e24879211658-kube-api-access-59xm7" (OuterVolumeSpecName: "kube-api-access-59xm7") pod "c7209e85-586c-43c8-99f2-e24879211658" (UID: "c7209e85-586c-43c8-99f2-e24879211658"). InnerVolumeSpecName "kube-api-access-59xm7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 00:53:28.040396 kubelet[1897]: I0813 00:53:28.040176 1897 scope.go:117] "RemoveContainer" containerID="164f6f08d15b990270940a607ae161857257af89313ee35f332ac16cd0fcb1ea" Aug 13 00:53:28.042867 systemd[1]: Removed slice kubepods-burstable-podc7209e85_586c_43c8_99f2_e24879211658.slice. Aug 13 00:53:28.043001 systemd[1]: kubepods-burstable-podc7209e85_586c_43c8_99f2_e24879211658.slice: Consumed 10.386s CPU time. Aug 13 00:53:28.061312 env[1192]: time="2025-08-13T00:53:28.061085256Z" level=info msg="RemoveContainer for \"164f6f08d15b990270940a607ae161857257af89313ee35f332ac16cd0fcb1ea\"" Aug 13 00:53:28.072177 systemd[1]: Removed slice kubepods-besteffort-podb9e56dd0_3118_4ddd_a4c4_488d58549fcf.slice. 
Aug 13 00:53:28.077511 env[1192]: time="2025-08-13T00:53:28.077290001Z" level=info msg="RemoveContainer for \"164f6f08d15b990270940a607ae161857257af89313ee35f332ac16cd0fcb1ea\" returns successfully" Aug 13 00:53:28.079372 kubelet[1897]: I0813 00:53:28.079325 1897 scope.go:117] "RemoveContainer" containerID="526403d00cacb829cb44d1ff0ed7f9bdfd405d9152544d23e771cd33594d6ccb" Aug 13 00:53:28.083567 env[1192]: time="2025-08-13T00:53:28.083430064Z" level=info msg="RemoveContainer for \"526403d00cacb829cb44d1ff0ed7f9bdfd405d9152544d23e771cd33594d6ccb\"" Aug 13 00:53:28.090898 env[1192]: time="2025-08-13T00:53:28.090810099Z" level=info msg="RemoveContainer for \"526403d00cacb829cb44d1ff0ed7f9bdfd405d9152544d23e771cd33594d6ccb\" returns successfully" Aug 13 00:53:28.092907 kubelet[1897]: I0813 00:53:28.092857 1897 scope.go:117] "RemoveContainer" containerID="1ce38a4a8d996d27f0dc1549b1cca4eb4f42f8c4b07274ab0d740a81fa1e52c0" Aug 13 00:53:28.096018 env[1192]: time="2025-08-13T00:53:28.095954271Z" level=info msg="RemoveContainer for \"1ce38a4a8d996d27f0dc1549b1cca4eb4f42f8c4b07274ab0d740a81fa1e52c0\"" Aug 13 00:53:28.104312 kubelet[1897]: I0813 00:53:28.104152 1897 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c7209e85-586c-43c8-99f2-e24879211658-hostproc\") on node \"ci-3510.3.8-a-e4f4484119\" DevicePath \"\"" Aug 13 00:53:28.104312 kubelet[1897]: I0813 00:53:28.104217 1897 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c7209e85-586c-43c8-99f2-e24879211658-hubble-tls\") on node \"ci-3510.3.8-a-e4f4484119\" DevicePath \"\"" Aug 13 00:53:28.104312 kubelet[1897]: I0813 00:53:28.104238 1897 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-59xm7\" (UniqueName: \"kubernetes.io/projected/c7209e85-586c-43c8-99f2-e24879211658-kube-api-access-59xm7\") on node \"ci-3510.3.8-a-e4f4484119\" DevicePath \"\"" Aug 13 00:53:28.104312 kubelet[1897]: I0813 00:53:28.104267 1897 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c7209e85-586c-43c8-99f2-e24879211658-host-proc-sys-kernel\") on node \"ci-3510.3.8-a-e4f4484119\" DevicePath \"\"" Aug 13 00:53:28.104312 kubelet[1897]: I0813 00:53:28.104322 1897 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-crslq\" (UniqueName: \"kubernetes.io/projected/b9e56dd0-3118-4ddd-a4c4-488d58549fcf-kube-api-access-crslq\") on node \"ci-3510.3.8-a-e4f4484119\" DevicePath \"\"" Aug 13 00:53:28.104782 kubelet[1897]: I0813 00:53:28.104351 1897 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c7209e85-586c-43c8-99f2-e24879211658-cilium-run\") on node \"ci-3510.3.8-a-e4f4484119\" DevicePath \"\"" Aug 13 00:53:28.104782 kubelet[1897]: I0813 00:53:28.104366 1897 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c7209e85-586c-43c8-99f2-e24879211658-cni-path\") on node \"ci-3510.3.8-a-e4f4484119\" DevicePath \"\"" Aug 13 00:53:28.104782 kubelet[1897]: I0813 00:53:28.104378 1897 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c7209e85-586c-43c8-99f2-e24879211658-bpf-maps\") on node \"ci-3510.3.8-a-e4f4484119\" DevicePath \"\"" Aug 13 00:53:28.104782 kubelet[1897]: I0813 00:53:28.104390 1897 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/b9e56dd0-3118-4ddd-a4c4-488d58549fcf-cilium-config-path\") on node \"ci-3510.3.8-a-e4f4484119\" DevicePath \"\"" Aug 13 00:53:28.104782 kubelet[1897]: I0813 00:53:28.104406 1897 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c7209e85-586c-43c8-99f2-e24879211658-lib-modules\") on node \"ci-3510.3.8-a-e4f4484119\" DevicePath \"\"" Aug 13 00:53:28.104782 kubelet[1897]: I0813 00:53:28.104436 1897 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c7209e85-586c-43c8-99f2-e24879211658-host-proc-sys-net\") on node \"ci-3510.3.8-a-e4f4484119\" DevicePath \"\"" Aug 13 00:53:28.104782 kubelet[1897]: I0813 00:53:28.104449 1897 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c7209e85-586c-43c8-99f2-e24879211658-cilium-config-path\") on node \"ci-3510.3.8-a-e4f4484119\" DevicePath \"\"" Aug 13 00:53:28.104782 kubelet[1897]: I0813 00:53:28.104463 1897 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c7209e85-586c-43c8-99f2-e24879211658-cilium-cgroup\") on node \"ci-3510.3.8-a-e4f4484119\" DevicePath \"\"" Aug 13 00:53:28.105181 kubelet[1897]: I0813 00:53:28.104479 1897 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c7209e85-586c-43c8-99f2-e24879211658-etc-cni-netd\") on node \"ci-3510.3.8-a-e4f4484119\" DevicePath \"\"" Aug 13 00:53:28.105181 kubelet[1897]: I0813 00:53:28.104507 1897 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c7209e85-586c-43c8-99f2-e24879211658-clustermesh-secrets\") on node \"ci-3510.3.8-a-e4f4484119\" DevicePath \"\"" Aug 13 00:53:28.105181 kubelet[1897]: I0813 00:53:28.104522 1897 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c7209e85-586c-43c8-99f2-e24879211658-xtables-lock\") on node \"ci-3510.3.8-a-e4f4484119\" DevicePath \"\"" Aug 13 00:53:28.106033 env[1192]: time="2025-08-13T00:53:28.105950223Z" level=info msg="RemoveContainer for \"1ce38a4a8d996d27f0dc1549b1cca4eb4f42f8c4b07274ab0d740a81fa1e52c0\" returns successfully" Aug 13 00:53:28.106681 kubelet[1897]: I0813 00:53:28.106644 1897 scope.go:117] "RemoveContainer" containerID="07b0729277eeec91ce2368566da6b95eb76381a3c11376656556fe576d35c53a" Aug 13 00:53:28.109891 env[1192]: time="2025-08-13T00:53:28.109811046Z" level=info msg="RemoveContainer for \"07b0729277eeec91ce2368566da6b95eb76381a3c11376656556fe576d35c53a\"" Aug 13 00:53:28.114963 env[1192]: time="2025-08-13T00:53:28.114868762Z" level=info msg="RemoveContainer for \"07b0729277eeec91ce2368566da6b95eb76381a3c11376656556fe576d35c53a\" returns successfully" Aug 13 00:53:28.115782 kubelet[1897]: I0813 00:53:28.115732 1897 scope.go:117] "RemoveContainer" containerID="b13e687e0d4b7377ee576c4f74ffc93d55515ef987ab5bd68dceb443010d67f9" Aug 13 00:53:28.118528 env[1192]: time="2025-08-13T00:53:28.118468862Z" level=info msg="RemoveContainer for \"b13e687e0d4b7377ee576c4f74ffc93d55515ef987ab5bd68dceb443010d67f9\"" Aug 13 00:53:28.123474 env[1192]: time="2025-08-13T00:53:28.123394540Z" level=info msg="RemoveContainer for \"b13e687e0d4b7377ee576c4f74ffc93d55515ef987ab5bd68dceb443010d67f9\" returns successfully" Aug 13 00:53:28.123824 kubelet[1897]: I0813 00:53:28.123782 1897 scope.go:117] "RemoveContainer" 
containerID="164f6f08d15b990270940a607ae161857257af89313ee35f332ac16cd0fcb1ea" Aug 13 00:53:28.124412 env[1192]: time="2025-08-13T00:53:28.124243102Z" level=error msg="ContainerStatus for \"164f6f08d15b990270940a607ae161857257af89313ee35f332ac16cd0fcb1ea\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"164f6f08d15b990270940a607ae161857257af89313ee35f332ac16cd0fcb1ea\": not found" Aug 13 00:53:28.124730 kubelet[1897]: E0813 00:53:28.124688 1897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"164f6f08d15b990270940a607ae161857257af89313ee35f332ac16cd0fcb1ea\": not found" containerID="164f6f08d15b990270940a607ae161857257af89313ee35f332ac16cd0fcb1ea" Aug 13 00:53:28.128931 kubelet[1897]: I0813 00:53:28.127719 1897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"164f6f08d15b990270940a607ae161857257af89313ee35f332ac16cd0fcb1ea"} err="failed to get container status \"164f6f08d15b990270940a607ae161857257af89313ee35f332ac16cd0fcb1ea\": rpc error: code = NotFound desc = an error occurred when try to find container \"164f6f08d15b990270940a607ae161857257af89313ee35f332ac16cd0fcb1ea\": not found" Aug 13 00:53:28.128931 kubelet[1897]: I0813 00:53:28.128939 1897 scope.go:117] "RemoveContainer" containerID="526403d00cacb829cb44d1ff0ed7f9bdfd405d9152544d23e771cd33594d6ccb" Aug 13 00:53:28.130107 env[1192]: time="2025-08-13T00:53:28.129967077Z" level=error msg="ContainerStatus for \"526403d00cacb829cb44d1ff0ed7f9bdfd405d9152544d23e771cd33594d6ccb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"526403d00cacb829cb44d1ff0ed7f9bdfd405d9152544d23e771cd33594d6ccb\": not found" Aug 13 00:53:28.130727 kubelet[1897]: E0813 00:53:28.130674 1897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"526403d00cacb829cb44d1ff0ed7f9bdfd405d9152544d23e771cd33594d6ccb\": not found" containerID="526403d00cacb829cb44d1ff0ed7f9bdfd405d9152544d23e771cd33594d6ccb" Aug 13 00:53:28.130962 kubelet[1897]: I0813 00:53:28.130742 1897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"526403d00cacb829cb44d1ff0ed7f9bdfd405d9152544d23e771cd33594d6ccb"} err="failed to get container status \"526403d00cacb829cb44d1ff0ed7f9bdfd405d9152544d23e771cd33594d6ccb\": rpc error: code = NotFound desc = an error occurred when try to find container \"526403d00cacb829cb44d1ff0ed7f9bdfd405d9152544d23e771cd33594d6ccb\": not found" Aug 13 00:53:28.130962 kubelet[1897]: I0813 00:53:28.130782 1897 scope.go:117] "RemoveContainer" containerID="1ce38a4a8d996d27f0dc1549b1cca4eb4f42f8c4b07274ab0d740a81fa1e52c0" Aug 13 00:53:28.131260 env[1192]: time="2025-08-13T00:53:28.131160898Z" level=error msg="ContainerStatus for \"1ce38a4a8d996d27f0dc1549b1cca4eb4f42f8c4b07274ab0d740a81fa1e52c0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1ce38a4a8d996d27f0dc1549b1cca4eb4f42f8c4b07274ab0d740a81fa1e52c0\": not found" Aug 13 00:53:28.132759 kubelet[1897]: E0813 00:53:28.132495 1897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1ce38a4a8d996d27f0dc1549b1cca4eb4f42f8c4b07274ab0d740a81fa1e52c0\": not found" 
containerID="1ce38a4a8d996d27f0dc1549b1cca4eb4f42f8c4b07274ab0d740a81fa1e52c0" Aug 13 00:53:28.132759 kubelet[1897]: I0813 00:53:28.132546 1897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1ce38a4a8d996d27f0dc1549b1cca4eb4f42f8c4b07274ab0d740a81fa1e52c0"} err="failed to get container status \"1ce38a4a8d996d27f0dc1549b1cca4eb4f42f8c4b07274ab0d740a81fa1e52c0\": rpc error: code = NotFound desc = an error occurred when try to find container \"1ce38a4a8d996d27f0dc1549b1cca4eb4f42f8c4b07274ab0d740a81fa1e52c0\": not found" Aug 13 00:53:28.132759 kubelet[1897]: I0813 00:53:28.132582 1897 scope.go:117] "RemoveContainer" containerID="07b0729277eeec91ce2368566da6b95eb76381a3c11376656556fe576d35c53a" Aug 13 00:53:28.133550 env[1192]: time="2025-08-13T00:53:28.133454689Z" level=error msg="ContainerStatus for \"07b0729277eeec91ce2368566da6b95eb76381a3c11376656556fe576d35c53a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"07b0729277eeec91ce2368566da6b95eb76381a3c11376656556fe576d35c53a\": not found" Aug 13 00:53:28.134434 kubelet[1897]: E0813 00:53:28.134168 1897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"07b0729277eeec91ce2368566da6b95eb76381a3c11376656556fe576d35c53a\": not found" containerID="07b0729277eeec91ce2368566da6b95eb76381a3c11376656556fe576d35c53a" Aug 13 00:53:28.134434 kubelet[1897]: I0813 00:53:28.134224 1897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"07b0729277eeec91ce2368566da6b95eb76381a3c11376656556fe576d35c53a"} err="failed to get container status \"07b0729277eeec91ce2368566da6b95eb76381a3c11376656556fe576d35c53a\": rpc error: code = NotFound desc = an error occurred when try to find container \"07b0729277eeec91ce2368566da6b95eb76381a3c11376656556fe576d35c53a\": not found" Aug 13 00:53:28.134434 kubelet[1897]: I0813 00:53:28.134258 1897 scope.go:117] "RemoveContainer" containerID="b13e687e0d4b7377ee576c4f74ffc93d55515ef987ab5bd68dceb443010d67f9" Aug 13 00:53:28.135246 env[1192]: time="2025-08-13T00:53:28.135125393Z" level=error msg="ContainerStatus for \"b13e687e0d4b7377ee576c4f74ffc93d55515ef987ab5bd68dceb443010d67f9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b13e687e0d4b7377ee576c4f74ffc93d55515ef987ab5bd68dceb443010d67f9\": not found" Aug 13 00:53:28.135964 kubelet[1897]: E0813 00:53:28.135836 1897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b13e687e0d4b7377ee576c4f74ffc93d55515ef987ab5bd68dceb443010d67f9\": not found" containerID="b13e687e0d4b7377ee576c4f74ffc93d55515ef987ab5bd68dceb443010d67f9" Aug 13 00:53:28.136226 kubelet[1897]: I0813 00:53:28.135913 1897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b13e687e0d4b7377ee576c4f74ffc93d55515ef987ab5bd68dceb443010d67f9"} err="failed to get container status \"b13e687e0d4b7377ee576c4f74ffc93d55515ef987ab5bd68dceb443010d67f9\": rpc error: code = NotFound desc = an error occurred when try to find container \"b13e687e0d4b7377ee576c4f74ffc93d55515ef987ab5bd68dceb443010d67f9\": not found" Aug 13 00:53:28.136226 kubelet[1897]: I0813 00:53:28.136170 1897 scope.go:117] "RemoveContainer" containerID="147c8eb39316e6b9c3399e152147e90887dca53c4b2de4855ccb7acdab561578" Aug 13 
00:53:28.140772 env[1192]: time="2025-08-13T00:53:28.140667952Z" level=info msg="RemoveContainer for \"147c8eb39316e6b9c3399e152147e90887dca53c4b2de4855ccb7acdab561578\"" Aug 13 00:53:28.148597 env[1192]: time="2025-08-13T00:53:28.148516228Z" level=info msg="RemoveContainer for \"147c8eb39316e6b9c3399e152147e90887dca53c4b2de4855ccb7acdab561578\" returns successfully" Aug 13 00:53:28.149069 kubelet[1897]: I0813 00:53:28.149032 1897 scope.go:117] "RemoveContainer" containerID="147c8eb39316e6b9c3399e152147e90887dca53c4b2de4855ccb7acdab561578" Aug 13 00:53:28.149700 env[1192]: time="2025-08-13T00:53:28.149516613Z" level=error msg="ContainerStatus for \"147c8eb39316e6b9c3399e152147e90887dca53c4b2de4855ccb7acdab561578\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"147c8eb39316e6b9c3399e152147e90887dca53c4b2de4855ccb7acdab561578\": not found" Aug 13 00:53:28.150050 kubelet[1897]: E0813 00:53:28.149937 1897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"147c8eb39316e6b9c3399e152147e90887dca53c4b2de4855ccb7acdab561578\": not found" containerID="147c8eb39316e6b9c3399e152147e90887dca53c4b2de4855ccb7acdab561578" Aug 13 00:53:28.150050 kubelet[1897]: I0813 00:53:28.149974 1897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"147c8eb39316e6b9c3399e152147e90887dca53c4b2de4855ccb7acdab561578"} err="failed to get container status \"147c8eb39316e6b9c3399e152147e90887dca53c4b2de4855ccb7acdab561578\": rpc error: code = NotFound desc = an error occurred when try to find container \"147c8eb39316e6b9c3399e152147e90887dca53c4b2de4855ccb7acdab561578\": not found" Aug 13 00:53:28.507434 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fe51e9d8c27238f37015dd6f19ab9f3b5473f6c3f85492fcfd09aa5bf5ca754c-rootfs.mount: Deactivated successfully. Aug 13 00:53:28.508019 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fe51e9d8c27238f37015dd6f19ab9f3b5473f6c3f85492fcfd09aa5bf5ca754c-shm.mount: Deactivated successfully. Aug 13 00:53:28.508451 systemd[1]: var-lib-kubelet-pods-b9e56dd0\x2d3118\x2d4ddd\x2da4c4\x2d488d58549fcf-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcrslq.mount: Deactivated successfully. Aug 13 00:53:28.508761 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9130c6dda3d861fd7fc9cb5da8f0be14a915a7f7e84fd21461084319a855d04b-rootfs.mount: Deactivated successfully. Aug 13 00:53:28.509018 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9130c6dda3d861fd7fc9cb5da8f0be14a915a7f7e84fd21461084319a855d04b-shm.mount: Deactivated successfully. Aug 13 00:53:28.509314 systemd[1]: var-lib-kubelet-pods-c7209e85\x2d586c\x2d43c8\x2d99f2\x2de24879211658-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d59xm7.mount: Deactivated successfully. Aug 13 00:53:28.509622 systemd[1]: var-lib-kubelet-pods-c7209e85\x2d586c\x2d43c8\x2d99f2\x2de24879211658-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Aug 13 00:53:28.509898 systemd[1]: var-lib-kubelet-pods-c7209e85\x2d586c\x2d43c8\x2d99f2\x2de24879211658-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
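
The var-lib-kubelet-pods-... .mount units being deactivated above are the kubelet volume paths run through systemd's unit-name escaping for paths: slashes become dashes, and characters outside systemd's allowed set, including the literal dashes in the pod UID and the "~" in kubernetes.io~projected, are hex-escaped, which is where the \x2d and \x7e sequences come from. A simplified Python sketch of that mapping (roughly what systemd-escape --path does; it ignores edge cases such as a leading dot):

# Simplified sketch of systemd's path escaping (roughly what
# `systemd-escape --path` produces); it skips edge cases such as a leading dot
# or an empty path. "/" becomes "-", and any character outside the allowed set
# is hex-escaped, so the literal dashes in the pod UID become \x2d and the "~"
# in kubernetes.io~projected becomes \x7e.
ALLOWED = set("abcdefghijklmnopqrstuvwxyz"
              "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
              "0123456789:_.")

def escape_path(path):
    out = []
    for ch in path.strip("/"):
        if ch == "/":
            out.append("-")
        elif ch in ALLOWED:
            out.append(ch)
        else:
            out.extend("\\x{:02x}".format(b) for b in ch.encode())
    return "".join(out)

print(escape_path("/var/lib/kubelet/pods/c7209e85-586c-43c8-99f2-e24879211658"
                  "/volumes/kubernetes.io~projected/hubble-tls"))
# var-lib-kubelet-pods-c7209e85\x2d586c\x2d43c8\x2d99f2\x2de24879211658-volumes-kubernetes.io\x7eprojected-hubble\x2dtls
# (the .mount unit logged above, minus the ".mount" suffix)
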
Aug 13 00:53:28.597722 kubelet[1897]: I0813 00:53:28.597549 1897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9e56dd0-3118-4ddd-a4c4-488d58549fcf" path="/var/lib/kubelet/pods/b9e56dd0-3118-4ddd-a4c4-488d58549fcf/volumes" Aug 13 00:53:28.599167 kubelet[1897]: I0813 00:53:28.599079 1897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7209e85-586c-43c8-99f2-e24879211658" path="/var/lib/kubelet/pods/c7209e85-586c-43c8-99f2-e24879211658/volumes" Aug 13 00:53:29.375211 sshd[3485]: pam_unix(sshd:session): session closed for user core Aug 13 00:53:29.374473 systemd[1]: Started sshd@24-143.198.60.143:22-139.178.68.195:60686.service. Aug 13 00:53:29.381428 systemd-logind[1175]: Session 24 logged out. Waiting for processes to exit. Aug 13 00:53:29.381872 systemd[1]: sshd@23-143.198.60.143:22-139.178.68.195:60678.service: Deactivated successfully. Aug 13 00:53:29.382932 systemd[1]: session-24.scope: Deactivated successfully. Aug 13 00:53:29.384091 systemd-logind[1175]: Removed session 24. Aug 13 00:53:29.441389 sshd[3645]: Accepted publickey for core from 139.178.68.195 port 60686 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78 Aug 13 00:53:29.443876 sshd[3645]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:53:29.454501 systemd-logind[1175]: New session 25 of user core. Aug 13 00:53:29.455533 systemd[1]: Started session-25.scope. Aug 13 00:53:29.613547 kubelet[1897]: E0813 00:53:29.613472 1897 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Aug 13 00:53:30.265224 sshd[3645]: pam_unix(sshd:session): session closed for user core Aug 13 00:53:30.274418 systemd[1]: Started sshd@25-143.198.60.143:22-139.178.68.195:60080.service. Aug 13 00:53:30.281720 systemd[1]: sshd@24-143.198.60.143:22-139.178.68.195:60686.service: Deactivated successfully. Aug 13 00:53:30.285874 systemd[1]: session-25.scope: Deactivated successfully. Aug 13 00:53:30.287028 systemd-logind[1175]: Session 25 logged out. Waiting for processes to exit. Aug 13 00:53:30.289359 systemd-logind[1175]: Removed session 25. Aug 13 00:53:30.341216 sshd[3655]: Accepted publickey for core from 139.178.68.195 port 60080 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78 Aug 13 00:53:30.345648 sshd[3655]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:53:30.364438 systemd-logind[1175]: New session 26 of user core. Aug 13 00:53:30.366948 systemd[1]: Started session-26.scope. Aug 13 00:53:30.372592 systemd[1]: Created slice kubepods-burstable-pod74a6494a_8623_4c74_9278_4787bbdf7313.slice. 
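
The "Container runtime network not ready ... cni plugin not initialized" error above is the direct consequence of the earlier removal of /etc/cni/net.d/05-cilium.conf: with no network config left in the CNI conf directory, the runtime reports NetworkPluginNotReady until the replacement Cilium pod (cilium-xpzb8, created next) writes a new one. A small illustrative check for that condition, not containerd's code:

import glob
import os

# Illustrative check, not containerd's code: the NetworkReady=false /
# "cni plugin not initialized" state corresponds to the CNI conf directory
# holding no usable network config after 05-cilium.conf was removed; it clears
# once the new Cilium pod writes a config file back into the directory.
CNI_CONF_DIR = "/etc/cni/net.d"

def cni_configs(conf_dir=CNI_CONF_DIR):
    patterns = ("*.conf", "*.conflist", "*.json")
    files = [path for pattern in patterns
             for path in glob.glob(os.path.join(conf_dir, pattern))]
    return sorted(files)   # an empty list means the runtime stays NotReady
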
Aug 13 00:53:30.431400 kubelet[1897]: I0813 00:53:30.431331 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/74a6494a-8623-4c74-9278-4787bbdf7313-host-proc-sys-kernel\") pod \"cilium-xpzb8\" (UID: \"74a6494a-8623-4c74-9278-4787bbdf7313\") " pod="kube-system/cilium-xpzb8" Aug 13 00:53:30.431817 kubelet[1897]: I0813 00:53:30.431777 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/74a6494a-8623-4c74-9278-4787bbdf7313-hubble-tls\") pod \"cilium-xpzb8\" (UID: \"74a6494a-8623-4c74-9278-4787bbdf7313\") " pod="kube-system/cilium-xpzb8" Aug 13 00:53:30.432076 kubelet[1897]: I0813 00:53:30.432048 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/74a6494a-8623-4c74-9278-4787bbdf7313-hostproc\") pod \"cilium-xpzb8\" (UID: \"74a6494a-8623-4c74-9278-4787bbdf7313\") " pod="kube-system/cilium-xpzb8" Aug 13 00:53:30.432250 kubelet[1897]: I0813 00:53:30.432224 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/74a6494a-8623-4c74-9278-4787bbdf7313-cilium-cgroup\") pod \"cilium-xpzb8\" (UID: \"74a6494a-8623-4c74-9278-4787bbdf7313\") " pod="kube-system/cilium-xpzb8" Aug 13 00:53:30.432440 kubelet[1897]: I0813 00:53:30.432414 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/74a6494a-8623-4c74-9278-4787bbdf7313-xtables-lock\") pod \"cilium-xpzb8\" (UID: \"74a6494a-8623-4c74-9278-4787bbdf7313\") " pod="kube-system/cilium-xpzb8" Aug 13 00:53:30.432627 kubelet[1897]: I0813 00:53:30.432599 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/74a6494a-8623-4c74-9278-4787bbdf7313-host-proc-sys-net\") pod \"cilium-xpzb8\" (UID: \"74a6494a-8623-4c74-9278-4787bbdf7313\") " pod="kube-system/cilium-xpzb8" Aug 13 00:53:30.432796 kubelet[1897]: I0813 00:53:30.432766 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/74a6494a-8623-4c74-9278-4787bbdf7313-cilium-config-path\") pod \"cilium-xpzb8\" (UID: \"74a6494a-8623-4c74-9278-4787bbdf7313\") " pod="kube-system/cilium-xpzb8" Aug 13 00:53:30.432957 kubelet[1897]: I0813 00:53:30.432932 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/74a6494a-8623-4c74-9278-4787bbdf7313-lib-modules\") pod \"cilium-xpzb8\" (UID: \"74a6494a-8623-4c74-9278-4787bbdf7313\") " pod="kube-system/cilium-xpzb8" Aug 13 00:53:30.433120 kubelet[1897]: I0813 00:53:30.433090 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/74a6494a-8623-4c74-9278-4787bbdf7313-cni-path\") pod \"cilium-xpzb8\" (UID: \"74a6494a-8623-4c74-9278-4787bbdf7313\") " pod="kube-system/cilium-xpzb8" Aug 13 00:53:30.433302 kubelet[1897]: I0813 00:53:30.433260 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/74a6494a-8623-4c74-9278-4787bbdf7313-cilium-run\") pod \"cilium-xpzb8\" (UID: \"74a6494a-8623-4c74-9278-4787bbdf7313\") " pod="kube-system/cilium-xpzb8" Aug 13 00:53:30.433492 kubelet[1897]: I0813 00:53:30.433466 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bggvw\" (UniqueName: \"kubernetes.io/projected/74a6494a-8623-4c74-9278-4787bbdf7313-kube-api-access-bggvw\") pod \"cilium-xpzb8\" (UID: \"74a6494a-8623-4c74-9278-4787bbdf7313\") " pod="kube-system/cilium-xpzb8" Aug 13 00:53:30.433670 kubelet[1897]: I0813 00:53:30.433645 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/74a6494a-8623-4c74-9278-4787bbdf7313-bpf-maps\") pod \"cilium-xpzb8\" (UID: \"74a6494a-8623-4c74-9278-4787bbdf7313\") " pod="kube-system/cilium-xpzb8" Aug 13 00:53:30.433859 kubelet[1897]: I0813 00:53:30.433825 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/74a6494a-8623-4c74-9278-4787bbdf7313-etc-cni-netd\") pod \"cilium-xpzb8\" (UID: \"74a6494a-8623-4c74-9278-4787bbdf7313\") " pod="kube-system/cilium-xpzb8" Aug 13 00:53:30.434039 kubelet[1897]: I0813 00:53:30.434005 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/74a6494a-8623-4c74-9278-4787bbdf7313-clustermesh-secrets\") pod \"cilium-xpzb8\" (UID: \"74a6494a-8623-4c74-9278-4787bbdf7313\") " pod="kube-system/cilium-xpzb8" Aug 13 00:53:30.434196 kubelet[1897]: I0813 00:53:30.434173 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/74a6494a-8623-4c74-9278-4787bbdf7313-cilium-ipsec-secrets\") pod \"cilium-xpzb8\" (UID: \"74a6494a-8623-4c74-9278-4787bbdf7313\") " pod="kube-system/cilium-xpzb8" Aug 13 00:53:30.678185 kubelet[1897]: E0813 00:53:30.677986 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:53:30.682351 env[1192]: time="2025-08-13T00:53:30.681419172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xpzb8,Uid:74a6494a-8623-4c74-9278-4787bbdf7313,Namespace:kube-system,Attempt:0,}" Aug 13 00:53:30.687969 sshd[3655]: pam_unix(sshd:session): session closed for user core Aug 13 00:53:30.702996 systemd[1]: Started sshd@26-143.198.60.143:22-139.178.68.195:60090.service. Aug 13 00:53:30.715821 systemd[1]: sshd@25-143.198.60.143:22-139.178.68.195:60080.service: Deactivated successfully. Aug 13 00:53:30.717061 systemd[1]: session-26.scope: Deactivated successfully. Aug 13 00:53:30.719913 systemd-logind[1175]: Session 26 logged out. Waiting for processes to exit. Aug 13 00:53:30.726537 systemd-logind[1175]: Removed session 26. Aug 13 00:53:30.732590 env[1192]: time="2025-08-13T00:53:30.732331348Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:53:30.733010 env[1192]: time="2025-08-13T00:53:30.732935078Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:53:30.733225 env[1192]: time="2025-08-13T00:53:30.733174118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:53:30.734095 env[1192]: time="2025-08-13T00:53:30.734021069Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e11bbd0c59308becd7525d982fd1de73b9707c62abe5968f041be3890494c9db pid=3679 runtime=io.containerd.runc.v2 Aug 13 00:53:30.783230 systemd[1]: Started cri-containerd-e11bbd0c59308becd7525d982fd1de73b9707c62abe5968f041be3890494c9db.scope. Aug 13 00:53:30.800619 sshd[3672]: Accepted publickey for core from 139.178.68.195 port 60090 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78 Aug 13 00:53:30.803578 sshd[3672]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:53:30.816480 systemd-logind[1175]: New session 27 of user core. Aug 13 00:53:30.821652 systemd[1]: Started session-27.scope. Aug 13 00:53:30.874553 env[1192]: time="2025-08-13T00:53:30.874484819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xpzb8,Uid:74a6494a-8623-4c74-9278-4787bbdf7313,Namespace:kube-system,Attempt:0,} returns sandbox id \"e11bbd0c59308becd7525d982fd1de73b9707c62abe5968f041be3890494c9db\"" Aug 13 00:53:30.876874 kubelet[1897]: E0813 00:53:30.876433 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:53:30.893400 env[1192]: time="2025-08-13T00:53:30.893264134Z" level=info msg="CreateContainer within sandbox \"e11bbd0c59308becd7525d982fd1de73b9707c62abe5968f041be3890494c9db\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 13 00:53:30.924407 env[1192]: time="2025-08-13T00:53:30.924062715Z" level=info msg="CreateContainer within sandbox \"e11bbd0c59308becd7525d982fd1de73b9707c62abe5968f041be3890494c9db\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ccc863bce289536e38c93e9f6cc154794d2ba2af1ac03071cbe0c6642d12d424\"" Aug 13 00:53:30.931620 env[1192]: time="2025-08-13T00:53:30.928808178Z" level=info msg="StartContainer for \"ccc863bce289536e38c93e9f6cc154794d2ba2af1ac03071cbe0c6642d12d424\"" Aug 13 00:53:30.959623 systemd[1]: Started cri-containerd-ccc863bce289536e38c93e9f6cc154794d2ba2af1ac03071cbe0c6642d12d424.scope. Aug 13 00:53:31.001545 systemd[1]: cri-containerd-ccc863bce289536e38c93e9f6cc154794d2ba2af1ac03071cbe0c6642d12d424.scope: Deactivated successfully. 
Aug 13 00:53:31.022688 env[1192]: time="2025-08-13T00:53:31.022611254Z" level=info msg="shim disconnected" id=ccc863bce289536e38c93e9f6cc154794d2ba2af1ac03071cbe0c6642d12d424 Aug 13 00:53:31.023456 env[1192]: time="2025-08-13T00:53:31.023401211Z" level=warning msg="cleaning up after shim disconnected" id=ccc863bce289536e38c93e9f6cc154794d2ba2af1ac03071cbe0c6642d12d424 namespace=k8s.io Aug 13 00:53:31.023770 env[1192]: time="2025-08-13T00:53:31.023691091Z" level=info msg="cleaning up dead shim" Aug 13 00:53:31.055060 env[1192]: time="2025-08-13T00:53:31.054981181Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:53:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3747 runtime=io.containerd.runc.v2\ntime=\"2025-08-13T00:53:31Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/ccc863bce289536e38c93e9f6cc154794d2ba2af1ac03071cbe0c6642d12d424/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Aug 13 00:53:31.055604 env[1192]: time="2025-08-13T00:53:31.055434184Z" level=error msg="copy shim log" error="read /proc/self/fd/37: file already closed" Aug 13 00:53:31.056463 env[1192]: time="2025-08-13T00:53:31.056385697Z" level=error msg="Failed to pipe stdout of container \"ccc863bce289536e38c93e9f6cc154794d2ba2af1ac03071cbe0c6642d12d424\"" error="reading from a closed fifo" Aug 13 00:53:31.058552 env[1192]: time="2025-08-13T00:53:31.058451953Z" level=error msg="Failed to pipe stderr of container \"ccc863bce289536e38c93e9f6cc154794d2ba2af1ac03071cbe0c6642d12d424\"" error="reading from a closed fifo" Aug 13 00:53:31.063318 env[1192]: time="2025-08-13T00:53:31.061701008Z" level=error msg="StartContainer for \"ccc863bce289536e38c93e9f6cc154794d2ba2af1ac03071cbe0c6642d12d424\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Aug 13 00:53:31.068001 kubelet[1897]: E0813 00:53:31.067920 1897 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="ccc863bce289536e38c93e9f6cc154794d2ba2af1ac03071cbe0c6642d12d424" Aug 13 00:53:31.073614 kubelet[1897]: E0813 00:53:31.073462 1897 kuberuntime_manager.go:1358] "Unhandled Error" err=< Aug 13 00:53:31.073614 kubelet[1897]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Aug 13 00:53:31.073614 kubelet[1897]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Aug 13 00:53:31.073614 kubelet[1897]: rm /hostbin/cilium-mount Aug 13 00:53:31.074076 kubelet[1897]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bggvw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-xpzb8_kube-system(74a6494a-8623-4c74-9278-4787bbdf7313): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Aug 13 00:53:31.074076 kubelet[1897]: > logger="UnhandledError" Aug 13 00:53:31.075814 kubelet[1897]: E0813 00:53:31.075510 1897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-xpzb8" podUID="74a6494a-8623-4c74-9278-4787bbdf7313" Aug 13 00:53:31.083722 env[1192]: time="2025-08-13T00:53:31.083522305Z" level=info msg="StopPodSandbox for \"e11bbd0c59308becd7525d982fd1de73b9707c62abe5968f041be3890494c9db\"" Aug 13 00:53:31.084045 env[1192]: time="2025-08-13T00:53:31.083875953Z" level=info msg="Container to stop \"ccc863bce289536e38c93e9f6cc154794d2ba2af1ac03071cbe0c6642d12d424\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:53:31.106900 systemd[1]: cri-containerd-e11bbd0c59308becd7525d982fd1de73b9707c62abe5968f041be3890494c9db.scope: Deactivated successfully. 
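Illustrative aside, not part of the captured log: the init container whose spec the kubelet dumps in the "Unhandled Error" entry above is easier to read restated as a Go value using the k8s.io/api/core/v1 types. Every field value below is copied from that dump; the package and function names are assumptions of this sketch, and probes, resources, and the AppArmor profile are omitted.

// Sketch reconstruction of the failing "mount-cgroup" init container spec
// dumped by the kubelet above. Field values come from the log entry; names
// and omissions are this sketch's own.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func mountCgroupInitContainer() corev1.Container {
	return corev1.Container{
		Name:  "mount-cgroup",
		Image: "quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5",
		Command: []string{"sh", "-ec",
			`cp /usr/bin/cilium-mount /hostbin/cilium-mount;
nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
rm /hostbin/cilium-mount`,
		},
		Env: []corev1.EnvVar{
			{Name: "CGROUP_ROOT", Value: "/run/cilium/cgroupv2"},
			{Name: "BIN_PATH", Value: "/opt/cni/bin"},
		},
		VolumeMounts: []corev1.VolumeMount{
			{Name: "hostproc", MountPath: "/hostproc"},
			{Name: "cni-path", MountPath: "/hostbin"},
			{Name: "kube-api-access-bggvw", MountPath: "/var/run/secrets/kubernetes.io/serviceaccount", ReadOnly: true},
		},
		SecurityContext: &corev1.SecurityContext{
			Capabilities: &corev1.Capabilities{
				Add:  []corev1.Capability{"SYS_ADMIN", "SYS_CHROOT", "SYS_PTRACE"},
				Drop: []corev1.Capability{"ALL"},
			},
			// SELinuxOptions as dumped; note that the failed write reported by
			// runc above targets /proc/self/attr/keycreate, the SELinux
			// keyring-label attribute set during container init.
			SELinuxOptions: &corev1.SELinuxOptions{Type: "spc_t", Level: "s0"},
		},
		TerminationMessagePath:   "/dev/termination-log",
		TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
		ImagePullPolicy:          corev1.PullIfNotPresent,
	}
}

func main() {
	c := mountCgroupInitContainer()
	fmt.Println(c.Name, c.Image)
}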
Aug 13 00:53:31.163849 env[1192]: time="2025-08-13T00:53:31.163508225Z" level=info msg="shim disconnected" id=e11bbd0c59308becd7525d982fd1de73b9707c62abe5968f041be3890494c9db Aug 13 00:53:31.163849 env[1192]: time="2025-08-13T00:53:31.163584601Z" level=warning msg="cleaning up after shim disconnected" id=e11bbd0c59308becd7525d982fd1de73b9707c62abe5968f041be3890494c9db namespace=k8s.io Aug 13 00:53:31.163849 env[1192]: time="2025-08-13T00:53:31.163597230Z" level=info msg="cleaning up dead shim" Aug 13 00:53:31.178966 env[1192]: time="2025-08-13T00:53:31.178891085Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:53:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3779 runtime=io.containerd.runc.v2\n" Aug 13 00:53:31.179500 env[1192]: time="2025-08-13T00:53:31.179431673Z" level=info msg="TearDown network for sandbox \"e11bbd0c59308becd7525d982fd1de73b9707c62abe5968f041be3890494c9db\" successfully" Aug 13 00:53:31.179500 env[1192]: time="2025-08-13T00:53:31.179474016Z" level=info msg="StopPodSandbox for \"e11bbd0c59308becd7525d982fd1de73b9707c62abe5968f041be3890494c9db\" returns successfully" Aug 13 00:53:31.251951 kubelet[1897]: I0813 00:53:31.251868 1897 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/74a6494a-8623-4c74-9278-4787bbdf7313-bpf-maps\") pod \"74a6494a-8623-4c74-9278-4787bbdf7313\" (UID: \"74a6494a-8623-4c74-9278-4787bbdf7313\") " Aug 13 00:53:31.251951 kubelet[1897]: I0813 00:53:31.251963 1897 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/74a6494a-8623-4c74-9278-4787bbdf7313-clustermesh-secrets\") pod \"74a6494a-8623-4c74-9278-4787bbdf7313\" (UID: \"74a6494a-8623-4c74-9278-4787bbdf7313\") " Aug 13 00:53:31.252399 kubelet[1897]: I0813 00:53:31.251984 1897 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/74a6494a-8623-4c74-9278-4787bbdf7313-cilium-ipsec-secrets\") pod \"74a6494a-8623-4c74-9278-4787bbdf7313\" (UID: \"74a6494a-8623-4c74-9278-4787bbdf7313\") " Aug 13 00:53:31.252399 kubelet[1897]: I0813 00:53:31.252001 1897 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/74a6494a-8623-4c74-9278-4787bbdf7313-host-proc-sys-net\") pod \"74a6494a-8623-4c74-9278-4787bbdf7313\" (UID: \"74a6494a-8623-4c74-9278-4787bbdf7313\") " Aug 13 00:53:31.252399 kubelet[1897]: I0813 00:53:31.252019 1897 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/74a6494a-8623-4c74-9278-4787bbdf7313-etc-cni-netd\") pod \"74a6494a-8623-4c74-9278-4787bbdf7313\" (UID: \"74a6494a-8623-4c74-9278-4787bbdf7313\") " Aug 13 00:53:31.252399 kubelet[1897]: I0813 00:53:31.252044 1897 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/74a6494a-8623-4c74-9278-4787bbdf7313-cilium-config-path\") pod \"74a6494a-8623-4c74-9278-4787bbdf7313\" (UID: \"74a6494a-8623-4c74-9278-4787bbdf7313\") " Aug 13 00:53:31.252399 kubelet[1897]: I0813 00:53:31.252059 1897 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/74a6494a-8623-4c74-9278-4787bbdf7313-cilium-run\") pod \"74a6494a-8623-4c74-9278-4787bbdf7313\" (UID: 
\"74a6494a-8623-4c74-9278-4787bbdf7313\") " Aug 13 00:53:31.252399 kubelet[1897]: I0813 00:53:31.252078 1897 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/74a6494a-8623-4c74-9278-4787bbdf7313-host-proc-sys-kernel\") pod \"74a6494a-8623-4c74-9278-4787bbdf7313\" (UID: \"74a6494a-8623-4c74-9278-4787bbdf7313\") " Aug 13 00:53:31.252399 kubelet[1897]: I0813 00:53:31.252099 1897 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/74a6494a-8623-4c74-9278-4787bbdf7313-lib-modules\") pod \"74a6494a-8623-4c74-9278-4787bbdf7313\" (UID: \"74a6494a-8623-4c74-9278-4787bbdf7313\") " Aug 13 00:53:31.252399 kubelet[1897]: I0813 00:53:31.252127 1897 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bggvw\" (UniqueName: \"kubernetes.io/projected/74a6494a-8623-4c74-9278-4787bbdf7313-kube-api-access-bggvw\") pod \"74a6494a-8623-4c74-9278-4787bbdf7313\" (UID: \"74a6494a-8623-4c74-9278-4787bbdf7313\") " Aug 13 00:53:31.252399 kubelet[1897]: I0813 00:53:31.252152 1897 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/74a6494a-8623-4c74-9278-4787bbdf7313-hubble-tls\") pod \"74a6494a-8623-4c74-9278-4787bbdf7313\" (UID: \"74a6494a-8623-4c74-9278-4787bbdf7313\") " Aug 13 00:53:31.252399 kubelet[1897]: I0813 00:53:31.252182 1897 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/74a6494a-8623-4c74-9278-4787bbdf7313-hostproc\") pod \"74a6494a-8623-4c74-9278-4787bbdf7313\" (UID: \"74a6494a-8623-4c74-9278-4787bbdf7313\") " Aug 13 00:53:31.252399 kubelet[1897]: I0813 00:53:31.252203 1897 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/74a6494a-8623-4c74-9278-4787bbdf7313-cilium-cgroup\") pod \"74a6494a-8623-4c74-9278-4787bbdf7313\" (UID: \"74a6494a-8623-4c74-9278-4787bbdf7313\") " Aug 13 00:53:31.252399 kubelet[1897]: I0813 00:53:31.252228 1897 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/74a6494a-8623-4c74-9278-4787bbdf7313-xtables-lock\") pod \"74a6494a-8623-4c74-9278-4787bbdf7313\" (UID: \"74a6494a-8623-4c74-9278-4787bbdf7313\") " Aug 13 00:53:31.252399 kubelet[1897]: I0813 00:53:31.252244 1897 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/74a6494a-8623-4c74-9278-4787bbdf7313-cni-path\") pod \"74a6494a-8623-4c74-9278-4787bbdf7313\" (UID: \"74a6494a-8623-4c74-9278-4787bbdf7313\") " Aug 13 00:53:31.252399 kubelet[1897]: I0813 00:53:31.252367 1897 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74a6494a-8623-4c74-9278-4787bbdf7313-cni-path" (OuterVolumeSpecName: "cni-path") pod "74a6494a-8623-4c74-9278-4787bbdf7313" (UID: "74a6494a-8623-4c74-9278-4787bbdf7313"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:53:31.252399 kubelet[1897]: I0813 00:53:31.252403 1897 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74a6494a-8623-4c74-9278-4787bbdf7313-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "74a6494a-8623-4c74-9278-4787bbdf7313" (UID: "74a6494a-8623-4c74-9278-4787bbdf7313"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:53:31.253424 kubelet[1897]: I0813 00:53:31.252807 1897 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74a6494a-8623-4c74-9278-4787bbdf7313-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "74a6494a-8623-4c74-9278-4787bbdf7313" (UID: "74a6494a-8623-4c74-9278-4787bbdf7313"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:53:31.253887 kubelet[1897]: I0813 00:53:31.253814 1897 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74a6494a-8623-4c74-9278-4787bbdf7313-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "74a6494a-8623-4c74-9278-4787bbdf7313" (UID: "74a6494a-8623-4c74-9278-4787bbdf7313"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:53:31.254951 kubelet[1897]: I0813 00:53:31.254894 1897 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74a6494a-8623-4c74-9278-4787bbdf7313-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "74a6494a-8623-4c74-9278-4787bbdf7313" (UID: "74a6494a-8623-4c74-9278-4787bbdf7313"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:53:31.255115 kubelet[1897]: I0813 00:53:31.254971 1897 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74a6494a-8623-4c74-9278-4787bbdf7313-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "74a6494a-8623-4c74-9278-4787bbdf7313" (UID: "74a6494a-8623-4c74-9278-4787bbdf7313"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:53:31.256543 kubelet[1897]: I0813 00:53:31.256494 1897 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74a6494a-8623-4c74-9278-4787bbdf7313-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "74a6494a-8623-4c74-9278-4787bbdf7313" (UID: "74a6494a-8623-4c74-9278-4787bbdf7313"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:53:31.258061 kubelet[1897]: I0813 00:53:31.258012 1897 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74a6494a-8623-4c74-9278-4787bbdf7313-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "74a6494a-8623-4c74-9278-4787bbdf7313" (UID: "74a6494a-8623-4c74-9278-4787bbdf7313"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 00:53:31.258213 kubelet[1897]: I0813 00:53:31.258101 1897 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74a6494a-8623-4c74-9278-4787bbdf7313-hostproc" (OuterVolumeSpecName: "hostproc") pod "74a6494a-8623-4c74-9278-4787bbdf7313" (UID: "74a6494a-8623-4c74-9278-4787bbdf7313"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:53:31.258557 kubelet[1897]: I0813 00:53:31.258510 1897 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74a6494a-8623-4c74-9278-4787bbdf7313-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "74a6494a-8623-4c74-9278-4787bbdf7313" (UID: "74a6494a-8623-4c74-9278-4787bbdf7313"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:53:31.258775 kubelet[1897]: I0813 00:53:31.258564 1897 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74a6494a-8623-4c74-9278-4787bbdf7313-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "74a6494a-8623-4c74-9278-4787bbdf7313" (UID: "74a6494a-8623-4c74-9278-4787bbdf7313"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:53:31.262524 kubelet[1897]: I0813 00:53:31.262459 1897 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74a6494a-8623-4c74-9278-4787bbdf7313-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "74a6494a-8623-4c74-9278-4787bbdf7313" (UID: "74a6494a-8623-4c74-9278-4787bbdf7313"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 00:53:31.262728 kubelet[1897]: I0813 00:53:31.262600 1897 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74a6494a-8623-4c74-9278-4787bbdf7313-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "74a6494a-8623-4c74-9278-4787bbdf7313" (UID: "74a6494a-8623-4c74-9278-4787bbdf7313"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 00:53:31.266032 kubelet[1897]: I0813 00:53:31.265962 1897 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74a6494a-8623-4c74-9278-4787bbdf7313-kube-api-access-bggvw" (OuterVolumeSpecName: "kube-api-access-bggvw") pod "74a6494a-8623-4c74-9278-4787bbdf7313" (UID: "74a6494a-8623-4c74-9278-4787bbdf7313"). InnerVolumeSpecName "kube-api-access-bggvw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 00:53:31.267900 kubelet[1897]: I0813 00:53:31.267698 1897 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74a6494a-8623-4c74-9278-4787bbdf7313-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "74a6494a-8623-4c74-9278-4787bbdf7313" (UID: "74a6494a-8623-4c74-9278-4787bbdf7313"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 00:53:31.352830 kubelet[1897]: I0813 00:53:31.352760 1897 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/74a6494a-8623-4c74-9278-4787bbdf7313-hubble-tls\") on node \"ci-3510.3.8-a-e4f4484119\" DevicePath \"\"" Aug 13 00:53:31.352830 kubelet[1897]: I0813 00:53:31.352803 1897 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/74a6494a-8623-4c74-9278-4787bbdf7313-hostproc\") on node \"ci-3510.3.8-a-e4f4484119\" DevicePath \"\"" Aug 13 00:53:31.352830 kubelet[1897]: I0813 00:53:31.352816 1897 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/74a6494a-8623-4c74-9278-4787bbdf7313-cilium-cgroup\") on node \"ci-3510.3.8-a-e4f4484119\" DevicePath \"\"" Aug 13 00:53:31.352830 kubelet[1897]: I0813 00:53:31.352833 1897 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/74a6494a-8623-4c74-9278-4787bbdf7313-xtables-lock\") on node \"ci-3510.3.8-a-e4f4484119\" DevicePath \"\"" Aug 13 00:53:31.352830 kubelet[1897]: I0813 00:53:31.352845 1897 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/74a6494a-8623-4c74-9278-4787bbdf7313-cni-path\") on node \"ci-3510.3.8-a-e4f4484119\" DevicePath \"\"" Aug 13 00:53:31.352830 kubelet[1897]: I0813 00:53:31.352858 1897 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/74a6494a-8623-4c74-9278-4787bbdf7313-bpf-maps\") on node \"ci-3510.3.8-a-e4f4484119\" DevicePath \"\"" Aug 13 00:53:31.353432 kubelet[1897]: I0813 00:53:31.352872 1897 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/74a6494a-8623-4c74-9278-4787bbdf7313-clustermesh-secrets\") on node \"ci-3510.3.8-a-e4f4484119\" DevicePath \"\"" Aug 13 00:53:31.353432 kubelet[1897]: I0813 00:53:31.352883 1897 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/74a6494a-8623-4c74-9278-4787bbdf7313-cilium-ipsec-secrets\") on node \"ci-3510.3.8-a-e4f4484119\" DevicePath \"\"" Aug 13 00:53:31.353432 kubelet[1897]: I0813 00:53:31.352935 1897 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/74a6494a-8623-4c74-9278-4787bbdf7313-host-proc-sys-net\") on node \"ci-3510.3.8-a-e4f4484119\" DevicePath \"\"" Aug 13 00:53:31.353432 kubelet[1897]: I0813 00:53:31.352950 1897 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/74a6494a-8623-4c74-9278-4787bbdf7313-etc-cni-netd\") on node \"ci-3510.3.8-a-e4f4484119\" DevicePath \"\"" Aug 13 00:53:31.353432 kubelet[1897]: I0813 00:53:31.352964 1897 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/74a6494a-8623-4c74-9278-4787bbdf7313-cilium-config-path\") on node \"ci-3510.3.8-a-e4f4484119\" DevicePath \"\"" Aug 13 00:53:31.353432 kubelet[1897]: I0813 00:53:31.352978 1897 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/74a6494a-8623-4c74-9278-4787bbdf7313-cilium-run\") on node \"ci-3510.3.8-a-e4f4484119\" DevicePath \"\"" Aug 13 00:53:31.353432 kubelet[1897]: I0813 00:53:31.352991 1897 reconciler_common.go:299] "Volume 
detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/74a6494a-8623-4c74-9278-4787bbdf7313-host-proc-sys-kernel\") on node \"ci-3510.3.8-a-e4f4484119\" DevicePath \"\"" Aug 13 00:53:31.353432 kubelet[1897]: I0813 00:53:31.353000 1897 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/74a6494a-8623-4c74-9278-4787bbdf7313-lib-modules\") on node \"ci-3510.3.8-a-e4f4484119\" DevicePath \"\"" Aug 13 00:53:31.353432 kubelet[1897]: I0813 00:53:31.353033 1897 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bggvw\" (UniqueName: \"kubernetes.io/projected/74a6494a-8623-4c74-9278-4787bbdf7313-kube-api-access-bggvw\") on node \"ci-3510.3.8-a-e4f4484119\" DevicePath \"\"" Aug 13 00:53:31.565157 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e11bbd0c59308becd7525d982fd1de73b9707c62abe5968f041be3890494c9db-shm.mount: Deactivated successfully. Aug 13 00:53:31.565365 systemd[1]: var-lib-kubelet-pods-74a6494a\x2d8623\x2d4c74\x2d9278\x2d4787bbdf7313-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbggvw.mount: Deactivated successfully. Aug 13 00:53:31.565513 systemd[1]: var-lib-kubelet-pods-74a6494a\x2d8623\x2d4c74\x2d9278\x2d4787bbdf7313-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Aug 13 00:53:31.565633 systemd[1]: var-lib-kubelet-pods-74a6494a\x2d8623\x2d4c74\x2d9278\x2d4787bbdf7313-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Aug 13 00:53:31.565730 systemd[1]: var-lib-kubelet-pods-74a6494a\x2d8623\x2d4c74\x2d9278\x2d4787bbdf7313-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Aug 13 00:53:32.088025 kubelet[1897]: I0813 00:53:32.087978 1897 scope.go:117] "RemoveContainer" containerID="ccc863bce289536e38c93e9f6cc154794d2ba2af1ac03071cbe0c6642d12d424" Aug 13 00:53:32.095780 systemd[1]: Removed slice kubepods-burstable-pod74a6494a_8623_4c74_9278_4787bbdf7313.slice. Aug 13 00:53:32.098910 env[1192]: time="2025-08-13T00:53:32.098816556Z" level=info msg="RemoveContainer for \"ccc863bce289536e38c93e9f6cc154794d2ba2af1ac03071cbe0c6642d12d424\"" Aug 13 00:53:32.106491 env[1192]: time="2025-08-13T00:53:32.106424274Z" level=info msg="RemoveContainer for \"ccc863bce289536e38c93e9f6cc154794d2ba2af1ac03071cbe0c6642d12d424\" returns successfully" Aug 13 00:53:32.177315 systemd[1]: Created slice kubepods-burstable-poda64c3fd6_aa82_4f72_a94c_88e0ef035514.slice. 
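Illustrative aside, not part of the captured log: the systemd mount units deactivated at 00:53:31.565 above encode kubelet volume paths with systemd's unit-name escaping ('/' becomes '-', a literal '-' becomes \x2d, '~' becomes \x7e). The sketch below decodes such a unit name back into its mount-point path; it only handles the escape sequences that actually appear in this log, not the full set of systemd escaping rules, and the function name is this sketch's own.

// Decode a systemd mount unit name (as logged above) back into the path it
// represents. Minimal sketch: covers the \xNN escapes seen in this log only.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func unitToPath(unit string) string {
	name := strings.TrimSuffix(unit, ".mount")
	var out strings.Builder
	out.WriteByte('/') // mount unit names are relative to the filesystem root
	for i := 0; i < len(name); i++ {
		switch {
		case name[i] == '-':
			// An unescaped '-' separates path components.
			out.WriteByte('/')
		case name[i] == '\\' && i+3 < len(name) && name[i+1] == 'x':
			// \xNN is a hex-escaped literal byte, e.g. \x2d for '-' and \x7e for '~'.
			if b, err := strconv.ParseUint(name[i+2:i+4], 16, 8); err == nil {
				out.WriteByte(byte(b))
				i += 3
				continue
			}
			out.WriteByte(name[i])
		default:
			out.WriteByte(name[i])
		}
	}
	return out.String()
}

func main() {
	unit := `var-lib-kubelet-pods-74a6494a\x2d8623\x2d4c74\x2d9278\x2d4787bbdf7313-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount`
	fmt.Println(unitToPath(unit))
	// /var/lib/kubelet/pods/74a6494a-8623-4c74-9278-4787bbdf7313/volumes/kubernetes.io~projected/hubble-tls
}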
Aug 13 00:53:32.259100 kubelet[1897]: I0813 00:53:32.259033 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a64c3fd6-aa82-4f72-a94c-88e0ef035514-lib-modules\") pod \"cilium-wzq95\" (UID: \"a64c3fd6-aa82-4f72-a94c-88e0ef035514\") " pod="kube-system/cilium-wzq95" Aug 13 00:53:32.259564 kubelet[1897]: I0813 00:53:32.259520 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a64c3fd6-aa82-4f72-a94c-88e0ef035514-host-proc-sys-kernel\") pod \"cilium-wzq95\" (UID: \"a64c3fd6-aa82-4f72-a94c-88e0ef035514\") " pod="kube-system/cilium-wzq95" Aug 13 00:53:32.259825 kubelet[1897]: I0813 00:53:32.259795 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a64c3fd6-aa82-4f72-a94c-88e0ef035514-host-proc-sys-net\") pod \"cilium-wzq95\" (UID: \"a64c3fd6-aa82-4f72-a94c-88e0ef035514\") " pod="kube-system/cilium-wzq95" Aug 13 00:53:32.260009 kubelet[1897]: I0813 00:53:32.259976 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a64c3fd6-aa82-4f72-a94c-88e0ef035514-hubble-tls\") pod \"cilium-wzq95\" (UID: \"a64c3fd6-aa82-4f72-a94c-88e0ef035514\") " pod="kube-system/cilium-wzq95" Aug 13 00:53:32.260201 kubelet[1897]: I0813 00:53:32.260168 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a64c3fd6-aa82-4f72-a94c-88e0ef035514-cilium-run\") pod \"cilium-wzq95\" (UID: \"a64c3fd6-aa82-4f72-a94c-88e0ef035514\") " pod="kube-system/cilium-wzq95" Aug 13 00:53:32.260370 kubelet[1897]: I0813 00:53:32.260347 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a64c3fd6-aa82-4f72-a94c-88e0ef035514-bpf-maps\") pod \"cilium-wzq95\" (UID: \"a64c3fd6-aa82-4f72-a94c-88e0ef035514\") " pod="kube-system/cilium-wzq95" Aug 13 00:53:32.260554 kubelet[1897]: I0813 00:53:32.260538 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a64c3fd6-aa82-4f72-a94c-88e0ef035514-xtables-lock\") pod \"cilium-wzq95\" (UID: \"a64c3fd6-aa82-4f72-a94c-88e0ef035514\") " pod="kube-system/cilium-wzq95" Aug 13 00:53:32.260712 kubelet[1897]: I0813 00:53:32.260691 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a64c3fd6-aa82-4f72-a94c-88e0ef035514-clustermesh-secrets\") pod \"cilium-wzq95\" (UID: \"a64c3fd6-aa82-4f72-a94c-88e0ef035514\") " pod="kube-system/cilium-wzq95" Aug 13 00:53:32.260862 kubelet[1897]: I0813 00:53:32.260842 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a64c3fd6-aa82-4f72-a94c-88e0ef035514-cilium-config-path\") pod \"cilium-wzq95\" (UID: \"a64c3fd6-aa82-4f72-a94c-88e0ef035514\") " pod="kube-system/cilium-wzq95" Aug 13 00:53:32.260987 kubelet[1897]: I0813 00:53:32.260965 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-hmfrq\" (UniqueName: \"kubernetes.io/projected/a64c3fd6-aa82-4f72-a94c-88e0ef035514-kube-api-access-hmfrq\") pod \"cilium-wzq95\" (UID: \"a64c3fd6-aa82-4f72-a94c-88e0ef035514\") " pod="kube-system/cilium-wzq95" Aug 13 00:53:32.261139 kubelet[1897]: I0813 00:53:32.261123 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a64c3fd6-aa82-4f72-a94c-88e0ef035514-hostproc\") pod \"cilium-wzq95\" (UID: \"a64c3fd6-aa82-4f72-a94c-88e0ef035514\") " pod="kube-system/cilium-wzq95" Aug 13 00:53:32.261291 kubelet[1897]: I0813 00:53:32.261261 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a64c3fd6-aa82-4f72-a94c-88e0ef035514-cilium-cgroup\") pod \"cilium-wzq95\" (UID: \"a64c3fd6-aa82-4f72-a94c-88e0ef035514\") " pod="kube-system/cilium-wzq95" Aug 13 00:53:32.261386 kubelet[1897]: I0813 00:53:32.261372 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a64c3fd6-aa82-4f72-a94c-88e0ef035514-cni-path\") pod \"cilium-wzq95\" (UID: \"a64c3fd6-aa82-4f72-a94c-88e0ef035514\") " pod="kube-system/cilium-wzq95" Aug 13 00:53:32.261500 kubelet[1897]: I0813 00:53:32.261487 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a64c3fd6-aa82-4f72-a94c-88e0ef035514-etc-cni-netd\") pod \"cilium-wzq95\" (UID: \"a64c3fd6-aa82-4f72-a94c-88e0ef035514\") " pod="kube-system/cilium-wzq95" Aug 13 00:53:32.261844 kubelet[1897]: I0813 00:53:32.261800 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a64c3fd6-aa82-4f72-a94c-88e0ef035514-cilium-ipsec-secrets\") pod \"cilium-wzq95\" (UID: \"a64c3fd6-aa82-4f72-a94c-88e0ef035514\") " pod="kube-system/cilium-wzq95" Aug 13 00:53:32.481999 kubelet[1897]: E0813 00:53:32.481896 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:53:32.483077 env[1192]: time="2025-08-13T00:53:32.482670951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wzq95,Uid:a64c3fd6-aa82-4f72-a94c-88e0ef035514,Namespace:kube-system,Attempt:0,}" Aug 13 00:53:32.524101 env[1192]: time="2025-08-13T00:53:32.523708258Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:53:32.524101 env[1192]: time="2025-08-13T00:53:32.523812452Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:53:32.524101 env[1192]: time="2025-08-13T00:53:32.523833219Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:53:32.524509 env[1192]: time="2025-08-13T00:53:32.524238253Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1eb77d0ec01b32989c8aaa2797cbcd026f13fc75c73c81885fb2259e3e3b2640 pid=3807 runtime=io.containerd.runc.v2 Aug 13 00:53:32.548499 systemd[1]: Started cri-containerd-1eb77d0ec01b32989c8aaa2797cbcd026f13fc75c73c81885fb2259e3e3b2640.scope. Aug 13 00:53:32.599315 kubelet[1897]: I0813 00:53:32.598898 1897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74a6494a-8623-4c74-9278-4787bbdf7313" path="/var/lib/kubelet/pods/74a6494a-8623-4c74-9278-4787bbdf7313/volumes" Aug 13 00:53:32.607232 env[1192]: time="2025-08-13T00:53:32.607183697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wzq95,Uid:a64c3fd6-aa82-4f72-a94c-88e0ef035514,Namespace:kube-system,Attempt:0,} returns sandbox id \"1eb77d0ec01b32989c8aaa2797cbcd026f13fc75c73c81885fb2259e3e3b2640\"" Aug 13 00:53:32.610518 kubelet[1897]: E0813 00:53:32.608649 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:53:32.622920 env[1192]: time="2025-08-13T00:53:32.622789991Z" level=info msg="CreateContainer within sandbox \"1eb77d0ec01b32989c8aaa2797cbcd026f13fc75c73c81885fb2259e3e3b2640\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 13 00:53:32.648819 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2846398676.mount: Deactivated successfully. Aug 13 00:53:32.660216 env[1192]: time="2025-08-13T00:53:32.660136364Z" level=info msg="CreateContainer within sandbox \"1eb77d0ec01b32989c8aaa2797cbcd026f13fc75c73c81885fb2259e3e3b2640\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"786471805ae3180d54ced0975e6f8a581b6026ca43683b096ee968dbcc906dcf\"" Aug 13 00:53:32.664042 env[1192]: time="2025-08-13T00:53:32.663336193Z" level=info msg="StartContainer for \"786471805ae3180d54ced0975e6f8a581b6026ca43683b096ee968dbcc906dcf\"" Aug 13 00:53:32.703578 systemd[1]: Started cri-containerd-786471805ae3180d54ced0975e6f8a581b6026ca43683b096ee968dbcc906dcf.scope. Aug 13 00:53:32.811660 env[1192]: time="2025-08-13T00:53:32.811508919Z" level=info msg="StartContainer for \"786471805ae3180d54ced0975e6f8a581b6026ca43683b096ee968dbcc906dcf\" returns successfully" Aug 13 00:53:32.865033 systemd[1]: cri-containerd-786471805ae3180d54ced0975e6f8a581b6026ca43683b096ee968dbcc906dcf.scope: Deactivated successfully. 
Aug 13 00:53:32.919034 env[1192]: time="2025-08-13T00:53:32.918945362Z" level=info msg="shim disconnected" id=786471805ae3180d54ced0975e6f8a581b6026ca43683b096ee968dbcc906dcf Aug 13 00:53:32.919647 env[1192]: time="2025-08-13T00:53:32.919574832Z" level=warning msg="cleaning up after shim disconnected" id=786471805ae3180d54ced0975e6f8a581b6026ca43683b096ee968dbcc906dcf namespace=k8s.io Aug 13 00:53:32.919880 env[1192]: time="2025-08-13T00:53:32.919850309Z" level=info msg="cleaning up dead shim" Aug 13 00:53:32.947208 env[1192]: time="2025-08-13T00:53:32.947139914Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:53:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3886 runtime=io.containerd.runc.v2\n" Aug 13 00:53:33.093820 kubelet[1897]: E0813 00:53:33.093076 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:53:33.100597 env[1192]: time="2025-08-13T00:53:33.100524734Z" level=info msg="CreateContainer within sandbox \"1eb77d0ec01b32989c8aaa2797cbcd026f13fc75c73c81885fb2259e3e3b2640\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 13 00:53:33.120869 env[1192]: time="2025-08-13T00:53:33.120811175Z" level=info msg="CreateContainer within sandbox \"1eb77d0ec01b32989c8aaa2797cbcd026f13fc75c73c81885fb2259e3e3b2640\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"13477b60d537b78bfe3e3f07f8bfe244b01ffc771d1190af4dfb01e5c6362082\"" Aug 13 00:53:33.121922 env[1192]: time="2025-08-13T00:53:33.121867956Z" level=info msg="StartContainer for \"13477b60d537b78bfe3e3f07f8bfe244b01ffc771d1190af4dfb01e5c6362082\"" Aug 13 00:53:33.181079 systemd[1]: Started cri-containerd-13477b60d537b78bfe3e3f07f8bfe244b01ffc771d1190af4dfb01e5c6362082.scope. Aug 13 00:53:33.230553 env[1192]: time="2025-08-13T00:53:33.230482969Z" level=info msg="StartContainer for \"13477b60d537b78bfe3e3f07f8bfe244b01ffc771d1190af4dfb01e5c6362082\" returns successfully" Aug 13 00:53:33.260357 systemd[1]: cri-containerd-13477b60d537b78bfe3e3f07f8bfe244b01ffc771d1190af4dfb01e5c6362082.scope: Deactivated successfully. Aug 13 00:53:33.293692 env[1192]: time="2025-08-13T00:53:33.293609635Z" level=info msg="shim disconnected" id=13477b60d537b78bfe3e3f07f8bfe244b01ffc771d1190af4dfb01e5c6362082 Aug 13 00:53:33.294544 env[1192]: time="2025-08-13T00:53:33.294478537Z" level=warning msg="cleaning up after shim disconnected" id=13477b60d537b78bfe3e3f07f8bfe244b01ffc771d1190af4dfb01e5c6362082 namespace=k8s.io Aug 13 00:53:33.294811 env[1192]: time="2025-08-13T00:53:33.294780312Z" level=info msg="cleaning up dead shim" Aug 13 00:53:33.308313 env[1192]: time="2025-08-13T00:53:33.308225862Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:53:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3948 runtime=io.containerd.runc.v2\n" Aug 13 00:53:33.565458 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-786471805ae3180d54ced0975e6f8a581b6026ca43683b096ee968dbcc906dcf-rootfs.mount: Deactivated successfully. 
Aug 13 00:53:34.102544 kubelet[1897]: E0813 00:53:34.102498 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:53:34.115837 env[1192]: time="2025-08-13T00:53:34.115760190Z" level=info msg="CreateContainer within sandbox \"1eb77d0ec01b32989c8aaa2797cbcd026f13fc75c73c81885fb2259e3e3b2640\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 13 00:53:34.142029 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2082036318.mount: Deactivated successfully. Aug 13 00:53:34.154323 kubelet[1897]: W0813 00:53:34.152975 1897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74a6494a_8623_4c74_9278_4787bbdf7313.slice/cri-containerd-ccc863bce289536e38c93e9f6cc154794d2ba2af1ac03071cbe0c6642d12d424.scope WatchSource:0}: container "ccc863bce289536e38c93e9f6cc154794d2ba2af1ac03071cbe0c6642d12d424" in namespace "k8s.io": not found Aug 13 00:53:34.161719 env[1192]: time="2025-08-13T00:53:34.161652009Z" level=info msg="CreateContainer within sandbox \"1eb77d0ec01b32989c8aaa2797cbcd026f13fc75c73c81885fb2259e3e3b2640\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"69fb2ed506c973425426130ea7c79ee922d3d8e7804bc376b5dec7c2da546282\"" Aug 13 00:53:34.165048 env[1192]: time="2025-08-13T00:53:34.164975063Z" level=info msg="StartContainer for \"69fb2ed506c973425426130ea7c79ee922d3d8e7804bc376b5dec7c2da546282\"" Aug 13 00:53:34.207994 systemd[1]: Started cri-containerd-69fb2ed506c973425426130ea7c79ee922d3d8e7804bc376b5dec7c2da546282.scope. Aug 13 00:53:34.262694 env[1192]: time="2025-08-13T00:53:34.262614086Z" level=info msg="StartContainer for \"69fb2ed506c973425426130ea7c79ee922d3d8e7804bc376b5dec7c2da546282\" returns successfully" Aug 13 00:53:34.265099 systemd[1]: cri-containerd-69fb2ed506c973425426130ea7c79ee922d3d8e7804bc376b5dec7c2da546282.scope: Deactivated successfully. 
Aug 13 00:53:34.311339 env[1192]: time="2025-08-13T00:53:34.311246016Z" level=info msg="shim disconnected" id=69fb2ed506c973425426130ea7c79ee922d3d8e7804bc376b5dec7c2da546282 Aug 13 00:53:34.311339 env[1192]: time="2025-08-13T00:53:34.311331462Z" level=warning msg="cleaning up after shim disconnected" id=69fb2ed506c973425426130ea7c79ee922d3d8e7804bc376b5dec7c2da546282 namespace=k8s.io Aug 13 00:53:34.311339 env[1192]: time="2025-08-13T00:53:34.311346749Z" level=info msg="cleaning up dead shim" Aug 13 00:53:34.327095 env[1192]: time="2025-08-13T00:53:34.326996914Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:53:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4005 runtime=io.containerd.runc.v2\n" Aug 13 00:53:34.614856 kubelet[1897]: E0813 00:53:34.614795 1897 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Aug 13 00:53:35.108959 kubelet[1897]: E0813 00:53:35.108907 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:53:35.114926 env[1192]: time="2025-08-13T00:53:35.114850813Z" level=info msg="CreateContainer within sandbox \"1eb77d0ec01b32989c8aaa2797cbcd026f13fc75c73c81885fb2259e3e3b2640\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 13 00:53:35.139591 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4073947188.mount: Deactivated successfully. Aug 13 00:53:35.145351 env[1192]: time="2025-08-13T00:53:35.145259417Z" level=info msg="CreateContainer within sandbox \"1eb77d0ec01b32989c8aaa2797cbcd026f13fc75c73c81885fb2259e3e3b2640\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9c8cfc791e33a66b343cea554063004c7dea4f5e5a41ecb537ab7ad04a2a66d2\"" Aug 13 00:53:35.147963 env[1192]: time="2025-08-13T00:53:35.146539935Z" level=info msg="StartContainer for \"9c8cfc791e33a66b343cea554063004c7dea4f5e5a41ecb537ab7ad04a2a66d2\"" Aug 13 00:53:35.192191 systemd[1]: Started cri-containerd-9c8cfc791e33a66b343cea554063004c7dea4f5e5a41ecb537ab7ad04a2a66d2.scope. Aug 13 00:53:35.256290 env[1192]: time="2025-08-13T00:53:35.256188847Z" level=info msg="StartContainer for \"9c8cfc791e33a66b343cea554063004c7dea4f5e5a41ecb537ab7ad04a2a66d2\" returns successfully" Aug 13 00:53:35.263563 systemd[1]: cri-containerd-9c8cfc791e33a66b343cea554063004c7dea4f5e5a41ecb537ab7ad04a2a66d2.scope: Deactivated successfully. Aug 13 00:53:35.302171 env[1192]: time="2025-08-13T00:53:35.302017352Z" level=info msg="shim disconnected" id=9c8cfc791e33a66b343cea554063004c7dea4f5e5a41ecb537ab7ad04a2a66d2 Aug 13 00:53:35.302171 env[1192]: time="2025-08-13T00:53:35.302155609Z" level=warning msg="cleaning up after shim disconnected" id=9c8cfc791e33a66b343cea554063004c7dea4f5e5a41ecb537ab7ad04a2a66d2 namespace=k8s.io Aug 13 00:53:35.302171 env[1192]: time="2025-08-13T00:53:35.302173705Z" level=info msg="cleaning up dead shim" Aug 13 00:53:35.316805 env[1192]: time="2025-08-13T00:53:35.316724195Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:53:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4060 runtime=io.containerd.runc.v2\n" Aug 13 00:53:35.565783 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c8cfc791e33a66b343cea554063004c7dea4f5e5a41ecb537ab7ad04a2a66d2-rootfs.mount: Deactivated successfully. 
Aug 13 00:53:36.114683 kubelet[1897]: E0813 00:53:36.114620 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:53:36.122431 env[1192]: time="2025-08-13T00:53:36.122355265Z" level=info msg="CreateContainer within sandbox \"1eb77d0ec01b32989c8aaa2797cbcd026f13fc75c73c81885fb2259e3e3b2640\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 13 00:53:36.151329 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1609289891.mount: Deactivated successfully. Aug 13 00:53:36.166816 env[1192]: time="2025-08-13T00:53:36.166723686Z" level=info msg="CreateContainer within sandbox \"1eb77d0ec01b32989c8aaa2797cbcd026f13fc75c73c81885fb2259e3e3b2640\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c8f0cd88cfb32863454ccd4afd30940f94d54148045a9cc2154e970c7801131d\"" Aug 13 00:53:36.167994 env[1192]: time="2025-08-13T00:53:36.167938720Z" level=info msg="StartContainer for \"c8f0cd88cfb32863454ccd4afd30940f94d54148045a9cc2154e970c7801131d\"" Aug 13 00:53:36.213202 systemd[1]: Started cri-containerd-c8f0cd88cfb32863454ccd4afd30940f94d54148045a9cc2154e970c7801131d.scope. Aug 13 00:53:36.269904 env[1192]: time="2025-08-13T00:53:36.269580630Z" level=info msg="StartContainer for \"c8f0cd88cfb32863454ccd4afd30940f94d54148045a9cc2154e970c7801131d\" returns successfully" Aug 13 00:53:36.950328 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Aug 13 00:53:37.123767 kubelet[1897]: E0813 00:53:37.123728 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:53:37.140739 kubelet[1897]: I0813 00:53:37.140666 1897 setters.go:618] "Node became not ready" node="ci-3510.3.8-a-e4f4484119" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T00:53:37Z","lastTransitionTime":"2025-08-13T00:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Aug 13 00:53:37.277189 kubelet[1897]: W0813 00:53:37.276942 1897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda64c3fd6_aa82_4f72_a94c_88e0ef035514.slice/cri-containerd-786471805ae3180d54ced0975e6f8a581b6026ca43683b096ee968dbcc906dcf.scope WatchSource:0}: task 786471805ae3180d54ced0975e6f8a581b6026ca43683b096ee968dbcc906dcf not found Aug 13 00:53:37.390183 systemd[1]: run-containerd-runc-k8s.io-c8f0cd88cfb32863454ccd4afd30940f94d54148045a9cc2154e970c7801131d-runc.rsweGY.mount: Deactivated successfully. Aug 13 00:53:38.484733 kubelet[1897]: E0813 00:53:38.484680 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:53:39.590708 systemd[1]: run-containerd-runc-k8s.io-c8f0cd88cfb32863454ccd4afd30940f94d54148045a9cc2154e970c7801131d-runc.vAPrG5.mount: Deactivated successfully. 
Aug 13 00:53:40.397235 kubelet[1897]: W0813 00:53:40.397171 1897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda64c3fd6_aa82_4f72_a94c_88e0ef035514.slice/cri-containerd-13477b60d537b78bfe3e3f07f8bfe244b01ffc771d1190af4dfb01e5c6362082.scope WatchSource:0}: task 13477b60d537b78bfe3e3f07f8bfe244b01ffc771d1190af4dfb01e5c6362082 not found Aug 13 00:53:40.848822 systemd-networkd[1002]: lxc_health: Link UP Aug 13 00:53:40.860387 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Aug 13 00:53:40.860016 systemd-networkd[1002]: lxc_health: Gained carrier Aug 13 00:53:41.822947 systemd[1]: run-containerd-runc-k8s.io-c8f0cd88cfb32863454ccd4afd30940f94d54148045a9cc2154e970c7801131d-runc.zgDN38.mount: Deactivated successfully. Aug 13 00:53:42.403494 systemd-networkd[1002]: lxc_health: Gained IPv6LL Aug 13 00:53:42.486379 kubelet[1897]: E0813 00:53:42.486333 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:53:42.529626 kubelet[1897]: I0813 00:53:42.529513 1897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wzq95" podStartSLOduration=10.529447208 podStartE2EDuration="10.529447208s" podCreationTimestamp="2025-08-13 00:53:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:53:37.172060124 +0000 UTC m=+113.055967147" watchObservedRunningTime="2025-08-13 00:53:42.529447208 +0000 UTC m=+118.413354243" Aug 13 00:53:43.142799 kubelet[1897]: E0813 00:53:43.142745 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:53:43.507876 kubelet[1897]: W0813 00:53:43.507803 1897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda64c3fd6_aa82_4f72_a94c_88e0ef035514.slice/cri-containerd-69fb2ed506c973425426130ea7c79ee922d3d8e7804bc376b5dec7c2da546282.scope WatchSource:0}: task 69fb2ed506c973425426130ea7c79ee922d3d8e7804bc376b5dec7c2da546282 not found Aug 13 00:53:44.116092 systemd[1]: run-containerd-runc-k8s.io-c8f0cd88cfb32863454ccd4afd30940f94d54148045a9cc2154e970c7801131d-runc.94G9YJ.mount: Deactivated successfully. 
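Aside, not part of the captured log: the podStartSLOduration of 10.529447208 s reported for cilium-wzq95 above matches the difference between the timestamps in that same entry, observedRunningTime 00:53:42.529447208 minus podCreationTimestamp 00:53:32 (UTC); the zero-valued pulling timestamps indicate no image pull time was counted.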
Aug 13 00:53:44.143764 kubelet[1897]: E0813 00:53:44.143631 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 00:53:44.430463 env[1192]: time="2025-08-13T00:53:44.429816775Z" level=info msg="StopPodSandbox for \"9130c6dda3d861fd7fc9cb5da8f0be14a915a7f7e84fd21461084319a855d04b\"" Aug 13 00:53:44.430463 env[1192]: time="2025-08-13T00:53:44.430002221Z" level=info msg="TearDown network for sandbox \"9130c6dda3d861fd7fc9cb5da8f0be14a915a7f7e84fd21461084319a855d04b\" successfully" Aug 13 00:53:44.430463 env[1192]: time="2025-08-13T00:53:44.430059154Z" level=info msg="StopPodSandbox for \"9130c6dda3d861fd7fc9cb5da8f0be14a915a7f7e84fd21461084319a855d04b\" returns successfully" Aug 13 00:53:44.431846 env[1192]: time="2025-08-13T00:53:44.431785136Z" level=info msg="RemovePodSandbox for \"9130c6dda3d861fd7fc9cb5da8f0be14a915a7f7e84fd21461084319a855d04b\"" Aug 13 00:53:44.432014 env[1192]: time="2025-08-13T00:53:44.431842546Z" level=info msg="Forcibly stopping sandbox \"9130c6dda3d861fd7fc9cb5da8f0be14a915a7f7e84fd21461084319a855d04b\"" Aug 13 00:53:44.432014 env[1192]: time="2025-08-13T00:53:44.431966723Z" level=info msg="TearDown network for sandbox \"9130c6dda3d861fd7fc9cb5da8f0be14a915a7f7e84fd21461084319a855d04b\" successfully" Aug 13 00:53:44.437970 env[1192]: time="2025-08-13T00:53:44.437898909Z" level=info msg="RemovePodSandbox \"9130c6dda3d861fd7fc9cb5da8f0be14a915a7f7e84fd21461084319a855d04b\" returns successfully" Aug 13 00:53:44.438906 env[1192]: time="2025-08-13T00:53:44.438838549Z" level=info msg="StopPodSandbox for \"fe51e9d8c27238f37015dd6f19ab9f3b5473f6c3f85492fcfd09aa5bf5ca754c\"" Aug 13 00:53:44.439346 env[1192]: time="2025-08-13T00:53:44.439244398Z" level=info msg="TearDown network for sandbox \"fe51e9d8c27238f37015dd6f19ab9f3b5473f6c3f85492fcfd09aa5bf5ca754c\" successfully" Aug 13 00:53:44.439529 env[1192]: time="2025-08-13T00:53:44.439496646Z" level=info msg="StopPodSandbox for \"fe51e9d8c27238f37015dd6f19ab9f3b5473f6c3f85492fcfd09aa5bf5ca754c\" returns successfully" Aug 13 00:53:44.440245 env[1192]: time="2025-08-13T00:53:44.440207702Z" level=info msg="RemovePodSandbox for \"fe51e9d8c27238f37015dd6f19ab9f3b5473f6c3f85492fcfd09aa5bf5ca754c\"" Aug 13 00:53:44.440441 env[1192]: time="2025-08-13T00:53:44.440393590Z" level=info msg="Forcibly stopping sandbox \"fe51e9d8c27238f37015dd6f19ab9f3b5473f6c3f85492fcfd09aa5bf5ca754c\"" Aug 13 00:53:44.440597 env[1192]: time="2025-08-13T00:53:44.440573422Z" level=info msg="TearDown network for sandbox \"fe51e9d8c27238f37015dd6f19ab9f3b5473f6c3f85492fcfd09aa5bf5ca754c\" successfully" Aug 13 00:53:44.445375 env[1192]: time="2025-08-13T00:53:44.445261360Z" level=info msg="RemovePodSandbox \"fe51e9d8c27238f37015dd6f19ab9f3b5473f6c3f85492fcfd09aa5bf5ca754c\" returns successfully" Aug 13 00:53:44.446435 env[1192]: time="2025-08-13T00:53:44.446396299Z" level=info msg="StopPodSandbox for \"e11bbd0c59308becd7525d982fd1de73b9707c62abe5968f041be3890494c9db\"" Aug 13 00:53:44.446935 env[1192]: time="2025-08-13T00:53:44.446869190Z" level=info msg="TearDown network for sandbox \"e11bbd0c59308becd7525d982fd1de73b9707c62abe5968f041be3890494c9db\" successfully" Aug 13 00:53:44.447064 env[1192]: time="2025-08-13T00:53:44.447044179Z" level=info msg="StopPodSandbox for \"e11bbd0c59308becd7525d982fd1de73b9707c62abe5968f041be3890494c9db\" returns successfully" Aug 13 00:53:44.447710 env[1192]: 
time="2025-08-13T00:53:44.447672469Z" level=info msg="RemovePodSandbox for \"e11bbd0c59308becd7525d982fd1de73b9707c62abe5968f041be3890494c9db\"" Aug 13 00:53:44.447902 env[1192]: time="2025-08-13T00:53:44.447857088Z" level=info msg="Forcibly stopping sandbox \"e11bbd0c59308becd7525d982fd1de73b9707c62abe5968f041be3890494c9db\"" Aug 13 00:53:44.448185 env[1192]: time="2025-08-13T00:53:44.448142066Z" level=info msg="TearDown network for sandbox \"e11bbd0c59308becd7525d982fd1de73b9707c62abe5968f041be3890494c9db\" successfully" Aug 13 00:53:44.451502 env[1192]: time="2025-08-13T00:53:44.451452157Z" level=info msg="RemovePodSandbox \"e11bbd0c59308becd7525d982fd1de73b9707c62abe5968f041be3890494c9db\" returns successfully" Aug 13 00:53:46.357152 systemd[1]: run-containerd-runc-k8s.io-c8f0cd88cfb32863454ccd4afd30940f94d54148045a9cc2154e970c7801131d-runc.MvdUMK.mount: Deactivated successfully. Aug 13 00:53:46.482647 sshd[3672]: pam_unix(sshd:session): session closed for user core Aug 13 00:53:46.488514 systemd-logind[1175]: Session 27 logged out. Waiting for processes to exit. Aug 13 00:53:46.491913 systemd[1]: sshd@26-143.198.60.143:22-139.178.68.195:60090.service: Deactivated successfully. Aug 13 00:53:46.493115 systemd[1]: session-27.scope: Deactivated successfully. Aug 13 00:53:46.495537 systemd-logind[1175]: Removed session 27. Aug 13 00:53:46.623794 kubelet[1897]: W0813 00:53:46.622774 1897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda64c3fd6_aa82_4f72_a94c_88e0ef035514.slice/cri-containerd-9c8cfc791e33a66b343cea554063004c7dea4f5e5a41ecb537ab7ad04a2a66d2.scope WatchSource:0}: task 9c8cfc791e33a66b343cea554063004c7dea4f5e5a41ecb537ab7ad04a2a66d2 not found