May 17 00:41:23.945866 kernel: Linux version 5.15.182-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri May 16 23:09:52 -00 2025
May 17 00:41:23.945895 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=4aad7caeadb0359f379975532748a0b4ae6bb9b229507353e0f5ae84cb9335a0
May 17 00:41:23.945909 kernel: BIOS-provided physical RAM map:
May 17 00:41:23.945915 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 17 00:41:23.945921 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 17 00:41:23.945928 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 17 00:41:23.945936 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
May 17 00:41:23.945942 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
May 17 00:41:23.945951 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 17 00:41:23.945958 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 17 00:41:23.945964 kernel: NX (Execute Disable) protection: active
May 17 00:41:23.945970 kernel: SMBIOS 2.8 present.
May 17 00:41:23.945977 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
May 17 00:41:23.945983 kernel: Hypervisor detected: KVM
May 17 00:41:23.945992 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 17 00:41:23.946001 kernel: kvm-clock: cpu 0, msr 6319a001, primary cpu clock
May 17 00:41:23.946008 kernel: kvm-clock: using sched offset of 3654239332 cycles
May 17 00:41:23.946016 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 17 00:41:23.946026 kernel: tsc: Detected 2494.168 MHz processor
May 17 00:41:23.946033 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 17 00:41:23.946041 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 17 00:41:23.946049 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
May 17 00:41:23.946056 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 17 00:41:23.946066 kernel: ACPI: Early table checksum verification disabled
May 17 00:41:23.946073 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
May 17 00:41:23.946080 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:41:23.946087 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:41:23.946095 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:41:23.946102 kernel: ACPI: FACS 0x000000007FFE0000 000040
May 17 00:41:23.946110 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:41:23.946121 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:41:23.946134 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:41:23.946150 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:41:23.946187 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
May 17 00:41:23.946198 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
May 17 00:41:23.946209 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
May 17 00:41:23.946219 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
May 17 00:41:23.946230 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
May 17 00:41:23.946240 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
May 17 00:41:23.946251 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
May 17 00:41:23.946268 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
May 17 00:41:23.946279 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
May 17 00:41:23.946290 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
May 17 00:41:23.946301 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
May 17 00:41:23.946313 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
May 17 00:41:23.946324 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
May 17 00:41:23.946338 kernel: Zone ranges:
May 17 00:41:23.946349 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 17 00:41:23.946360 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
May 17 00:41:23.946370 kernel: Normal empty
May 17 00:41:23.946381 kernel: Movable zone start for each node
May 17 00:41:23.946392 kernel: Early memory node ranges
May 17 00:41:23.946403 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 17 00:41:23.946413 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
May 17 00:41:23.946425 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
May 17 00:41:23.946441 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 17 00:41:23.946459 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 17 00:41:23.946471 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
May 17 00:41:23.952306 kernel: ACPI: PM-Timer IO Port: 0x608
May 17 00:41:23.952340 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 17 00:41:23.952349 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 17 00:41:23.952358 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 17 00:41:23.952366 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 17 00:41:23.952374 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 17 00:41:23.952388 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 17 00:41:23.952403 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 17 00:41:23.952426 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 17 00:41:23.952438 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 17 00:41:23.952451 kernel: TSC deadline timer available
May 17 00:41:23.952462 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
May 17 00:41:23.952474 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
May 17 00:41:23.952484 kernel: Booting paravirtualized kernel on KVM
May 17 00:41:23.952496 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 17 00:41:23.952511 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
May 17 00:41:23.952519 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
May 17 00:41:23.952527 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
May 17 00:41:23.952535 kernel: pcpu-alloc: [0] 0 1
May 17 00:41:23.952543 kernel: kvm-guest: stealtime: cpu 0, msr 7dc1c0c0
May 17 00:41:23.952551 kernel: kvm-guest: PV spinlocks disabled, no host support
May 17 00:41:23.952559 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
May 17 00:41:23.952567 kernel: Policy zone: DMA32
May 17 00:41:23.952576 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=4aad7caeadb0359f379975532748a0b4ae6bb9b229507353e0f5ae84cb9335a0
May 17 00:41:23.952587 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 17 00:41:23.952595 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 17 00:41:23.952603 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
May 17 00:41:23.952611 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 17 00:41:23.952619 kernel: Memory: 1973276K/2096612K available (12294K kernel code, 2276K rwdata, 13724K rodata, 47472K init, 4108K bss, 123076K reserved, 0K cma-reserved)
May 17 00:41:23.952627 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 17 00:41:23.952635 kernel: Kernel/User page tables isolation: enabled
May 17 00:41:23.952642 kernel: ftrace: allocating 34585 entries in 136 pages
May 17 00:41:23.952653 kernel: ftrace: allocated 136 pages with 2 groups
May 17 00:41:23.952661 kernel: rcu: Hierarchical RCU implementation.
May 17 00:41:23.952669 kernel: rcu: RCU event tracing is enabled.
May 17 00:41:23.952677 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 17 00:41:23.952685 kernel: Rude variant of Tasks RCU enabled.
May 17 00:41:23.952693 kernel: Tracing variant of Tasks RCU enabled.
May 17 00:41:23.952700 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 17 00:41:23.952708 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 17 00:41:23.952716 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
May 17 00:41:23.952726 kernel: random: crng init done
May 17 00:41:23.952733 kernel: Console: colour VGA+ 80x25
May 17 00:41:23.952741 kernel: printk: console [tty0] enabled
May 17 00:41:23.952749 kernel: printk: console [ttyS0] enabled
May 17 00:41:23.952757 kernel: ACPI: Core revision 20210730
May 17 00:41:23.952765 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 17 00:41:23.952809 kernel: APIC: Switch to symmetric I/O mode setup
May 17 00:41:23.952819 kernel: x2apic enabled
May 17 00:41:23.952827 kernel: Switched APIC routing to physical x2apic.
May 17 00:41:23.952834 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 17 00:41:23.952845 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f3b49f4dd, max_idle_ns: 440795279072 ns
May 17 00:41:23.952853 kernel: Calibrating delay loop (skipped) preset value.. 4988.33 BogoMIPS (lpj=2494168)
May 17 00:41:23.952867 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
May 17 00:41:23.952875 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
May 17 00:41:23.952883 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 17 00:41:23.952891 kernel: Spectre V2 : Mitigation: Retpolines
May 17 00:41:23.952898 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 17 00:41:23.952906 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
May 17 00:41:23.952917 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 17 00:41:23.952934 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
May 17 00:41:23.952942 kernel: MDS: Mitigation: Clear CPU buffers
May 17 00:41:23.952953 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
May 17 00:41:23.952961 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 17 00:41:23.952969 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 17 00:41:23.952977 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 17 00:41:23.952986 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 17 00:41:23.952994 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
May 17 00:41:23.953003 kernel: Freeing SMP alternatives memory: 32K
May 17 00:41:23.953013 kernel: pid_max: default: 32768 minimum: 301
May 17 00:41:23.953022 kernel: LSM: Security Framework initializing
May 17 00:41:23.953030 kernel: SELinux: Initializing.
May 17 00:41:23.953039 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 17 00:41:23.953047 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 17 00:41:23.953055 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
May 17 00:41:23.953064 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
May 17 00:41:23.953075 kernel: signal: max sigframe size: 1776
May 17 00:41:23.953083 kernel: rcu: Hierarchical SRCU implementation.
May 17 00:41:23.953091 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
May 17 00:41:23.953099 kernel: smp: Bringing up secondary CPUs ...
May 17 00:41:23.953108 kernel: x86: Booting SMP configuration:
May 17 00:41:23.953116 kernel: .... node #0, CPUs: #1
May 17 00:41:23.953124 kernel: kvm-clock: cpu 1, msr 6319a041, secondary cpu clock
May 17 00:41:23.953132 kernel: kvm-guest: stealtime: cpu 1, msr 7dd1c0c0
May 17 00:41:23.953141 kernel: smp: Brought up 1 node, 2 CPUs
May 17 00:41:23.953151 kernel: smpboot: Max logical packages: 1
May 17 00:41:23.953160 kernel: smpboot: Total of 2 processors activated (9976.67 BogoMIPS)
May 17 00:41:23.953168 kernel: devtmpfs: initialized
May 17 00:41:23.953176 kernel: x86/mm: Memory block size: 128MB
May 17 00:41:23.953184 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 17 00:41:23.953193 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 17 00:41:23.953201 kernel: pinctrl core: initialized pinctrl subsystem
May 17 00:41:23.953209 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 17 00:41:23.953218 kernel: audit: initializing netlink subsys (disabled)
May 17 00:41:23.953229 kernel: audit: type=2000 audit(1747442483.819:1): state=initialized audit_enabled=0 res=1
May 17 00:41:23.953237 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 17 00:41:23.953245 kernel: thermal_sys: Registered thermal governor 'user_space'
May 17 00:41:23.953254 kernel: cpuidle: using governor menu
May 17 00:41:23.953263 kernel: ACPI: bus type PCI registered
May 17 00:41:23.953274 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 17 00:41:23.953285 kernel: dca service started, version 1.12.1
May 17 00:41:23.953293 kernel: PCI: Using configuration type 1 for base access
May 17 00:41:23.953302 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 17 00:41:23.953313 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
May 17 00:41:23.953321 kernel: ACPI: Added _OSI(Module Device)
May 17 00:41:23.953330 kernel: ACPI: Added _OSI(Processor Device)
May 17 00:41:23.953338 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 17 00:41:23.953346 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 17 00:41:23.953355 kernel: ACPI: Added _OSI(Linux-Dell-Video)
May 17 00:41:23.953363 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
May 17 00:41:23.953371 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
May 17 00:41:23.953380 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 17 00:41:23.953391 kernel: ACPI: Interpreter enabled
May 17 00:41:23.953399 kernel: ACPI: PM: (supports S0 S5)
May 17 00:41:23.953407 kernel: ACPI: Using IOAPIC for interrupt routing
May 17 00:41:23.953415 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 17 00:41:23.953424 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
May 17 00:41:23.953432 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 17 00:41:23.953686 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
May 17 00:41:23.953814 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
May 17 00:41:23.953830 kernel: acpiphp: Slot [3] registered
May 17 00:41:23.953839 kernel: acpiphp: Slot [4] registered
May 17 00:41:23.953847 kernel: acpiphp: Slot [5] registered
May 17 00:41:23.953856 kernel: acpiphp: Slot [6] registered
May 17 00:41:23.953864 kernel: acpiphp: Slot [7] registered
May 17 00:41:23.953872 kernel: acpiphp: Slot [8] registered
May 17 00:41:23.953881 kernel: acpiphp: Slot [9] registered
May 17 00:41:23.953890 kernel: acpiphp: Slot [10] registered
May 17 00:41:23.953898 kernel: acpiphp: Slot [11] registered
May 17 00:41:23.953909 kernel: acpiphp: Slot [12] registered
May 17 00:41:23.953917 kernel: acpiphp: Slot [13] registered
May 17 00:41:23.953925 kernel: acpiphp: Slot [14] registered
May 17 00:41:23.953934 kernel: acpiphp: Slot [15] registered
May 17 00:41:23.953942 kernel: acpiphp: Slot [16] registered
May 17 00:41:23.953950 kernel: acpiphp: Slot [17] registered
May 17 00:41:23.953959 kernel: acpiphp: Slot [18] registered
May 17 00:41:23.953967 kernel: acpiphp: Slot [19] registered
May 17 00:41:23.953975 kernel: acpiphp: Slot [20] registered
May 17 00:41:23.953986 kernel: acpiphp: Slot [21] registered
May 17 00:41:23.953994 kernel: acpiphp: Slot [22] registered
May 17 00:41:23.954002 kernel: acpiphp: Slot [23] registered
May 17 00:41:23.954010 kernel: acpiphp: Slot [24] registered
May 17 00:41:23.954019 kernel: acpiphp: Slot [25] registered
May 17 00:41:23.954027 kernel: acpiphp: Slot [26] registered
May 17 00:41:23.954035 kernel: acpiphp: Slot [27] registered
May 17 00:41:23.954043 kernel: acpiphp: Slot [28] registered
May 17 00:41:23.954051 kernel: acpiphp: Slot [29] registered
May 17 00:41:23.954059 kernel: acpiphp: Slot [30] registered
May 17 00:41:23.954070 kernel: acpiphp: Slot [31] registered
May 17 00:41:23.954078 kernel: PCI host bridge to bus 0000:00
May 17 00:41:23.954221 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 17 00:41:23.955278 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 17 00:41:23.955375 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 17 00:41:23.955465 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
May 17 00:41:23.955577 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
May 17 00:41:23.955713 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 17 00:41:23.955931 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
May 17 00:41:23.956048 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
May 17 00:41:23.956158 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
May 17 00:41:23.956253 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
May 17 00:41:23.956343 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
May 17 00:41:23.956525 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
May 17 00:41:23.956622 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
May 17 00:41:23.956716 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
May 17 00:41:23.956851 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
May 17 00:41:23.956946 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
May 17 00:41:23.957064 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
May 17 00:41:23.957157 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
May 17 00:41:23.957254 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
May 17 00:41:23.957359 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
May 17 00:41:23.957455 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
May 17 00:41:23.957574 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
May 17 00:41:23.957714 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
May 17 00:41:23.957908 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
May 17 00:41:23.958048 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 17 00:41:23.958194 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
May 17 00:41:23.958307 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
May 17 00:41:23.958402 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
May 17 00:41:23.958513 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
May 17 00:41:23.958622 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
May 17 00:41:23.958726 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
May 17 00:41:23.958838 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
May 17 00:41:23.958931 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
May 17 00:41:23.959092 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
May 17 00:41:23.959295 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
May 17 00:41:23.959451 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
May 17 00:41:23.959561 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
May 17 00:41:23.959680 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
May 17 00:41:23.971882 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
May 17 00:41:23.972131 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
May 17 00:41:23.972281 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
May 17 00:41:23.972478 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
May 17 00:41:23.972620 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
May 17 00:41:23.972752 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
May 17 00:41:23.972920 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
May 17 00:41:23.973086 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
May 17 00:41:23.973226 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
May 17 00:41:23.973354 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
May 17 00:41:23.973371 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 17 00:41:23.973385 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 17 00:41:23.973400 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 17 00:41:23.973414 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 17 00:41:23.973434 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
May 17 00:41:23.973448 kernel: iommu: Default domain type: Translated
May 17 00:41:23.973463 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 17 00:41:23.973600 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
May 17 00:41:23.973734 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 17 00:41:23.973886 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
May 17 00:41:23.973905 kernel: vgaarb: loaded
May 17 00:41:23.973921 kernel: pps_core: LinuxPPS API ver. 1 registered
May 17 00:41:23.973936 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
May 17 00:41:23.973956 kernel: PTP clock support registered
May 17 00:41:23.973971 kernel: PCI: Using ACPI for IRQ routing
May 17 00:41:23.973986 kernel: PCI: pci_cache_line_size set to 64 bytes
May 17 00:41:23.974000 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
May 17 00:41:23.974015 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
May 17 00:41:23.974029 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 17 00:41:23.974044 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 17 00:41:23.974059 kernel: clocksource: Switched to clocksource kvm-clock
May 17 00:41:23.974074 kernel: VFS: Disk quotas dquot_6.6.0
May 17 00:41:23.974123 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 17 00:41:23.974139 kernel: pnp: PnP ACPI init
May 17 00:41:23.974154 kernel: pnp: PnP ACPI: found 4 devices
May 17 00:41:23.974169 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 17 00:41:23.974184 kernel: NET: Registered PF_INET protocol family
May 17 00:41:23.974199 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 17 00:41:23.974214 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
May 17 00:41:23.974229 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 17 00:41:23.974248 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
May 17 00:41:23.974263 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
May 17 00:41:23.974275 kernel: TCP: Hash tables configured (established 16384 bind 16384)
May 17 00:41:23.974288 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
May 17 00:41:23.974302 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
May 17 00:41:23.974317 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 17 00:41:23.974333 kernel: NET: Registered PF_XDP protocol family
May 17 00:41:23.974475 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 17 00:41:23.974595 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 17 00:41:23.974732 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 17 00:41:23.974864 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
May 17 00:41:23.975002 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
May 17 00:41:23.975141 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
May 17 00:41:23.975279 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
May 17 00:41:23.975414 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
May 17 00:41:23.975435 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
May 17 00:41:23.975568 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x740 took 29900 usecs
May 17 00:41:23.975595 kernel: PCI: CLS 0 bytes, default 64
May 17 00:41:23.975611 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
May 17 00:41:23.975627 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f3b49f4dd, max_idle_ns: 440795279072 ns
May 17 00:41:23.975642 kernel: Initialise system trusted keyrings
May 17 00:41:23.975657 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
May 17 00:41:23.975670 kernel: Key type asymmetric registered
May 17 00:41:23.975683 kernel: Asymmetric key parser 'x509' registered
May 17 00:41:23.975697 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 17 00:41:23.975712 kernel: io scheduler mq-deadline registered
May 17 00:41:23.975730 kernel: io scheduler kyber registered
May 17 00:41:23.975743 kernel: io scheduler bfq registered
May 17 00:41:23.975758 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 17 00:41:23.975791 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
May 17 00:41:23.975806 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
May 17 00:41:23.975820 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
May 17 00:41:23.975835 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 17 00:41:23.975849 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 17 00:41:23.975862 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 17 00:41:23.975880 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 17 00:41:23.975895 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 17 00:41:23.975909 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 17 00:41:23.976122 kernel: rtc_cmos 00:03: RTC can wake from S4
May 17 00:41:23.976257 kernel: rtc_cmos 00:03: registered as rtc0
May 17 00:41:23.976388 kernel: rtc_cmos 00:03: setting system clock to 2025-05-17T00:41:23 UTC (1747442483)
May 17 00:41:23.976526 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
May 17 00:41:23.976551 kernel: intel_pstate: CPU model not supported
May 17 00:41:23.976563 kernel: NET: Registered PF_INET6 protocol family
May 17 00:41:23.976577 kernel: Segment Routing with IPv6
May 17 00:41:23.976589 kernel: In-situ OAM (IOAM) with IPv6
May 17 00:41:23.976601 kernel: NET: Registered PF_PACKET protocol family
May 17 00:41:23.976613 kernel: Key type dns_resolver registered
May 17 00:41:23.976625 kernel: IPI shorthand broadcast: enabled
May 17 00:41:23.976637 kernel: sched_clock: Marking stable (665056426, 82750417)->(777990415, -30183572)
May 17 00:41:23.976649 kernel: registered taskstats version 1
May 17 00:41:23.976661 kernel: Loading compiled-in X.509 certificates
May 17 00:41:23.976677 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.182-flatcar: 01ca23caa8e5879327538f9287e5164b3e97ac0c'
May 17 00:41:23.976689 kernel: Key type .fscrypt registered
May 17 00:41:23.976701 kernel: Key type fscrypt-provisioning registered
May 17 00:41:23.976713 kernel: ima: No TPM chip found, activating TPM-bypass!
May 17 00:41:23.976725 kernel: ima: Allocated hash algorithm: sha1
May 17 00:41:23.976736 kernel: ima: No architecture policies found
May 17 00:41:23.976747 kernel: clk: Disabling unused clocks
May 17 00:41:23.976759 kernel: Freeing unused kernel image (initmem) memory: 47472K
May 17 00:41:23.976789 kernel: Write protecting the kernel read-only data: 28672k
May 17 00:41:23.976800 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
May 17 00:41:23.976812 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K
May 17 00:41:23.976823 kernel: Run /init as init process
May 17 00:41:23.976834 kernel: with arguments:
May 17 00:41:23.976845 kernel: /init
May 17 00:41:23.976900 kernel: with environment:
May 17 00:41:23.976914 kernel: HOME=/
May 17 00:41:23.976926 kernel: TERM=linux
May 17 00:41:23.976938 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 17 00:41:23.976958 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 17 00:41:23.976973 systemd[1]: Detected virtualization kvm.
May 17 00:41:23.976986 systemd[1]: Detected architecture x86-64.
May 17 00:41:23.976998 systemd[1]: Running in initrd.
May 17 00:41:23.977013 systemd[1]: No hostname configured, using default hostname.
May 17 00:41:23.977029 systemd[1]: Hostname set to .
May 17 00:41:23.977048 systemd[1]: Initializing machine ID from VM UUID.
May 17 00:41:23.977064 systemd[1]: Queued start job for default target initrd.target.
May 17 00:41:23.977080 systemd[1]: Started systemd-ask-password-console.path.
May 17 00:41:23.977096 systemd[1]: Reached target cryptsetup.target.
May 17 00:41:23.977113 systemd[1]: Reached target paths.target.
May 17 00:41:23.977128 systemd[1]: Reached target slices.target.
May 17 00:41:23.977144 systemd[1]: Reached target swap.target.
May 17 00:41:23.977159 systemd[1]: Reached target timers.target.
May 17 00:41:23.977178 systemd[1]: Listening on iscsid.socket.
May 17 00:41:23.977190 systemd[1]: Listening on iscsiuio.socket.
May 17 00:41:23.977203 systemd[1]: Listening on systemd-journald-audit.socket.
May 17 00:41:23.977215 systemd[1]: Listening on systemd-journald-dev-log.socket.
May 17 00:41:23.977228 systemd[1]: Listening on systemd-journald.socket.
May 17 00:41:23.977244 systemd[1]: Listening on systemd-networkd.socket.
May 17 00:41:23.977260 systemd[1]: Listening on systemd-udevd-control.socket.
May 17 00:41:23.977276 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 17 00:41:23.977290 systemd[1]: Reached target sockets.target.
May 17 00:41:23.977308 systemd[1]: Starting kmod-static-nodes.service...
May 17 00:41:23.977322 systemd[1]: Finished network-cleanup.service.
May 17 00:41:23.977339 systemd[1]: Starting systemd-fsck-usr.service...
May 17 00:41:23.977353 systemd[1]: Starting systemd-journald.service...
May 17 00:41:23.977370 systemd[1]: Starting systemd-modules-load.service...
May 17 00:41:23.977388 systemd[1]: Starting systemd-resolved.service...
May 17 00:41:23.977403 systemd[1]: Starting systemd-vconsole-setup.service...
May 17 00:41:23.977419 systemd[1]: Finished kmod-static-nodes.service.
May 17 00:41:23.977435 systemd[1]: Finished systemd-fsck-usr.service.
May 17 00:41:23.977451 kernel: audit: type=1130 audit(1747442483.941:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:23.977468 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
May 17 00:41:23.977491 systemd-journald[184]: Journal started
May 17 00:41:23.977591 systemd-journald[184]: Runtime Journal (/run/log/journal/809c5e588036478092242d10c26a867f) is 4.9M, max 39.5M, 34.5M free.
May 17 00:41:23.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:23.959972 systemd-modules-load[185]: Inserted module 'overlay'
May 17 00:41:23.999236 systemd[1]: Started systemd-journald.service.
May 17 00:41:23.981378 systemd-resolved[186]: Positive Trust Anchors:
May 17 00:41:23.981389 systemd-resolved[186]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 17 00:41:24.016834 kernel: audit: type=1130 audit(1747442483.997:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:24.016865 kernel: audit: type=1130 audit(1747442483.997:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:24.016878 kernel: audit: type=1130 audit(1747442484.006:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:24.016901 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 17 00:41:24.016914 kernel: Bridge firewalling registered
May 17 00:41:23.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:23.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:24.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:23.981426 systemd-resolved[186]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 17 00:41:24.022092 kernel: audit: type=1130 audit(1747442484.016:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:24.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:23.984223 systemd-resolved[186]: Defaulting to hostname 'linux'.
May 17 00:41:23.998497 systemd[1]: Started systemd-resolved.service.
May 17 00:41:24.003469 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
May 17 00:41:24.007167 systemd[1]: Finished systemd-vconsole-setup.service.
May 17 00:41:24.017567 systemd[1]: Reached target nss-lookup.target.
May 17 00:41:24.020893 systemd-modules-load[185]: Inserted module 'br_netfilter'
May 17 00:41:24.026091 systemd[1]: Starting dracut-cmdline-ask.service...
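The negative trust anchors listed by systemd-resolved above are domain suffixes (private-use and reverse-lookup zones) for which DNSSEC validation is skipped. As an illustrative sketch (not systemd's actual code), a resolver can test a name against such a list by checking every label suffix:

```python
# Sketch of negative-trust-anchor (NTA) suffix matching, re-implemented for
# illustration; this is not systemd-resolved's code. The anchor set below is
# an excerpt of the list logged by systemd-resolved[186] above.

NEGATIVE_ANCHORS = {
    "home.arpa", "10.in-addr.arpa", "168.192.in-addr.arpa",
    "d.f.ip6.arpa", "corp", "home", "internal", "intranet",
    "lan", "local", "private", "test",
}

def under_negative_anchor(name: str) -> bool:
    """True if `name` equals an anchor or is a subdomain of one."""
    labels = name.rstrip(".").lower().split(".")
    # Check every suffix of the name against the anchor set.
    for i in range(len(labels)):
        if ".".join(labels[i:]) in NEGATIVE_ANCHORS:
            return True
    return False

print(under_negative_anchor("printer.lan"))           # subdomain of "lan"
print(under_negative_anchor("1.0.0.10.in-addr.arpa")) # under 10.in-addr.arpa
print(under_negative_anchor("example.com"))           # validated normally
```

Names under these suffixes resolve without DNSSEC signatures being required, which is why RFC 1918 reverse zones and `.local`-style names work on private networks.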
May 17 00:41:24.046951 kernel: SCSI subsystem initialized
May 17 00:41:24.044881 systemd[1]: Finished dracut-cmdline-ask.service.
May 17 00:41:24.046527 systemd[1]: Starting dracut-cmdline.service...
May 17 00:41:24.052892 kernel: audit: type=1130 audit(1747442484.044:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:24.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:24.060706 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 17 00:41:24.060807 kernel: device-mapper: uevent: version 1.0.3
May 17 00:41:24.060829 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
May 17 00:41:24.061429 dracut-cmdline[203]: dracut-dracut-053
May 17 00:41:24.063968 systemd-modules-load[185]: Inserted module 'dm_multipath'
May 17 00:41:24.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:24.067705 dracut-cmdline[203]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=4aad7caeadb0359f379975532748a0b4ae6bb9b229507353e0f5ae84cb9335a0
May 17 00:41:24.081483 kernel: audit: type=1130 audit(1747442484.064:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:24.064764 systemd[1]: Finished systemd-modules-load.service.
May 17 00:41:24.065944 systemd[1]: Starting systemd-sysctl.service...
May 17 00:41:24.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:24.083345 systemd[1]: Finished systemd-sysctl.service.
May 17 00:41:24.086863 kernel: audit: type=1130 audit(1747442484.083:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:24.156824 kernel: Loading iSCSI transport class v2.0-870.
May 17 00:41:24.176810 kernel: iscsi: registered transport (tcp)
May 17 00:41:24.201900 kernel: iscsi: registered transport (qla4xxx)
May 17 00:41:24.201984 kernel: QLogic iSCSI HBA Driver
May 17 00:41:24.247411 systemd[1]: Finished dracut-cmdline.service.
May 17 00:41:24.249034 systemd[1]: Starting dracut-pre-udev.service...
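The kernel command line echoed by dracut above is a flat list of `key=value` tokens (with duplicates allowed, e.g. `rootflags=rw` appears twice). A minimal sketch of the kind of splitting consumers like dracut or Ignition do on `/proc/cmdline`; this is illustrative, not their actual parsers:

```python
# Illustrative kernel command-line parser (not dracut's or Ignition's code).

def parse_cmdline(cmdline: str) -> dict:
    """Split a kernel command line into {key: value}; bare flags map to ''."""
    params = {}
    for token in cmdline.split():
        key, _, value = token.partition("=")
        params[key] = value  # later duplicates (e.g. rootflags, console) win
    return params

# Excerpt of the command line reported by dracut-cmdline[203] above.
cmdline = ("BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
           "rootflags=rw mount.usrflags=ro root=LABEL=ROOT "
           "flatcar.first_boot=detected flatcar.oem.id=digitalocean")
params = parse_cmdline(cmdline)
print(params["flatcar.oem.id"])  # digitalocean
print(params["root"])            # LABEL=ROOT
```

Note that `root=LABEL=ROOT` splits only on the first `=`, so the value keeps its own `LABEL=` prefix; this is why `str.partition` is used rather than `str.split("=")`.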
May 17 00:41:24.252176 kernel: audit: type=1130 audit(1747442484.247:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:24.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:24.305849 kernel: raid6: avx2x4 gen() 16855 MB/s
May 17 00:41:24.322851 kernel: raid6: avx2x4 xor() 9706 MB/s
May 17 00:41:24.339834 kernel: raid6: avx2x2 gen() 17693 MB/s
May 17 00:41:24.356828 kernel: raid6: avx2x2 xor() 20470 MB/s
May 17 00:41:24.373834 kernel: raid6: avx2x1 gen() 13239 MB/s
May 17 00:41:24.390841 kernel: raid6: avx2x1 xor() 17968 MB/s
May 17 00:41:24.407837 kernel: raid6: sse2x4 gen() 12115 MB/s
May 17 00:41:24.424840 kernel: raid6: sse2x4 xor() 7013 MB/s
May 17 00:41:24.441839 kernel: raid6: sse2x2 gen() 12925 MB/s
May 17 00:41:24.458840 kernel: raid6: sse2x2 xor() 8489 MB/s
May 17 00:41:24.475840 kernel: raid6: sse2x1 gen() 11900 MB/s
May 17 00:41:24.493157 kernel: raid6: sse2x1 xor() 6039 MB/s
May 17 00:41:24.493232 kernel: raid6: using algorithm avx2x2 gen() 17693 MB/s
May 17 00:41:24.493245 kernel: raid6: .... xor() 20470 MB/s, rmw enabled
May 17 00:41:24.494224 kernel: raid6: using avx2x2 recovery algorithm
May 17 00:41:24.508817 kernel: xor: automatically using best checksumming function avx
May 17 00:41:24.622814 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
May 17 00:41:24.634180 systemd[1]: Finished dracut-pre-udev.service.
May 17 00:41:24.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:24.634000 audit: BPF prog-id=7 op=LOAD
May 17 00:41:24.634000 audit: BPF prog-id=8 op=LOAD
May 17 00:41:24.635495 systemd[1]: Starting systemd-udevd.service...
May 17 00:41:24.650952 systemd-udevd[385]: Using default interface naming scheme 'v252'.
May 17 00:41:24.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:24.657045 systemd[1]: Started systemd-udevd.service.
May 17 00:41:24.658442 systemd[1]: Starting dracut-pre-trigger.service...
May 17 00:41:24.674464 dracut-pre-trigger[390]: rd.md=0: removing MD RAID activation
May 17 00:41:24.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:24.710655 systemd[1]: Finished dracut-pre-trigger.service.
May 17 00:41:24.711986 systemd[1]: Starting systemd-udev-trigger.service...
May 17 00:41:24.765100 systemd[1]: Finished systemd-udev-trigger.service.
May 17 00:41:24.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:24.833085 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
May 17 00:41:24.911063 kernel: scsi host0: Virtio SCSI HBA
May 17 00:41:24.911339 kernel: cryptd: max_cpu_qlen set to 1000
May 17 00:41:24.911363 kernel: AVX2 version of gcm_enc/dec engaged.
May 17 00:41:24.911380 kernel: AES CTR mode by8 optimization enabled
May 17 00:41:24.911397 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 17 00:41:24.911415 kernel: GPT:9289727 != 125829119
May 17 00:41:24.911432 kernel: GPT:Alternate GPT header not at the end of the disk.
May 17 00:41:24.911449 kernel: GPT:9289727 != 125829119
May 17 00:41:24.911465 kernel: GPT: Use GNU Parted to correct GPT errors.
May 17 00:41:24.911481 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 17 00:41:24.911502 kernel: libata version 3.00 loaded.
May 17 00:41:24.911517 kernel: ata_piix 0000:00:01.1: version 2.13
May 17 00:41:24.917627 kernel: virtio_blk virtio5: [vdb] 976 512-byte logical blocks (500 kB/488 KiB)
May 17 00:41:24.917764 kernel: scsi host1: ata_piix
May 17 00:41:24.917910 kernel: scsi host2: ata_piix
May 17 00:41:24.918018 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
May 17 00:41:24.918036 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
May 17 00:41:24.939812 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (435)
May 17 00:41:24.945268 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
May 17 00:41:24.976554 kernel: ACPI: bus type USB registered
May 17 00:41:24.976603 kernel: usbcore: registered new interface driver usbfs
May 17 00:41:24.976621 kernel: usbcore: registered new interface driver hub
May 17 00:41:24.976638 kernel: usbcore: registered new device driver usb
May 17 00:41:24.980612 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
May 17 00:41:24.987531 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
May 17 00:41:24.990818 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
May 17 00:41:24.991876 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
May 17 00:41:24.993756 systemd[1]: Starting disk-uuid.service...
May 17 00:41:24.999953 disk-uuid[504]: Primary Header is updated.
May 17 00:41:24.999953 disk-uuid[504]: Secondary Entries is updated.
May 17 00:41:24.999953 disk-uuid[504]: Secondary Header is updated.
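The `GPT:9289727 != 125829119` warnings above are the usual sign that a small disk image was written to a larger virtual disk: the backup GPT header sits where the end of the original image was, not on the last LBA of the grown disk. The disk-uuid step that follows rewrites the headers; tools such as GNU Parted (as the kernel suggests) or `sgdisk -e` can do the same. A sketch of the underlying arithmetic, using the sizes reported in this log:

```python
# Illustrative check of the GPT backup-header placement complaint above.
# Values come from the log: virtio_blk reported 125829120 512-byte blocks
# for vda, and the primary GPT header pointed at LBA 9289727.

SECTOR_SIZE = 512
total_sectors = 125829120
alt_header_lba = 9289727

# Per the GPT layout, the backup header belongs on the very last LBA.
expected_alt_lba = total_sectors - 1
if alt_header_lba != expected_alt_lba:
    grown_by = (expected_alt_lba - alt_header_lba) * SECTOR_SIZE
    print(f"GPT:{alt_header_lba} != {expected_alt_lba}")
    print(f"disk is roughly {grown_by // 2**30} GiB larger than the image")
```

The ~55 GiB gap matches a few-GiB Flatcar image expanded onto a 60 GiB droplet volume.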
May 17 00:41:25.008812 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 17 00:41:25.012823 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 17 00:41:25.032827 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 17 00:41:25.095848 kernel: ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
May 17 00:41:25.112811 kernel: ehci-pci: EHCI PCI platform driver
May 17 00:41:25.139802 kernel: uhci_hcd: USB Universal Host Controller Interface driver
May 17 00:41:25.160034 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
May 17 00:41:25.163456 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
May 17 00:41:25.163642 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
May 17 00:41:25.163767 kernel: uhci_hcd 0000:00:01.2: irq 11, io base 0x0000c180
May 17 00:41:25.163930 kernel: hub 1-0:1.0: USB hub found
May 17 00:41:25.164108 kernel: hub 1-0:1.0: 2 ports detected
May 17 00:41:26.019459 disk-uuid[505]: The operation has completed successfully.
May 17 00:41:26.020052 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 17 00:41:26.069846 systemd[1]: disk-uuid.service: Deactivated successfully.
May 17 00:41:26.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:26.069000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:26.069943 systemd[1]: Finished disk-uuid.service.
May 17 00:41:26.071310 systemd[1]: Starting verity-setup.service...
May 17 00:41:26.091807 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
May 17 00:41:26.142199 systemd[1]: Found device dev-mapper-usr.device.
May 17 00:41:26.143839 systemd[1]: Mounting sysusr-usr.mount...
May 17 00:41:26.145029 systemd[1]: Finished verity-setup.service.
May 17 00:41:26.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:26.233819 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
May 17 00:41:26.234832 systemd[1]: Mounted sysusr-usr.mount.
May 17 00:41:26.235470 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
May 17 00:41:26.236324 systemd[1]: Starting ignition-setup.service...
May 17 00:41:26.238069 systemd[1]: Starting parse-ip-for-networkd.service...
May 17 00:41:26.253898 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 17 00:41:26.253973 kernel: BTRFS info (device vda6): using free space tree
May 17 00:41:26.253990 kernel: BTRFS info (device vda6): has skinny extents
May 17 00:41:26.268079 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 17 00:41:26.274680 systemd[1]: Finished ignition-setup.service.
May 17 00:41:26.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:26.276391 systemd[1]: Starting ignition-fetch-offline.service...
May 17 00:41:26.393694 systemd[1]: Finished parse-ip-for-networkd.service.
May 17 00:41:26.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:26.394000 audit: BPF prog-id=9 op=LOAD
May 17 00:41:26.395860 systemd[1]: Starting systemd-networkd.service...
May 17 00:41:26.422495 ignition[609]: Ignition 2.14.0
May 17 00:41:26.422507 ignition[609]: Stage: fetch-offline
May 17 00:41:26.422583 ignition[609]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:41:26.422623 ignition[609]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
May 17 00:41:26.426012 systemd-networkd[690]: lo: Link UP
May 17 00:41:26.426650 systemd-networkd[690]: lo: Gained carrier
May 17 00:41:26.427364 ignition[609]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 17 00:41:26.427514 ignition[609]: parsed url from cmdline: ""
May 17 00:41:26.427520 ignition[609]: no config URL provided
May 17 00:41:26.428941 systemd[1]: Finished ignition-fetch-offline.service.
May 17 00:41:26.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:26.427529 ignition[609]: reading system config file "/usr/lib/ignition/user.ign"
May 17 00:41:26.427544 ignition[609]: no config at "/usr/lib/ignition/user.ign"
May 17 00:41:26.430279 systemd-networkd[690]: Enumeration completed
May 17 00:41:26.427553 ignition[609]: failed to fetch config: resource requires networking
May 17 00:41:26.430826 systemd-networkd[690]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 00:41:26.427917 ignition[609]: Ignition finished successfully
May 17 00:41:26.431871 systemd-networkd[690]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
May 17 00:41:26.434000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:26.434017 systemd[1]: Started systemd-networkd.service.
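Each Ignition stage above logs a SHA512 digest of every config it parses (the same `865c03…` digest recurs because every stage re-reads the same `base.ign`). A sketch of producing that kind of digest line; the config content here is made up for illustration, so the digest value will not match the one in the log:

```python
import hashlib
import json

# Illustrative reproduction of Ignition's "parsing config with SHA512: ..."
# log line. The config blob below is a hypothetical stand-in, not the real
# base.ign, so its digest differs from the 865c03... value logged above.

config = json.dumps({"ignition": {"version": "2.14.0"}}, sort_keys=True).encode()
digest = hashlib.sha512(config).hexdigest()
print(f"parsing config with SHA512: {digest}")
```

Logging the digest rather than the config body lets operators verify which exact config bytes a boot consumed without leaking secrets into the journal.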
May 17 00:41:26.434262 systemd-networkd[690]: eth1: Link UP
May 17 00:41:26.434268 systemd-networkd[690]: eth1: Gained carrier
May 17 00:41:26.435260 systemd[1]: Reached target network.target.
May 17 00:41:26.437210 systemd[1]: Starting ignition-fetch.service...
May 17 00:41:26.439807 systemd-networkd[690]: eth0: Link UP
May 17 00:41:26.439811 systemd-networkd[690]: eth0: Gained carrier
May 17 00:41:26.447319 systemd[1]: Starting iscsiuio.service...
May 17 00:41:26.459267 ignition[692]: Ignition 2.14.0
May 17 00:41:26.459282 ignition[692]: Stage: fetch
May 17 00:41:26.459517 ignition[692]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:41:26.459565 ignition[692]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
May 17 00:41:26.462506 ignition[692]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 17 00:41:26.462700 ignition[692]: parsed url from cmdline: ""
May 17 00:41:26.462909 systemd-networkd[690]: eth1: DHCPv4 address 10.124.0.21/20 acquired from 169.254.169.253
May 17 00:41:26.462705 ignition[692]: no config URL provided
May 17 00:41:26.462711 ignition[692]: reading system config file "/usr/lib/ignition/user.ign"
May 17 00:41:26.462724 ignition[692]: no config at "/usr/lib/ignition/user.ign"
May 17 00:41:26.462757 ignition[692]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
May 17 00:41:26.465940 systemd-networkd[690]: eth0: DHCPv4 address 64.23.148.252/20, gateway 64.23.144.1 acquired from 169.254.169.253
May 17 00:41:26.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:26.469419 systemd[1]: Started iscsiuio.service.
May 17 00:41:26.471670 systemd[1]: Starting iscsid.service...
May 17 00:41:26.477728 iscsid[701]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
May 17 00:41:26.477728 iscsid[701]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
May 17 00:41:26.477728 iscsid[701]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
May 17 00:41:26.477728 iscsid[701]: If using hardware iscsi like qla4xxx this message can be ignored.
May 17 00:41:26.477728 iscsid[701]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
May 17 00:41:26.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:26.484682 iscsid[701]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
May 17 00:41:26.480075 systemd[1]: Started iscsid.service.
May 17 00:41:26.482539 systemd[1]: Starting dracut-initqueue.service...
May 17 00:41:26.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:26.501194 systemd[1]: Finished dracut-initqueue.service.
May 17 00:41:26.501711 systemd[1]: Reached target remote-fs-pre.target.
May 17 00:41:26.502027 systemd[1]: Reached target remote-cryptsetup.target.
May 17 00:41:26.502332 systemd[1]: Reached target remote-fs.target.
May 17 00:41:26.504150 systemd[1]: Starting dracut-pre-mount.service...
May 17 00:41:26.516395 systemd[1]: Finished dracut-pre-mount.service.
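The iscsid warnings above are harmless on a droplet with no iSCSI targets, but if software iSCSI were in use, the missing file would look something like the following. The IQN below is a made-up example in the `iqn.yyyy-mm.reversed-domain:identifier` shape that iscsid describes (compare its own example, `iqn.2001-04.com.redhat:fc6`):

```
# /etc/iscsi/initiatorname.iscsi  (hypothetical example, not from this system)
InitiatorName=iqn.2001-04.com.example:boot-droplet
```

The date and reversed domain name identify the naming authority; the optional `:identifier` suffix distinguishes initiators within it.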
May 17 00:41:26.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:26.519911 ignition[692]: GET result: OK
May 17 00:41:26.520106 ignition[692]: parsing config with SHA512: 8453d7f29d4c6296ce0755b8642bb7d08d7d25a8e532bcb499004d1f980a97070318fbec7b8e86a59fc08bf97f3a5e3d5eaf06f88c0005aad80edd3444a2a035
May 17 00:41:26.530031 unknown[692]: fetched base config from "system"
May 17 00:41:26.530047 unknown[692]: fetched base config from "system"
May 17 00:41:26.530057 unknown[692]: fetched user config from "digitalocean"
May 17 00:41:26.530847 ignition[692]: fetch: fetch complete
May 17 00:41:26.530857 ignition[692]: fetch: fetch passed
May 17 00:41:26.533264 systemd[1]: Finished ignition-fetch.service.
May 17 00:41:26.530931 ignition[692]: Ignition finished successfully
May 17 00:41:26.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:26.535036 systemd[1]: Starting ignition-kargs.service...
May 17 00:41:26.552856 ignition[715]: Ignition 2.14.0
May 17 00:41:26.552871 ignition[715]: Stage: kargs
May 17 00:41:26.553078 ignition[715]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:41:26.553108 ignition[715]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
May 17 00:41:26.556547 ignition[715]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 17 00:41:26.559298 ignition[715]: kargs: kargs passed
May 17 00:41:26.559377 ignition[715]: Ignition finished successfully
May 17 00:41:26.560449 systemd[1]: Finished ignition-kargs.service.
May 17 00:41:26.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:26.562176 systemd[1]: Starting ignition-disks.service...
May 17 00:41:26.577572 ignition[721]: Ignition 2.14.0
May 17 00:41:26.577583 ignition[721]: Stage: disks
May 17 00:41:26.577823 ignition[721]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:41:26.577853 ignition[721]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
May 17 00:41:26.580377 ignition[721]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 17 00:41:26.582728 ignition[721]: disks: disks passed
May 17 00:41:26.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:26.583670 systemd[1]: Finished ignition-disks.service.
May 17 00:41:26.582814 ignition[721]: Ignition finished successfully
May 17 00:41:26.584178 systemd[1]: Reached target initrd-root-device.target.
May 17 00:41:26.584824 systemd[1]: Reached target local-fs-pre.target.
May 17 00:41:26.585492 systemd[1]: Reached target local-fs.target.
May 17 00:41:26.586167 systemd[1]: Reached target sysinit.target.
May 17 00:41:26.586939 systemd[1]: Reached target basic.target.
May 17 00:41:26.588764 systemd[1]: Starting systemd-fsck-root.service...
May 17 00:41:26.608255 systemd-fsck[729]: ROOT: clean, 619/553520 files, 56023/553472 blocks
May 17 00:41:26.611461 systemd[1]: Finished systemd-fsck-root.service.
May 17 00:41:26.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:26.612884 systemd[1]: Mounting sysroot.mount...
May 17 00:41:26.623212 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
May 17 00:41:26.622419 systemd[1]: Mounted sysroot.mount.
May 17 00:41:26.622829 systemd[1]: Reached target initrd-root-fs.target.
May 17 00:41:26.625105 systemd[1]: Mounting sysroot-usr.mount...
May 17 00:41:26.626286 systemd[1]: Starting flatcar-digitalocean-network.service...
May 17 00:41:26.630504 systemd[1]: Starting flatcar-metadata-hostname.service...
May 17 00:41:26.630953 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 17 00:41:26.631000 systemd[1]: Reached target ignition-diskful.target.
May 17 00:41:26.632573 systemd[1]: Mounted sysroot-usr.mount.
May 17 00:41:26.634337 systemd[1]: Starting initrd-setup-root.service...
May 17 00:41:26.643677 initrd-setup-root[741]: cut: /sysroot/etc/passwd: No such file or directory
May 17 00:41:26.660241 initrd-setup-root[749]: cut: /sysroot/etc/group: No such file or directory
May 17 00:41:26.667953 initrd-setup-root[757]: cut: /sysroot/etc/shadow: No such file or directory
May 17 00:41:26.675587 initrd-setup-root[765]: cut: /sysroot/etc/gshadow: No such file or directory
May 17 00:41:26.756210 systemd[1]: Finished initrd-setup-root.service.
May 17 00:41:26.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:26.758280 systemd[1]: Starting ignition-mount.service...
May 17 00:41:26.759598 systemd[1]: Starting sysroot-boot.service...
May 17 00:41:26.764692 coreos-metadata[735]: May 17 00:41:26.763 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
May 17 00:41:26.772492 bash[786]: umount: /sysroot/usr/share/oem: not mounted.
May 17 00:41:26.782829 coreos-metadata[735]: May 17 00:41:26.779 INFO Fetch successful
May 17 00:41:26.786940 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully.
May 17 00:41:26.787092 systemd[1]: Finished flatcar-digitalocean-network.service.
May 17 00:41:26.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-digitalocean-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:26.787000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-digitalocean-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:26.799876 coreos-metadata[736]: May 17 00:41:26.797 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
May 17 00:41:26.801470 ignition[787]: INFO : Ignition 2.14.0
May 17 00:41:26.802498 ignition[787]: INFO : Stage: mount
May 17 00:41:26.803108 ignition[787]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:41:26.803650 ignition[787]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
May 17 00:41:26.806549 ignition[787]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 17 00:41:26.809232 ignition[787]: INFO : mount: mount passed
May 17 00:41:26.809624 ignition[787]: INFO : Ignition finished successfully
May 17 00:41:26.810897 systemd[1]: Finished ignition-mount.service.
May 17 00:41:26.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:26.812593 coreos-metadata[736]: May 17 00:41:26.812 INFO Fetch successful
May 17 00:41:26.816967 coreos-metadata[736]: May 17 00:41:26.816 INFO wrote hostname ci-3510.3.7-n-8f6d6c1823 to /sysroot/etc/hostname
May 17 00:41:26.818578 systemd[1]: Finished flatcar-metadata-hostname.service.
May 17 00:41:26.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:26.821163 systemd[1]: Finished sysroot-boot.service.
May 17 00:41:26.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:27.160993 systemd[1]: Mounting sysroot-usr-share-oem.mount...
May 17 00:41:27.170828 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (795)
May 17 00:41:27.172922 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 17 00:41:27.172988 kernel: BTRFS info (device vda6): using free space tree
May 17 00:41:27.173002 kernel: BTRFS info (device vda6): has skinny extents
May 17 00:41:27.179358 systemd[1]: Mounted sysroot-usr-share-oem.mount.
May 17 00:41:27.187120 systemd[1]: Starting ignition-files.service...
May 17 00:41:27.205951 ignition[815]: INFO : Ignition 2.14.0
May 17 00:41:27.205951 ignition[815]: INFO : Stage: files
May 17 00:41:27.207087 ignition[815]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:41:27.207087 ignition[815]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
May 17 00:41:27.208378 ignition[815]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 17 00:41:27.213312 ignition[815]: DEBUG : files: compiled without relabeling support, skipping
May 17 00:41:27.215008 ignition[815]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 17 00:41:27.215008 ignition[815]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 17 00:41:27.217491 ignition[815]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 17 00:41:27.218418 ignition[815]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 17 00:41:27.219965 unknown[815]: wrote ssh authorized keys file for user: core
May 17 00:41:27.222006 ignition[815]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 17 00:41:27.222006 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 17 00:41:27.222006 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
May 17 00:41:27.268927 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 17 00:41:27.459235 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 17 00:41:27.460124 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 17 00:41:27.460124 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
May 17 00:41:27.719188 systemd-networkd[690]: eth0: Gained IPv6LL
May 17 00:41:27.899519 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 17 00:41:27.984639 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 17 00:41:27.984639 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 17 00:41:27.986109 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 17 00:41:27.986109 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 17 00:41:27.986109 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 17 00:41:27.986109 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 17 00:41:27.986109 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 17 00:41:27.986109 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 17 00:41:27.986109 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 17 00:41:27.986109 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 17 00:41:27.986109 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 17 00:41:27.986109 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 17 00:41:27.986109 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 17 00:41:27.986109 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 17 00:41:27.986109 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
May 17 00:41:28.423071 systemd-networkd[690]: eth1: Gained IPv6LL
May 17 00:41:28.479009 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 17 00:41:28.772999 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 17 00:41:28.772999 ignition[815]: INFO : files: op(c): [started] processing unit "coreos-metadata-sshkeys@.service"
May 17 00:41:28.772999 ignition[815]: INFO : files: op(c): [finished] processing unit "coreos-metadata-sshkeys@.service"
May 17 00:41:28.772999 ignition[815]: INFO : files: op(d): [started] processing unit "prepare-helm.service"
May 17 00:41:28.776715 ignition[815]: INFO : files: op(d): op(e): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 17 00:41:28.776715 ignition[815]: INFO : files: op(d): op(e): [finished] writing unit "prepare-helm.service" at
"/sysroot/etc/systemd/system/prepare-helm.service" May 17 00:41:28.776715 ignition[815]: INFO : files: op(d): [finished] processing unit "prepare-helm.service" May 17 00:41:28.776715 ignition[815]: INFO : files: op(f): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " May 17 00:41:28.776715 ignition[815]: INFO : files: op(f): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " May 17 00:41:28.776715 ignition[815]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" May 17 00:41:28.776715 ignition[815]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" May 17 00:41:28.782601 ignition[815]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" May 17 00:41:28.782601 ignition[815]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" May 17 00:41:28.782601 ignition[815]: INFO : files: files passed May 17 00:41:28.782601 ignition[815]: INFO : Ignition finished successfully May 17 00:41:28.790985 kernel: kauditd_printk_skb: 28 callbacks suppressed May 17 00:41:28.791013 kernel: audit: type=1130 audit(1747442488.783:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:28.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:28.784319 systemd[1]: Finished ignition-files.service. May 17 00:41:28.786096 systemd[1]: Starting initrd-setup-root-after-ignition.service... May 17 00:41:28.791833 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). 
May 17 00:41:28.794439 systemd[1]: Starting ignition-quench.service... May 17 00:41:28.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:28.797723 systemd[1]: ignition-quench.service: Deactivated successfully. May 17 00:41:28.807981 kernel: audit: type=1130 audit(1747442488.797:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:28.808026 kernel: audit: type=1131 audit(1747442488.797:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:28.808039 kernel: audit: type=1130 audit(1747442488.803:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:28.797000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:28.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:28.808147 initrd-setup-root-after-ignition[840]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 17 00:41:28.797874 systemd[1]: Finished ignition-quench.service. May 17 00:41:28.803325 systemd[1]: Finished initrd-setup-root-after-ignition.service. May 17 00:41:28.804172 systemd[1]: Reached target ignition-complete.target. 
May 17 00:41:28.809309 systemd[1]: Starting initrd-parse-etc.service... May 17 00:41:28.832583 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 17 00:41:28.832701 systemd[1]: Finished initrd-parse-etc.service. May 17 00:41:28.840152 kernel: audit: type=1130 audit(1747442488.832:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:28.840192 kernel: audit: type=1131 audit(1747442488.832:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:28.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:28.832000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:28.833734 systemd[1]: Reached target initrd-fs.target. May 17 00:41:28.840475 systemd[1]: Reached target initrd.target. May 17 00:41:28.841277 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. May 17 00:41:28.842506 systemd[1]: Starting dracut-pre-pivot.service... May 17 00:41:28.860684 systemd[1]: Finished dracut-pre-pivot.service. May 17 00:41:28.864740 kernel: audit: type=1130 audit(1747442488.860:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:28.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:41:28.862872 systemd[1]: Starting initrd-cleanup.service... May 17 00:41:28.877372 systemd[1]: Stopped target nss-lookup.target. May 17 00:41:28.878305 systemd[1]: Stopped target remote-cryptsetup.target. May 17 00:41:28.879164 systemd[1]: Stopped target timers.target. May 17 00:41:28.879962 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 17 00:41:28.880554 systemd[1]: Stopped dracut-pre-pivot.service. May 17 00:41:28.886000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:28.890810 kernel: audit: type=1131 audit(1747442488.886:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:28.890095 systemd[1]: Stopped target initrd.target. May 17 00:41:28.890588 systemd[1]: Stopped target basic.target. May 17 00:41:28.891094 systemd[1]: Stopped target ignition-complete.target. May 17 00:41:28.891730 systemd[1]: Stopped target ignition-diskful.target. May 17 00:41:28.892470 systemd[1]: Stopped target initrd-root-device.target. May 17 00:41:28.893013 systemd[1]: Stopped target remote-fs.target. May 17 00:41:28.893673 systemd[1]: Stopped target remote-fs-pre.target. May 17 00:41:28.894259 systemd[1]: Stopped target sysinit.target. May 17 00:41:28.894938 systemd[1]: Stopped target local-fs.target. May 17 00:41:28.895501 systemd[1]: Stopped target local-fs-pre.target. May 17 00:41:28.896335 systemd[1]: Stopped target swap.target. May 17 00:41:28.899922 kernel: audit: type=1131 audit(1747442488.896:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:41:28.896000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:28.896896 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 17 00:41:28.897088 systemd[1]: Stopped dracut-pre-mount.service. May 17 00:41:28.897709 systemd[1]: Stopped target cryptsetup.target. May 17 00:41:28.903393 kernel: audit: type=1131 audit(1747442488.900:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:28.900000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:28.900249 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 17 00:41:28.903000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:28.900493 systemd[1]: Stopped dracut-initqueue.service. May 17 00:41:28.901370 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 17 00:41:28.904000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:28.901512 systemd[1]: Stopped initrd-setup-root-after-ignition.service. May 17 00:41:28.904000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:41:28.903957 systemd[1]: ignition-files.service: Deactivated successfully. May 17 00:41:28.904120 systemd[1]: Stopped ignition-files.service. May 17 00:41:28.904893 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. May 17 00:41:28.905020 systemd[1]: Stopped flatcar-metadata-hostname.service. May 17 00:41:28.906736 systemd[1]: Stopping ignition-mount.service... May 17 00:41:28.913721 iscsid[701]: iscsid shutting down. May 17 00:41:28.907701 systemd[1]: Stopping iscsid.service... May 17 00:41:28.915218 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 17 00:41:28.915391 systemd[1]: Stopped kmod-static-nodes.service. May 17 00:41:28.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:28.917000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:28.918000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:28.920000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:28.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:28.923000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:41:28.917263 systemd[1]: Stopping sysroot-boot.service... May 17 00:41:28.928000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:28.917607 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 17 00:41:28.932741 ignition[853]: INFO : Ignition 2.14.0 May 17 00:41:28.932741 ignition[853]: INFO : Stage: umount May 17 00:41:28.932741 ignition[853]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 00:41:28.932741 ignition[853]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c May 17 00:41:28.932741 ignition[853]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" May 17 00:41:28.932741 ignition[853]: INFO : umount: umount passed May 17 00:41:28.932741 ignition[853]: INFO : Ignition finished successfully May 17 00:41:28.933000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:28.917746 systemd[1]: Stopped systemd-udev-trigger.service. May 17 00:41:28.940000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:28.941000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:28.941000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:41:28.942000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:28.918311 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 17 00:41:28.918432 systemd[1]: Stopped dracut-pre-trigger.service. May 17 00:41:28.921025 systemd[1]: iscsid.service: Deactivated successfully. May 17 00:41:28.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:28.921144 systemd[1]: Stopped iscsid.service. May 17 00:41:28.922746 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 17 00:41:28.923831 systemd[1]: Finished initrd-cleanup.service. May 17 00:41:28.925157 systemd[1]: Stopping iscsiuio.service... May 17 00:41:28.928662 systemd[1]: iscsiuio.service: Deactivated successfully. May 17 00:41:28.928756 systemd[1]: Stopped iscsiuio.service. May 17 00:41:28.932883 systemd[1]: ignition-mount.service: Deactivated successfully. May 17 00:41:28.932969 systemd[1]: Stopped ignition-mount.service. May 17 00:41:28.934479 systemd[1]: ignition-disks.service: Deactivated successfully. May 17 00:41:28.934531 systemd[1]: Stopped ignition-disks.service. May 17 00:41:28.951000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:28.941478 systemd[1]: ignition-kargs.service: Deactivated successfully. May 17 00:41:28.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:28.941537 systemd[1]: Stopped ignition-kargs.service. 
May 17 00:41:28.954000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:28.941938 systemd[1]: ignition-fetch.service: Deactivated successfully. May 17 00:41:28.942022 systemd[1]: Stopped ignition-fetch.service. May 17 00:41:28.942366 systemd[1]: Stopped target network.target. May 17 00:41:28.942662 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 17 00:41:28.942704 systemd[1]: Stopped ignition-fetch-offline.service. May 17 00:41:28.959000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:28.943043 systemd[1]: Stopped target paths.target. May 17 00:41:28.960000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:28.943303 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 17 00:41:28.961000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:28.943851 systemd[1]: Stopped systemd-ask-password-console.path. May 17 00:41:28.945319 systemd[1]: Stopped target slices.target. May 17 00:41:28.945574 systemd[1]: Stopped target sockets.target. May 17 00:41:28.945899 systemd[1]: iscsid.socket: Deactivated successfully. May 17 00:41:28.945941 systemd[1]: Closed iscsid.socket. May 17 00:41:28.968000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:41:28.946215 systemd[1]: iscsiuio.socket: Deactivated successfully. May 17 00:41:28.946243 systemd[1]: Closed iscsiuio.socket. May 17 00:41:28.946509 systemd[1]: ignition-setup.service: Deactivated successfully. May 17 00:41:28.946547 systemd[1]: Stopped ignition-setup.service. May 17 00:41:28.948411 systemd[1]: Stopping systemd-networkd.service... May 17 00:41:28.971000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:28.972000 audit: BPF prog-id=6 op=UNLOAD May 17 00:41:28.973000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:28.973000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:28.949129 systemd[1]: Stopping systemd-resolved.service... May 17 00:41:28.974000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:28.951133 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 17 00:41:28.977000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:28.978000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:28.951703 systemd[1]: sysroot-boot.service: Deactivated successfully. 
May 17 00:41:28.951830 systemd[1]: Stopped sysroot-boot.service. May 17 00:41:28.951882 systemd-networkd[690]: eth0: DHCPv6 lease lost May 17 00:41:28.952941 systemd-networkd[690]: eth1: DHCPv6 lease lost May 17 00:41:28.980000 audit: BPF prog-id=9 op=UNLOAD May 17 00:41:28.953680 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 17 00:41:28.953905 systemd[1]: Stopped initrd-setup-root.service. May 17 00:41:28.955054 systemd[1]: systemd-networkd.service: Deactivated successfully. May 17 00:41:28.955151 systemd[1]: Stopped systemd-networkd.service. May 17 00:41:28.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:28.983000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:28.956098 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 17 00:41:28.956132 systemd[1]: Closed systemd-networkd.socket. May 17 00:41:28.957730 systemd[1]: Stopping network-cleanup.service... May 17 00:41:28.959739 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 17 00:41:28.959829 systemd[1]: Stopped parse-ip-for-networkd.service. May 17 00:41:28.960730 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 17 00:41:28.960902 systemd[1]: Stopped systemd-sysctl.service. May 17 00:41:28.961426 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 17 00:41:28.961479 systemd[1]: Stopped systemd-modules-load.service. May 17 00:41:28.966324 systemd[1]: Stopping systemd-udevd.service... May 17 00:41:28.967952 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 17 00:41:28.968513 systemd[1]: systemd-resolved.service: Deactivated successfully. 
May 17 00:41:28.968602 systemd[1]: Stopped systemd-resolved.service. May 17 00:41:28.971918 systemd[1]: systemd-udevd.service: Deactivated successfully. May 17 00:41:28.972094 systemd[1]: Stopped systemd-udevd.service. May 17 00:41:28.972912 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 17 00:41:28.972963 systemd[1]: Closed systemd-udevd-control.socket. May 17 00:41:28.973313 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 17 00:41:28.973342 systemd[1]: Closed systemd-udevd-kernel.socket. May 17 00:41:28.973689 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 17 00:41:28.973735 systemd[1]: Stopped dracut-pre-udev.service. May 17 00:41:28.974090 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 17 00:41:28.974124 systemd[1]: Stopped dracut-cmdline.service. May 17 00:41:28.974686 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 17 00:41:28.974720 systemd[1]: Stopped dracut-cmdline-ask.service. May 17 00:41:28.976016 systemd[1]: Starting initrd-udevadm-cleanup-db.service... May 17 00:41:28.977413 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:41:28.977478 systemd[1]: Stopped systemd-vconsole-setup.service. May 17 00:41:28.978308 systemd[1]: network-cleanup.service: Deactivated successfully. May 17 00:41:28.978454 systemd[1]: Stopped network-cleanup.service. May 17 00:41:28.983597 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 17 00:41:28.983737 systemd[1]: Finished initrd-udevadm-cleanup-db.service. May 17 00:41:28.984353 systemd[1]: Reached target initrd-switch-root.target. May 17 00:41:28.985811 systemd[1]: Starting initrd-switch-root.service... May 17 00:41:29.001225 systemd[1]: Switching root. May 17 00:41:29.025252 systemd-journald[184]: Journal stopped May 17 00:41:32.607450 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). 
May 17 00:41:32.607530 kernel: SELinux: Class mctp_socket not defined in policy. May 17 00:41:32.607554 kernel: SELinux: Class anon_inode not defined in policy. May 17 00:41:32.607567 kernel: SELinux: the above unknown classes and permissions will be allowed May 17 00:41:32.607586 kernel: SELinux: policy capability network_peer_controls=1 May 17 00:41:32.607611 kernel: SELinux: policy capability open_perms=1 May 17 00:41:32.607624 kernel: SELinux: policy capability extended_socket_class=1 May 17 00:41:32.607636 kernel: SELinux: policy capability always_check_network=0 May 17 00:41:32.607648 kernel: SELinux: policy capability cgroup_seclabel=1 May 17 00:41:32.607660 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 17 00:41:32.607672 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 17 00:41:32.607683 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 17 00:41:32.607695 systemd[1]: Successfully loaded SELinux policy in 52.872ms. May 17 00:41:32.607722 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.902ms. May 17 00:41:32.607741 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 17 00:41:32.607754 systemd[1]: Detected virtualization kvm. May 17 00:41:32.609818 systemd[1]: Detected architecture x86-64. May 17 00:41:32.609862 systemd[1]: Detected first boot. May 17 00:41:32.609876 systemd[1]: Hostname set to . May 17 00:41:32.609889 systemd[1]: Initializing machine ID from VM UUID. May 17 00:41:32.609902 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). May 17 00:41:32.609930 systemd[1]: Populated /etc with preset unit settings. 
May 17 00:41:32.609955 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:41:32.609976 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 00:41:32.609990 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:41:32.610014 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 17 00:41:32.610028 systemd[1]: Stopped initrd-switch-root.service. May 17 00:41:32.610047 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 17 00:41:32.610066 systemd[1]: Created slice system-addon\x2dconfig.slice. May 17 00:41:32.610088 systemd[1]: Created slice system-addon\x2drun.slice. May 17 00:41:32.610106 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. May 17 00:41:32.610120 systemd[1]: Created slice system-getty.slice. May 17 00:41:32.610137 systemd[1]: Created slice system-modprobe.slice. May 17 00:41:32.610154 systemd[1]: Created slice system-serial\x2dgetty.slice. May 17 00:41:32.610166 systemd[1]: Created slice system-system\x2dcloudinit.slice. May 17 00:41:32.610178 systemd[1]: Created slice system-systemd\x2dfsck.slice. May 17 00:41:32.610190 systemd[1]: Created slice user.slice. May 17 00:41:32.610203 systemd[1]: Started systemd-ask-password-console.path. May 17 00:41:32.610222 systemd[1]: Started systemd-ask-password-wall.path. May 17 00:41:32.610234 systemd[1]: Set up automount boot.automount. May 17 00:41:32.610248 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. May 17 00:41:32.610263 systemd[1]: Stopped target initrd-switch-root.target. 
May 17 00:41:32.610281 systemd[1]: Stopped target initrd-fs.target. May 17 00:41:32.610297 systemd[1]: Stopped target initrd-root-fs.target. May 17 00:41:32.610319 systemd[1]: Reached target integritysetup.target. May 17 00:41:32.610332 systemd[1]: Reached target remote-cryptsetup.target. May 17 00:41:32.610351 systemd[1]: Reached target remote-fs.target. May 17 00:41:32.610364 systemd[1]: Reached target slices.target. May 17 00:41:32.610377 systemd[1]: Reached target swap.target. May 17 00:41:32.610389 systemd[1]: Reached target torcx.target. May 17 00:41:32.610405 systemd[1]: Reached target veritysetup.target. May 17 00:41:32.610440 systemd[1]: Listening on systemd-coredump.socket. May 17 00:41:32.610458 systemd[1]: Listening on systemd-initctl.socket. May 17 00:41:32.610477 systemd[1]: Listening on systemd-networkd.socket. May 17 00:41:32.610500 systemd[1]: Listening on systemd-udevd-control.socket. May 17 00:41:32.610512 systemd[1]: Listening on systemd-udevd-kernel.socket. May 17 00:41:32.610525 systemd[1]: Listening on systemd-userdbd.socket. May 17 00:41:32.610537 systemd[1]: Mounting dev-hugepages.mount... May 17 00:41:32.610549 systemd[1]: Mounting dev-mqueue.mount... May 17 00:41:32.610562 systemd[1]: Mounting media.mount... May 17 00:41:32.610577 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:41:32.610592 systemd[1]: Mounting sys-kernel-debug.mount... May 17 00:41:32.610606 systemd[1]: Mounting sys-kernel-tracing.mount... May 17 00:41:32.610628 systemd[1]: Mounting tmp.mount... May 17 00:41:32.610641 systemd[1]: Starting flatcar-tmpfiles.service... May 17 00:41:32.610653 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:41:32.610666 systemd[1]: Starting kmod-static-nodes.service... May 17 00:41:32.610685 systemd[1]: Starting modprobe@configfs.service... May 17 00:41:32.610697 systemd[1]: Starting modprobe@dm_mod.service... 
May 17 00:41:32.610709 systemd[1]: Starting modprobe@drm.service... May 17 00:41:32.610721 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:41:32.610734 systemd[1]: Starting modprobe@fuse.service... May 17 00:41:32.610751 systemd[1]: Starting modprobe@loop.service... May 17 00:41:32.610764 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 17 00:41:32.610811 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 17 00:41:32.610824 systemd[1]: Stopped systemd-fsck-root.service. May 17 00:41:32.610836 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 17 00:41:32.610848 systemd[1]: Stopped systemd-fsck-usr.service. May 17 00:41:32.610860 systemd[1]: Stopped systemd-journald.service. May 17 00:41:32.610873 systemd[1]: Starting systemd-journald.service... May 17 00:41:32.610886 systemd[1]: Starting systemd-modules-load.service... May 17 00:41:32.610903 systemd[1]: Starting systemd-network-generator.service... May 17 00:41:32.610916 systemd[1]: Starting systemd-remount-fs.service... May 17 00:41:32.610928 systemd[1]: Starting systemd-udev-trigger.service... May 17 00:41:32.610942 systemd[1]: verity-setup.service: Deactivated successfully. May 17 00:41:32.610962 kernel: loop: module loaded May 17 00:41:32.610982 systemd[1]: Stopped verity-setup.service. May 17 00:41:32.610998 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:41:32.611017 systemd[1]: Mounted dev-hugepages.mount. May 17 00:41:32.611036 systemd[1]: Mounted dev-mqueue.mount. May 17 00:41:32.611056 systemd[1]: Mounted media.mount. May 17 00:41:32.611069 systemd[1]: Mounted sys-kernel-debug.mount. May 17 00:41:32.611082 systemd[1]: Mounted sys-kernel-tracing.mount. May 17 00:41:32.611094 systemd[1]: Mounted tmp.mount. May 17 00:41:32.611107 systemd[1]: Finished kmod-static-nodes.service. 
May 17 00:41:32.611119 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 17 00:41:32.611132 systemd[1]: Finished modprobe@configfs.service. May 17 00:41:32.611144 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:41:32.611157 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:41:32.611175 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 00:41:32.611187 systemd[1]: Finished modprobe@drm.service. May 17 00:41:32.611200 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:41:32.611212 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:41:32.611224 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:41:32.611242 systemd[1]: Finished modprobe@loop.service. May 17 00:41:32.611255 systemd[1]: Finished flatcar-tmpfiles.service. May 17 00:41:32.611279 systemd[1]: Finished systemd-modules-load.service. May 17 00:41:32.611295 systemd[1]: Finished systemd-network-generator.service. May 17 00:41:32.611310 systemd[1]: Finished systemd-remount-fs.service. May 17 00:41:32.611325 systemd[1]: Reached target network-pre.target. May 17 00:41:32.611343 kernel: fuse: init (API version 7.34) May 17 00:41:32.611358 systemd[1]: Mounting sys-kernel-config.mount... May 17 00:41:32.611375 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 17 00:41:32.611401 systemd[1]: Starting systemd-hwdb-update.service... May 17 00:41:32.611421 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:41:32.611438 systemd[1]: Starting systemd-random-seed.service... May 17 00:41:32.611451 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 00:41:32.611466 systemd[1]: Starting systemd-sysctl.service... May 17 00:41:32.611505 systemd[1]: Starting systemd-sysusers.service... 
May 17 00:41:32.611524 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 17 00:41:32.611546 systemd[1]: Finished modprobe@fuse.service. May 17 00:41:32.611560 systemd[1]: Mounted sys-kernel-config.mount. May 17 00:41:32.611578 systemd[1]: Finished systemd-random-seed.service. May 17 00:41:32.611596 systemd[1]: Reached target first-boot-complete.target. May 17 00:41:32.611615 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 17 00:41:32.611641 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 17 00:41:32.611661 systemd-journald[960]: Journal started May 17 00:41:32.611739 systemd-journald[960]: Runtime Journal (/run/log/journal/809c5e588036478092242d10c26a867f) is 4.9M, max 39.5M, 34.5M free. May 17 00:41:29.187000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 May 17 00:41:29.239000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 17 00:41:29.239000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 17 00:41:29.239000 audit: BPF prog-id=10 op=LOAD May 17 00:41:29.239000 audit: BPF prog-id=10 op=UNLOAD May 17 00:41:29.239000 audit: BPF prog-id=11 op=LOAD May 17 00:41:29.239000 audit: BPF prog-id=11 op=UNLOAD May 17 00:41:29.350000 audit[886]: AVC avc: denied { associate } for pid=886 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" May 17 00:41:29.350000 audit[886]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001178c2 a1=c00002ae58 a2=c000029100 a3=32 items=0 ppid=869 pid=886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:41:29.350000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 17 00:41:29.352000 audit[886]: AVC avc: denied { associate } for pid=886 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 May 17 00:41:29.352000 audit[886]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c000117999 a2=1ed a3=0 items=2 ppid=869 pid=886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:41:29.352000 audit: CWD cwd="/" May 17 00:41:29.352000 audit: PATH item=0 name=(null) inode=2 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:29.352000 audit: PATH item=1 name=(null) inode=3 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:29.352000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 17 00:41:32.365000 audit: BPF prog-id=12 op=LOAD May 17 00:41:32.365000 audit: BPF prog-id=3 op=UNLOAD May 17 00:41:32.365000 audit: BPF prog-id=13 
op=LOAD May 17 00:41:32.365000 audit: BPF prog-id=14 op=LOAD May 17 00:41:32.365000 audit: BPF prog-id=4 op=UNLOAD May 17 00:41:32.365000 audit: BPF prog-id=5 op=UNLOAD May 17 00:41:32.367000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:32.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:32.370000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:32.373000 audit: BPF prog-id=12 op=UNLOAD May 17 00:41:32.466000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:32.468000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:32.471000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:32.471000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:41:32.472000 audit: BPF prog-id=15 op=LOAD May 17 00:41:32.472000 audit: BPF prog-id=16 op=LOAD May 17 00:41:32.472000 audit: BPF prog-id=17 op=LOAD May 17 00:41:32.472000 audit: BPF prog-id=13 op=UNLOAD May 17 00:41:32.472000 audit: BPF prog-id=14 op=UNLOAD May 17 00:41:32.501000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:32.521000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:32.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:32.525000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:32.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:32.529000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:32.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:41:32.532000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:32.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:32.537000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:32.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:32.541000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:32.544000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:32.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:32.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:41:32.552000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:32.577000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 17 00:41:32.577000 audit[960]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffe4fffd080 a2=4000 a3=7ffe4fffd11c items=0 ppid=1 pid=960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:41:32.613977 systemd[1]: Started systemd-journald.service. May 17 00:41:32.577000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" May 17 00:41:32.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:32.591000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:32.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:32.613000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:41:29.347220 /usr/lib/systemd/system-generators/torcx-generator[886]: time="2025-05-17T00:41:29Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 00:41:32.363709 systemd[1]: Queued start job for default target multi-user.target. May 17 00:41:29.347812 /usr/lib/systemd/system-generators/torcx-generator[886]: time="2025-05-17T00:41:29Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 17 00:41:32.363724 systemd[1]: Unnecessary job was removed for dev-vda6.device. May 17 00:41:29.347834 /usr/lib/systemd/system-generators/torcx-generator[886]: time="2025-05-17T00:41:29Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 17 00:41:32.367602 systemd[1]: systemd-journald.service: Deactivated successfully. May 17 00:41:29.347879 /usr/lib/systemd/system-generators/torcx-generator[886]: time="2025-05-17T00:41:29Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" May 17 00:41:32.615327 systemd[1]: Starting systemd-journal-flush.service... 
May 17 00:41:29.347894 /usr/lib/systemd/system-generators/torcx-generator[886]: time="2025-05-17T00:41:29Z" level=debug msg="skipped missing lower profile" missing profile=oem May 17 00:41:29.347943 /usr/lib/systemd/system-generators/torcx-generator[886]: time="2025-05-17T00:41:29Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" May 17 00:41:29.347958 /usr/lib/systemd/system-generators/torcx-generator[886]: time="2025-05-17T00:41:29Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= May 17 00:41:29.348195 /usr/lib/systemd/system-generators/torcx-generator[886]: time="2025-05-17T00:41:29Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack May 17 00:41:29.348252 /usr/lib/systemd/system-generators/torcx-generator[886]: time="2025-05-17T00:41:29Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 17 00:41:32.625158 systemd-journald[960]: Time spent on flushing to /var/log/journal/809c5e588036478092242d10c26a867f is 41.794ms for 1136 entries. May 17 00:41:32.625158 systemd-journald[960]: System Journal (/var/log/journal/809c5e588036478092242d10c26a867f) is 8.0M, max 195.6M, 187.6M free. May 17 00:41:32.678699 systemd-journald[960]: Received client request to flush runtime journal. May 17 00:41:32.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:32.647000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:41:32.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:29.348331 /usr/lib/systemd/system-generators/torcx-generator[886]: time="2025-05-17T00:41:29Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 17 00:41:32.625092 systemd[1]: Finished systemd-sysctl.service. May 17 00:41:29.350079 /usr/lib/systemd/system-generators/torcx-generator[886]: time="2025-05-17T00:41:29Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 May 17 00:41:32.647410 systemd[1]: Finished systemd-sysusers.service. May 17 00:41:29.350134 /usr/lib/systemd/system-generators/torcx-generator[886]: time="2025-05-17T00:41:29Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl May 17 00:41:32.668297 systemd[1]: Finished systemd-udev-trigger.service. May 17 00:41:29.350157 /usr/lib/systemd/system-generators/torcx-generator[886]: time="2025-05-17T00:41:29Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7 May 17 00:41:32.670088 systemd[1]: Starting systemd-udev-settle.service... 
May 17 00:41:29.350172 /usr/lib/systemd/system-generators/torcx-generator[886]: time="2025-05-17T00:41:29Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store May 17 00:41:29.350204 /usr/lib/systemd/system-generators/torcx-generator[886]: time="2025-05-17T00:41:29Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7 May 17 00:41:29.350226 /usr/lib/systemd/system-generators/torcx-generator[886]: time="2025-05-17T00:41:29Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store May 17 00:41:31.922191 /usr/lib/systemd/system-generators/torcx-generator[886]: time="2025-05-17T00:41:31Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 17 00:41:31.923178 /usr/lib/systemd/system-generators/torcx-generator[886]: time="2025-05-17T00:41:31Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 17 00:41:31.923431 /usr/lib/systemd/system-generators/torcx-generator[886]: time="2025-05-17T00:41:31Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 17 00:41:31.923981 /usr/lib/systemd/system-generators/torcx-generator[886]: time="2025-05-17T00:41:31Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" 
image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 17 00:41:31.924082 /usr/lib/systemd/system-generators/torcx-generator[886]: time="2025-05-17T00:41:31Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= May 17 00:41:32.681804 systemd[1]: Finished systemd-journal-flush.service. May 17 00:41:31.924226 /usr/lib/systemd/system-generators/torcx-generator[886]: time="2025-05-17T00:41:31Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx May 17 00:41:32.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:32.683806 udevadm[995]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 17 00:41:33.254470 systemd[1]: Finished systemd-hwdb-update.service. May 17 00:41:33.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:33.254000 audit: BPF prog-id=18 op=LOAD May 17 00:41:33.255000 audit: BPF prog-id=19 op=LOAD May 17 00:41:33.255000 audit: BPF prog-id=7 op=UNLOAD May 17 00:41:33.255000 audit: BPF prog-id=8 op=UNLOAD May 17 00:41:33.256814 systemd[1]: Starting systemd-udevd.service... May 17 00:41:33.278089 systemd-udevd[997]: Using default interface naming scheme 'v252'. May 17 00:41:33.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' May 17 00:41:33.306000 audit: BPF prog-id=20 op=LOAD May 17 00:41:33.305619 systemd[1]: Started systemd-udevd.service. May 17 00:41:33.308586 systemd[1]: Starting systemd-networkd.service... May 17 00:41:33.315000 audit: BPF prog-id=21 op=LOAD May 17 00:41:33.315000 audit: BPF prog-id=22 op=LOAD May 17 00:41:33.315000 audit: BPF prog-id=23 op=LOAD May 17 00:41:33.317496 systemd[1]: Starting systemd-userdbd.service... May 17 00:41:33.375212 systemd[1]: Started systemd-userdbd.service. May 17 00:41:33.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:33.383360 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. May 17 00:41:33.412609 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:41:33.412859 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:41:33.414382 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:41:33.416058 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:41:33.418868 systemd[1]: Starting modprobe@loop.service... May 17 00:41:33.419322 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 17 00:41:33.419430 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 17 00:41:33.419553 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:41:33.420245 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
May 17 00:41:33.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:33.420000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:33.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:33.423000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:33.421067 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:41:33.421974 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:41:33.422168 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:41:33.424087 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:41:33.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:33.426000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:33.426495 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:41:33.426631 systemd[1]: Finished modprobe@loop.service. 
May 17 00:41:33.427247 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 00:41:33.478239 systemd-networkd[1003]: lo: Link UP May 17 00:41:33.478251 systemd-networkd[1003]: lo: Gained carrier May 17 00:41:33.478947 systemd-networkd[1003]: Enumeration completed May 17 00:41:33.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:33.479046 systemd-networkd[1003]: eth1: Configuring with /run/systemd/network/10-b2:25:ab:ff:06:a6.network. May 17 00:41:33.479071 systemd[1]: Started systemd-networkd.service. May 17 00:41:33.480634 systemd-networkd[1003]: eth0: Configuring with /run/systemd/network/10-4e:03:dd:27:34:a2.network. May 17 00:41:33.481537 systemd-networkd[1003]: eth1: Link UP May 17 00:41:33.481546 systemd-networkd[1003]: eth1: Gained carrier May 17 00:41:33.487162 systemd-networkd[1003]: eth0: Link UP May 17 00:41:33.487173 systemd-networkd[1003]: eth0: Gained carrier May 17 00:41:33.492814 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 17 00:41:33.501803 kernel: ACPI: button: Power Button [PWRF] May 17 00:41:33.524382 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
May 17 00:41:33.529000 audit[1011]: AVC avc: denied { confidentiality } for pid=1011 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 May 17 00:41:33.529000 audit[1011]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=557d208b7310 a1=338ac a2=7f3e5edb4bc5 a3=5 items=110 ppid=997 pid=1011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:41:33.529000 audit: CWD cwd="/" May 17 00:41:33.529000 audit: PATH item=0 name=(null) inode=1039 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=1 name=(null) inode=13819 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=2 name=(null) inode=13819 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=3 name=(null) inode=13820 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=4 name=(null) inode=13819 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=5 name=(null) inode=13821 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=6 name=(null) 
inode=13819 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=7 name=(null) inode=13822 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=8 name=(null) inode=13822 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=9 name=(null) inode=13823 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=10 name=(null) inode=13822 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=11 name=(null) inode=13824 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=12 name=(null) inode=13822 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=13 name=(null) inode=13825 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=14 name=(null) inode=13822 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=15 name=(null) inode=13826 dev=00:0b mode=0100640 ouid=0 
ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=16 name=(null) inode=13822 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=17 name=(null) inode=13827 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=18 name=(null) inode=13819 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=19 name=(null) inode=13828 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=20 name=(null) inode=13828 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=21 name=(null) inode=13829 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=22 name=(null) inode=13828 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=23 name=(null) inode=13830 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=24 name=(null) inode=13828 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=25 name=(null) inode=13831 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=26 name=(null) inode=13828 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=27 name=(null) inode=13832 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=28 name=(null) inode=13828 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=29 name=(null) inode=13833 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=30 name=(null) inode=13819 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=31 name=(null) inode=13834 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=32 name=(null) inode=13834 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=33 name=(null) inode=13835 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=34 name=(null) inode=13834 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=35 name=(null) inode=13836 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=36 name=(null) inode=13834 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=37 name=(null) inode=13837 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=38 name=(null) inode=13834 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=39 name=(null) inode=13838 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=40 name=(null) inode=13834 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=41 name=(null) inode=13839 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=42 name=(null) inode=13819 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=43 name=(null) inode=13840 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=44 name=(null) inode=13840 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=45 name=(null) inode=13841 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=46 name=(null) inode=13840 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=47 name=(null) inode=13842 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=48 name=(null) inode=13840 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=49 name=(null) inode=13843 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=50 name=(null) inode=13840 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=51 name=(null) inode=13844 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 
00:41:33.529000 audit: PATH item=52 name=(null) inode=13840 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=53 name=(null) inode=13845 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=54 name=(null) inode=1039 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=55 name=(null) inode=13846 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=56 name=(null) inode=13846 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=57 name=(null) inode=13847 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=58 name=(null) inode=13846 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=59 name=(null) inode=13848 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=60 name=(null) inode=13846 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=61 
name=(null) inode=13849 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=62 name=(null) inode=13849 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=63 name=(null) inode=13850 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=64 name=(null) inode=13849 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=65 name=(null) inode=13851 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=66 name=(null) inode=13849 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=67 name=(null) inode=13852 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=68 name=(null) inode=13849 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=69 name=(null) inode=13853 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=70 name=(null) inode=13849 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=71 name=(null) inode=13854 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=72 name=(null) inode=13846 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=73 name=(null) inode=13855 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=74 name=(null) inode=13855 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=75 name=(null) inode=13856 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=76 name=(null) inode=13855 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=77 name=(null) inode=13857 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=78 name=(null) inode=13855 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=79 name=(null) inode=13858 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=80 name=(null) inode=13855 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=81 name=(null) inode=13859 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=82 name=(null) inode=13855 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=83 name=(null) inode=13860 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=84 name=(null) inode=13846 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=85 name=(null) inode=13861 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=86 name=(null) inode=13861 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=87 name=(null) inode=13862 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=88 name=(null) inode=13861 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=89 name=(null) inode=13863 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=90 name=(null) inode=13861 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=91 name=(null) inode=13864 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=92 name=(null) inode=13861 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=93 name=(null) inode=13865 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=94 name=(null) inode=13861 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=95 name=(null) inode=13866 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=96 name=(null) inode=13846 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=97 name=(null) inode=13867 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=98 name=(null) inode=13867 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=99 name=(null) inode=13868 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=100 name=(null) inode=13867 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=101 name=(null) inode=13869 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=102 name=(null) inode=13867 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=103 name=(null) inode=13870 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=104 name=(null) inode=13867 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=105 name=(null) inode=13871 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=106 name=(null) inode=13867 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 
00:41:33.529000 audit: PATH item=107 name=(null) inode=13872 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PATH item=109 name=(null) inode=13873 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:33.529000 audit: PROCTITLE proctitle="(udev-worker)" May 17 00:41:33.561841 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 May 17 00:41:33.571805 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 May 17 00:41:33.583813 kernel: mousedev: PS/2 mouse device common for all mice May 17 00:41:33.699806 kernel: EDAC MC: Ver: 3.0.0 May 17 00:41:33.724467 systemd[1]: Finished systemd-udev-settle.service. May 17 00:41:33.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:33.726203 systemd[1]: Starting lvm2-activation-early.service... May 17 00:41:33.751520 lvm[1035]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:41:33.779270 systemd[1]: Finished lvm2-activation-early.service. May 17 00:41:33.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:33.779940 systemd[1]: Reached target cryptsetup.target. 
May 17 00:41:33.781835 systemd[1]: Starting lvm2-activation.service... May 17 00:41:33.787520 lvm[1036]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:41:33.813410 systemd[1]: Finished lvm2-activation.service. May 17 00:41:33.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:33.814039 systemd[1]: Reached target local-fs-pre.target. May 17 00:41:33.817130 kernel: kauditd_printk_skb: 229 callbacks suppressed May 17 00:41:33.817255 kernel: audit: type=1130 audit(1747442493.813:156): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:33.820297 systemd[1]: Mounting media-configdrive.mount... May 17 00:41:33.820865 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 17 00:41:33.820942 systemd[1]: Reached target machines.target. May 17 00:41:33.823069 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 17 00:41:33.841361 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 17 00:41:33.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:33.844459 systemd[1]: Mounted media-configdrive.mount. 
May 17 00:41:33.846875 kernel: ISO 9660 Extensions: RRIP_1991A May 17 00:41:33.846985 kernel: audit: type=1130 audit(1747442493.841:157): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:33.845543 systemd[1]: Reached target local-fs.target. May 17 00:41:33.848094 systemd[1]: Starting ldconfig.service... May 17 00:41:33.850185 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:41:33.850243 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:41:33.851513 systemd[1]: Starting systemd-boot-update.service... May 17 00:41:33.854179 systemd[1]: Starting systemd-machine-id-commit.service... May 17 00:41:33.857180 systemd[1]: Starting systemd-sysext.service... May 17 00:41:33.872405 systemd[1]: Unmounting usr-share-oem.mount... May 17 00:41:33.878224 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1042 (bootctl) May 17 00:41:33.880698 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 17 00:41:33.888120 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 17 00:41:33.888350 systemd[1]: Unmounted usr-share-oem.mount. May 17 00:41:33.916845 kernel: loop0: detected capacity change from 0 to 224512 May 17 00:41:33.979561 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 17 00:41:33.981197 systemd[1]: Finished systemd-machine-id-commit.service. May 17 00:41:33.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:41:33.984907 kernel: audit: type=1130 audit(1747442493.980:158): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:34.008801 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 17 00:41:34.026805 kernel: loop1: detected capacity change from 0 to 224512 May 17 00:41:34.048976 (sd-sysext)[1052]: Using extensions 'kubernetes'. May 17 00:41:34.050318 (sd-sysext)[1052]: Merged extensions into '/usr'. May 17 00:41:34.052305 systemd-fsck[1049]: fsck.fat 4.2 (2021-01-31) May 17 00:41:34.052305 systemd-fsck[1049]: /dev/vda1: 790 files, 120726/258078 clusters May 17 00:41:34.057217 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 17 00:41:34.060928 kernel: audit: type=1130 audit(1747442494.056:159): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:34.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:34.059328 systemd[1]: Mounting boot.mount... May 17 00:41:34.092572 systemd[1]: Mounted boot.mount. May 17 00:41:34.095960 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:41:34.098806 systemd[1]: Mounting usr-share-oem.mount... May 17 00:41:34.103275 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:41:34.108294 systemd[1]: Starting modprobe@dm_mod.service... 
May 17 00:41:34.114728 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:41:34.117875 systemd[1]: Starting modprobe@loop.service... May 17 00:41:34.118711 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:41:34.119171 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:41:34.119625 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:41:34.122575 systemd[1]: Finished systemd-boot-update.service. May 17 00:41:34.125900 kernel: audit: type=1130 audit(1747442494.122:160): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:34.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:34.123556 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:41:34.123705 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:41:34.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:34.126702 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:41:34.126867 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:41:34.125000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:41:34.135813 kernel: audit: type=1130 audit(1747442494.125:161): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:34.135929 kernel: audit: type=1131 audit(1747442494.125:162): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:34.135953 kernel: audit: type=1130 audit(1747442494.131:163): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:34.135974 kernel: audit: type=1131 audit(1747442494.132:164): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:34.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:34.132000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:34.135254 systemd[1]: Mounted usr-share-oem.mount. May 17 00:41:34.145071 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:41:34.145248 systemd[1]: Finished modprobe@loop.service. May 17 00:41:34.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' May 17 00:41:34.146331 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:41:34.146452 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 00:41:34.150344 kernel: audit: type=1130 audit(1747442494.144:165): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:34.144000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:34.151632 systemd[1]: Finished systemd-sysext.service. May 17 00:41:34.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:34.156960 systemd[1]: Starting ensure-sysext.service... May 17 00:41:34.159734 systemd[1]: Starting systemd-tmpfiles-setup.service... May 17 00:41:34.169990 systemd[1]: Reloading. May 17 00:41:34.205944 systemd-tmpfiles[1060]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 17 00:41:34.214088 systemd-tmpfiles[1060]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 17 00:41:34.222407 systemd-tmpfiles[1060]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
May 17 00:41:34.304341 /usr/lib/systemd/system-generators/torcx-generator[1080]: time="2025-05-17T00:41:34Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 00:41:34.310508 /usr/lib/systemd/system-generators/torcx-generator[1080]: time="2025-05-17T00:41:34Z" level=info msg="torcx already run" May 17 00:41:34.363303 ldconfig[1041]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 17 00:41:34.410159 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:41:34.410180 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 00:41:34.434043 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
May 17 00:41:34.494000 audit: BPF prog-id=24 op=LOAD May 17 00:41:34.494000 audit: BPF prog-id=21 op=UNLOAD May 17 00:41:34.494000 audit: BPF prog-id=25 op=LOAD May 17 00:41:34.494000 audit: BPF prog-id=26 op=LOAD May 17 00:41:34.494000 audit: BPF prog-id=22 op=UNLOAD May 17 00:41:34.494000 audit: BPF prog-id=23 op=UNLOAD May 17 00:41:34.495000 audit: BPF prog-id=27 op=LOAD May 17 00:41:34.496000 audit: BPF prog-id=15 op=UNLOAD May 17 00:41:34.496000 audit: BPF prog-id=28 op=LOAD May 17 00:41:34.496000 audit: BPF prog-id=29 op=LOAD May 17 00:41:34.496000 audit: BPF prog-id=16 op=UNLOAD May 17 00:41:34.496000 audit: BPF prog-id=17 op=UNLOAD May 17 00:41:34.496000 audit: BPF prog-id=30 op=LOAD May 17 00:41:34.496000 audit: BPF prog-id=31 op=LOAD May 17 00:41:34.496000 audit: BPF prog-id=18 op=UNLOAD May 17 00:41:34.496000 audit: BPF prog-id=19 op=UNLOAD May 17 00:41:34.499000 audit: BPF prog-id=32 op=LOAD May 17 00:41:34.499000 audit: BPF prog-id=20 op=UNLOAD May 17 00:41:34.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:34.503048 systemd[1]: Finished ldconfig.service. May 17 00:41:34.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:34.505005 systemd[1]: Finished systemd-tmpfiles-setup.service. May 17 00:41:34.509278 systemd[1]: Starting audit-rules.service... May 17 00:41:34.511290 systemd[1]: Starting clean-ca-certificates.service... May 17 00:41:34.513601 systemd[1]: Starting systemd-journal-catalog-update.service... May 17 00:41:34.517000 audit: BPF prog-id=33 op=LOAD May 17 00:41:34.521168 systemd[1]: Starting systemd-resolved.service... 
May 17 00:41:34.522000 audit: BPF prog-id=34 op=LOAD May 17 00:41:34.524980 systemd[1]: Starting systemd-timesyncd.service... May 17 00:41:34.527106 systemd[1]: Starting systemd-update-utmp.service... May 17 00:41:34.529699 systemd[1]: Finished clean-ca-certificates.service. May 17 00:41:34.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:34.533168 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 00:41:34.539557 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:41:34.541502 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:41:34.544407 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:41:34.547930 systemd[1]: Starting modprobe@loop.service... May 17 00:41:34.548461 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:41:34.548694 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:41:34.548915 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 00:41:34.549958 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:41:34.550475 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:41:34.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:41:34.551000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:34.553023 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:41:34.553186 systemd[1]: Finished modprobe@loop.service. May 17 00:41:34.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:34.553000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:34.554191 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:41:34.554347 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:41:34.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:34.554000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:34.558643 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:41:34.560732 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:41:34.564684 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:41:34.567218 systemd[1]: Starting modprobe@loop.service... May 17 00:41:34.567703 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
May 17 00:41:34.568411 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:41:34.568608 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 00:41:34.569736 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:41:34.570933 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:41:34.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:34.571000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:34.576787 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:41:34.576950 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:41:34.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:34.576000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:34.578013 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:41:34.580178 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:41:34.583730 systemd[1]: Starting modprobe@drm.service... 
May 17 00:41:34.584351 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:41:34.584530 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:41:34.588705 systemd[1]: Starting systemd-networkd-wait-online.service... May 17 00:41:34.589257 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:41:34.589536 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 00:41:34.590724 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:41:34.590944 systemd[1]: Finished modprobe@loop.service. May 17 00:41:34.590000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:34.590000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:34.595557 systemd[1]: Finished ensure-sysext.service. May 17 00:41:34.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:34.597819 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 00:41:34.598624 systemd[1]: Finished modprobe@drm.service. 
May 17 00:41:34.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:34.598000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:34.598000 audit[1133]: SYSTEM_BOOT pid=1133 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 17 00:41:34.601758 systemd[1]: Finished systemd-update-utmp.service. May 17 00:41:34.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:34.610487 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:41:34.610636 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:41:34.611181 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 00:41:34.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:34.610000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:41:34.620553 augenrules[1156]: No rules May 17 00:41:34.619000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 17 00:41:34.619000 audit[1156]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd22d32700 a2=420 a3=0 items=0 ppid=1127 pid=1156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:41:34.619000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 17 00:41:34.621684 systemd[1]: Finished audit-rules.service. May 17 00:41:34.629715 systemd[1]: Finished systemd-journal-catalog-update.service. May 17 00:41:34.631611 systemd[1]: Starting systemd-update-done.service... May 17 00:41:34.632442 systemd-networkd[1003]: eth1: Gained IPv6LL May 17 00:41:34.637842 systemd[1]: Finished systemd-networkd-wait-online.service. May 17 00:41:34.641900 systemd[1]: Finished systemd-update-done.service. May 17 00:41:34.665576 systemd[1]: Started systemd-timesyncd.service. May 17 00:41:34.666067 systemd[1]: Reached target time-set.target. May 17 00:41:34.669012 systemd-resolved[1131]: Positive Trust Anchors: May 17 00:41:34.669361 systemd-resolved[1131]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:41:34.669464 systemd-resolved[1131]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 17 00:41:34.675336 systemd-resolved[1131]: Using system hostname 'ci-3510.3.7-n-8f6d6c1823'. May 17 00:41:34.677769 systemd[1]: Started systemd-resolved.service. May 17 00:41:34.678245 systemd[1]: Reached target network.target. May 17 00:41:34.678533 systemd[1]: Reached target network-online.target. May 17 00:41:34.678837 systemd[1]: Reached target nss-lookup.target. May 17 00:41:34.679174 systemd[1]: Reached target sysinit.target. May 17 00:41:34.679648 systemd[1]: Started motdgen.path. May 17 00:41:34.679995 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 17 00:41:34.680599 systemd[1]: Started logrotate.timer. May 17 00:41:34.681012 systemd[1]: Started mdadm.timer. May 17 00:41:34.681303 systemd[1]: Started systemd-tmpfiles-clean.timer. May 17 00:41:34.681610 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 17 00:41:34.681645 systemd[1]: Reached target paths.target. May 17 00:41:34.681919 systemd[1]: Reached target timers.target. May 17 00:41:34.682541 systemd[1]: Listening on dbus.socket. May 17 00:41:34.684220 systemd[1]: Starting docker.socket... May 17 00:41:34.688242 systemd[1]: Listening on sshd.socket. 
May 17 00:41:34.689175 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:41:34.690010 systemd[1]: Listening on docker.socket. May 17 00:41:34.690622 systemd[1]: Reached target sockets.target. May 17 00:41:34.691185 systemd[1]: Reached target basic.target. May 17 00:41:34.691679 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 17 00:41:34.691720 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 17 00:41:34.693463 systemd[1]: Starting containerd.service... May 17 00:41:34.695002 systemd-networkd[1003]: eth0: Gained IPv6LL May 17 00:41:34.696310 systemd[1]: Starting coreos-metadata-sshkeys@core.service... May 17 00:41:34.699730 systemd[1]: Starting dbus.service... May 17 00:41:34.706381 systemd[1]: Starting enable-oem-cloudinit.service... May 17 00:41:34.711495 systemd[1]: Starting extend-filesystems.service... May 17 00:41:34.712221 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 17 00:41:34.713246 jq[1170]: false May 17 00:41:34.715208 systemd[1]: Starting kubelet.service... May 17 00:41:34.718806 systemd[1]: Starting motdgen.service... May 17 00:41:34.723692 systemd[1]: Starting prepare-helm.service... May 17 00:41:34.726096 systemd[1]: Starting ssh-key-proc-cmdline.service... May 17 00:41:34.730589 systemd[1]: Starting sshd-keygen.service... May 17 00:41:34.737931 systemd[1]: Starting systemd-logind.service... May 17 00:41:34.738932 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
May 17 00:41:34.739124 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 17 00:41:34.740046 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 17 00:41:34.745246 systemd[1]: Starting update-engine.service... May 17 00:41:34.748847 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 17 00:41:34.759181 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 17 00:41:34.759612 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. May 17 00:41:34.765971 jq[1187]: true May 17 00:41:34.787921 tar[1191]: linux-amd64/LICENSE May 17 00:41:34.787921 tar[1191]: linux-amd64/helm May 17 00:41:34.789267 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 17 00:41:34.789511 systemd[1]: Finished ssh-key-proc-cmdline.service. May 17 00:41:34.790549 jq[1193]: true May 17 00:41:35.786004 systemd-resolved[1131]: Clock change detected. Flushing caches. May 17 00:41:35.786164 systemd-timesyncd[1132]: Contacted time server 216.177.181.129:123 (0.flatcar.pool.ntp.org). May 17 00:41:35.786256 systemd-timesyncd[1132]: Initial clock synchronization to Sat 2025-05-17 00:41:35.785939 UTC. May 17 00:41:35.800425 systemd[1]: motdgen.service: Deactivated successfully. May 17 00:41:35.800632 systemd[1]: Finished motdgen.service. May 17 00:41:35.838415 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:41:35.838456 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:41:35.846778 dbus-daemon[1167]: [system] SELinux support is enabled May 17 00:41:35.846997 systemd[1]: Started dbus.service. 
May 17 00:41:35.849606 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 17 00:41:35.849646 systemd[1]: Reached target system-config.target. May 17 00:41:35.850227 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 17 00:41:35.850277 systemd[1]: Reached target user-config.target. May 17 00:41:35.853676 extend-filesystems[1171]: Found loop1 May 17 00:41:35.854610 extend-filesystems[1171]: Found vda May 17 00:41:35.854610 extend-filesystems[1171]: Found vda1 May 17 00:41:35.854610 extend-filesystems[1171]: Found vda2 May 17 00:41:35.854610 extend-filesystems[1171]: Found vda3 May 17 00:41:35.854610 extend-filesystems[1171]: Found usr May 17 00:41:35.854610 extend-filesystems[1171]: Found vda4 May 17 00:41:35.854610 extend-filesystems[1171]: Found vda6 May 17 00:41:35.854610 extend-filesystems[1171]: Found vda7 May 17 00:41:35.854610 extend-filesystems[1171]: Found vda9 May 17 00:41:35.854610 extend-filesystems[1171]: Checking size of /dev/vda9 May 17 00:41:35.892811 bash[1217]: Updated "/home/core/.ssh/authorized_keys" May 17 00:41:35.895687 systemd[1]: Finished update-ssh-keys-after-ignition.service. May 17 00:41:35.910020 update_engine[1186]: I0517 00:41:35.909539 1186 main.cc:92] Flatcar Update Engine starting May 17 00:41:35.914895 systemd[1]: Started update-engine.service. May 17 00:41:35.917279 systemd[1]: Started locksmithd.service. 
May 17 00:41:35.918332 update_engine[1186]: I0517 00:41:35.918284 1186 update_check_scheduler.cc:74] Next update check in 8m34s May 17 00:41:35.919078 extend-filesystems[1171]: Resized partition /dev/vda9 May 17 00:41:35.923976 extend-filesystems[1222]: resize2fs 1.46.5 (30-Dec-2021) May 17 00:41:35.931905 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks May 17 00:41:36.005911 kernel: EXT4-fs (vda9): resized filesystem to 15121403 May 17 00:41:36.037851 coreos-metadata[1166]: May 17 00:41:36.028 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 May 17 00:41:36.038300 env[1192]: time="2025-05-17T00:41:36.034478377Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 17 00:41:36.033891 systemd-logind[1181]: Watching system buttons on /dev/input/event1 (Power Button) May 17 00:41:36.038758 extend-filesystems[1222]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 17 00:41:36.038758 extend-filesystems[1222]: old_desc_blocks = 1, new_desc_blocks = 8 May 17 00:41:36.038758 extend-filesystems[1222]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. May 17 00:41:36.033928 systemd-logind[1181]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 17 00:41:36.053486 coreos-metadata[1166]: May 17 00:41:36.041 INFO Fetch successful May 17 00:41:36.053528 extend-filesystems[1171]: Resized filesystem in /dev/vda9 May 17 00:41:36.053528 extend-filesystems[1171]: Found vdb May 17 00:41:36.038768 systemd[1]: extend-filesystems.service: Deactivated successfully. May 17 00:41:36.038998 systemd[1]: Finished extend-filesystems.service. May 17 00:41:36.040004 systemd-logind[1181]: New seat seat0. May 17 00:41:36.054062 systemd[1]: Started systemd-logind.service. 
May 17 00:41:36.056823 unknown[1166]: wrote ssh authorized keys file for user: core May 17 00:41:36.079264 update-ssh-keys[1227]: Updated "/home/core/.ssh/authorized_keys" May 17 00:41:36.079584 systemd[1]: Finished coreos-metadata-sshkeys@core.service. May 17 00:41:36.130985 env[1192]: time="2025-05-17T00:41:36.130750842Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 17 00:41:36.130985 env[1192]: time="2025-05-17T00:41:36.130938611Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 17 00:41:36.143742 env[1192]: time="2025-05-17T00:41:36.143392931Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.182-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 17 00:41:36.143742 env[1192]: time="2025-05-17T00:41:36.143442193Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 17 00:41:36.143742 env[1192]: time="2025-05-17T00:41:36.143726714Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:41:36.143742 env[1192]: time="2025-05-17T00:41:36.143746534Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 17 00:41:36.144068 env[1192]: time="2025-05-17T00:41:36.143759689Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 17 00:41:36.144068 env[1192]: time="2025-05-17T00:41:36.143769470Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 May 17 00:41:36.144068 env[1192]: time="2025-05-17T00:41:36.143841432Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 17 00:41:36.144198 env[1192]: time="2025-05-17T00:41:36.144095308Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 17 00:41:36.144708 env[1192]: time="2025-05-17T00:41:36.144246318Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:41:36.144708 env[1192]: time="2025-05-17T00:41:36.144266429Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 17 00:41:36.144708 env[1192]: time="2025-05-17T00:41:36.144313466Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 17 00:41:36.144708 env[1192]: time="2025-05-17T00:41:36.144324826Z" level=info msg="metadata content store policy set" policy=shared May 17 00:41:36.156803 env[1192]: time="2025-05-17T00:41:36.155697437Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 17 00:41:36.156803 env[1192]: time="2025-05-17T00:41:36.155753270Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 17 00:41:36.156803 env[1192]: time="2025-05-17T00:41:36.155767781Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 17 00:41:36.156803 env[1192]: time="2025-05-17T00:41:36.155809852Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 May 17 00:41:36.156803 env[1192]: time="2025-05-17T00:41:36.155824370Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 17 00:41:36.156803 env[1192]: time="2025-05-17T00:41:36.155845860Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 17 00:41:36.156803 env[1192]: time="2025-05-17T00:41:36.155877109Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 17 00:41:36.156803 env[1192]: time="2025-05-17T00:41:36.155892075Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 17 00:41:36.156803 env[1192]: time="2025-05-17T00:41:36.155904736Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 May 17 00:41:36.156803 env[1192]: time="2025-05-17T00:41:36.155921219Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 17 00:41:36.156803 env[1192]: time="2025-05-17T00:41:36.155964432Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 17 00:41:36.156803 env[1192]: time="2025-05-17T00:41:36.155981628Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 17 00:41:36.156803 env[1192]: time="2025-05-17T00:41:36.156122937Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 17 00:41:36.156803 env[1192]: time="2025-05-17T00:41:36.156197244Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 17 00:41:36.157315 env[1192]: time="2025-05-17T00:41:36.156432286Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 May 17 00:41:36.157315 env[1192]: time="2025-05-17T00:41:36.156458352Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 17 00:41:36.157315 env[1192]: time="2025-05-17T00:41:36.156476749Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 17 00:41:36.157315 env[1192]: time="2025-05-17T00:41:36.156521466Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 17 00:41:36.157315 env[1192]: time="2025-05-17T00:41:36.156534891Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 17 00:41:36.157315 env[1192]: time="2025-05-17T00:41:36.156546717Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 17 00:41:36.157315 env[1192]: time="2025-05-17T00:41:36.156557715Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 17 00:41:36.157315 env[1192]: time="2025-05-17T00:41:36.156569643Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 17 00:41:36.157315 env[1192]: time="2025-05-17T00:41:36.156581330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 17 00:41:36.157315 env[1192]: time="2025-05-17T00:41:36.156593527Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 17 00:41:36.157315 env[1192]: time="2025-05-17T00:41:36.156604556Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 17 00:41:36.157315 env[1192]: time="2025-05-17T00:41:36.156627932Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 May 17 00:41:36.157315 env[1192]: time="2025-05-17T00:41:36.156763382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 17 00:41:36.157315 env[1192]: time="2025-05-17T00:41:36.156789777Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 17 00:41:36.157315 env[1192]: time="2025-05-17T00:41:36.156807029Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 17 00:41:36.157676 env[1192]: time="2025-05-17T00:41:36.156826734Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 17 00:41:36.157676 env[1192]: time="2025-05-17T00:41:36.156848430Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 17 00:41:36.157676 env[1192]: time="2025-05-17T00:41:36.156884990Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 17 00:41:36.157676 env[1192]: time="2025-05-17T00:41:36.156912772Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 17 00:41:36.157676 env[1192]: time="2025-05-17T00:41:36.156962385Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 17 00:41:36.157801 env[1192]: time="2025-05-17T00:41:36.157174700Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 17 00:41:36.157801 env[1192]: time="2025-05-17T00:41:36.157246757Z" level=info msg="Connect containerd service" May 17 00:41:36.157801 env[1192]: time="2025-05-17T00:41:36.157289818Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 17 00:41:36.162466 env[1192]: time="2025-05-17T00:41:36.159192890Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:41:36.162466 env[1192]: time="2025-05-17T00:41:36.159488668Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 17 00:41:36.162466 env[1192]: time="2025-05-17T00:41:36.159534158Z" level=info msg=serving... address=/run/containerd/containerd.sock May 17 00:41:36.159707 systemd[1]: Started containerd.service. 
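[Editor's note] The `level=error` line above ("no network config found in /etc/cni/net.d: cni plugin not initialized") is containerd's CRI plugin reporting that no CNI configuration exists yet; it clears once any valid conf file appears in the configured `NetworkPluginConfDir`. A minimal sketch of such a file, written to a temporary path for illustration — the file name, network name, and subnet are hypothetical, not taken from this log:

```shell
# Hypothetical example: a minimal CNI bridge config of the kind containerd's
# CRI plugin looks for in /etc/cni/net.d (written under /tmp here so the
# sketch runs without root). All values are illustrative.
mkdir -p /tmp/cni-demo/net.d
cat > /tmp/cni-demo/net.d/10-bridge.conf <<'EOF'
{
  "cniVersion": "0.4.0",
  "name": "demo-bridge",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.88.0.0/16"
  }
}
EOF
ls /tmp/cni-demo/net.d
```

On a real node the file would live in `/etc/cni/net.d` (per the `NetworkPluginConfDir` shown in the CRI config dump above), and is typically installed by the cluster's network add-on rather than by hand.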
May 17 00:41:36.168977 env[1192]: time="2025-05-17T00:41:36.168911165Z" level=info msg="containerd successfully booted in 0.197527s" May 17 00:41:36.172364 env[1192]: time="2025-05-17T00:41:36.172291181Z" level=info msg="Start subscribing containerd event" May 17 00:41:36.172564 env[1192]: time="2025-05-17T00:41:36.172546547Z" level=info msg="Start recovering state" May 17 00:41:36.172737 env[1192]: time="2025-05-17T00:41:36.172722261Z" level=info msg="Start event monitor" May 17 00:41:36.173437 env[1192]: time="2025-05-17T00:41:36.173411142Z" level=info msg="Start snapshots syncer" May 17 00:41:36.173552 env[1192]: time="2025-05-17T00:41:36.173537907Z" level=info msg="Start cni network conf syncer for default" May 17 00:41:36.173624 env[1192]: time="2025-05-17T00:41:36.173611616Z" level=info msg="Start streaming server" May 17 00:41:36.376789 locksmithd[1221]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 17 00:41:36.996191 tar[1191]: linux-amd64/README.md May 17 00:41:37.002560 systemd[1]: Finished prepare-helm.service. May 17 00:41:37.286261 sshd_keygen[1188]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 17 00:41:37.316154 systemd[1]: Finished sshd-keygen.service. May 17 00:41:37.321062 systemd[1]: Starting issuegen.service... May 17 00:41:37.339632 systemd[1]: issuegen.service: Deactivated successfully. May 17 00:41:37.340051 systemd[1]: Finished issuegen.service. May 17 00:41:37.344801 systemd[1]: Starting systemd-user-sessions.service... May 17 00:41:37.355244 systemd[1]: Finished systemd-user-sessions.service. May 17 00:41:37.360040 systemd[1]: Started getty@tty1.service. May 17 00:41:37.365315 systemd[1]: Started serial-getty@ttyS0.service. May 17 00:41:37.367430 systemd[1]: Reached target getty.target. May 17 00:41:37.572830 systemd[1]: Started kubelet.service. May 17 00:41:37.574554 systemd[1]: Reached target multi-user.target. 
May 17 00:41:37.577737 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 17 00:41:37.592271 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 17 00:41:37.592516 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 17 00:41:37.593267 systemd[1]: Startup finished in 884ms (kernel) + 5.394s (initrd) + 7.475s (userspace) = 13.754s. May 17 00:41:38.312835 kubelet[1251]: E0517 00:41:38.312730 1251 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:41:38.315003 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:41:38.315169 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:41:38.315453 systemd[1]: kubelet.service: Consumed 1.458s CPU time. May 17 00:41:38.528148 systemd[1]: Created slice system-sshd.slice. May 17 00:41:38.529641 systemd[1]: Started sshd@0-64.23.148.252:22-147.75.109.163:44202.service. May 17 00:41:38.598987 sshd[1258]: Accepted publickey for core from 147.75.109.163 port 44202 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w May 17 00:41:38.604594 sshd[1258]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:41:38.620885 systemd[1]: Created slice user-500.slice. May 17 00:41:38.623255 systemd[1]: Starting user-runtime-dir@500.service... May 17 00:41:38.629234 systemd-logind[1181]: New session 1 of user core. May 17 00:41:38.638518 systemd[1]: Finished user-runtime-dir@500.service. May 17 00:41:38.641101 systemd[1]: Starting user@500.service... 
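[Editor's note] The kubelet exit above ("open /var/lib/kubelet/config.yaml: no such file or directory") is the expected failure mode on a node that has not yet run `kubeadm init` or `kubeadm join`: those commands write the config file, and until then kubelet.service restarts and fails in a loop (as seen again later in this log). A hedged sketch of the general shape of that file, written to a temporary path — the field values are illustrative, not recovered from this system:

```shell
# Hypothetical sketch of the kind of /var/lib/kubelet/config.yaml that
# kubeadm generates; its absence is exactly what the kubelet error above
# reports. Written under /tmp so the sketch runs without root.
mkdir -p /tmp/kubelet-demo
cat > /tmp/kubelet-demo/config.yaml <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
EOF
grep -c 'KubeletConfiguration' /tmp/kubelet-demo/config.yaml
```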
May 17 00:41:38.648104 (systemd)[1261]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 17 00:41:38.774170 systemd[1261]: Queued start job for default target default.target. May 17 00:41:38.776009 systemd[1261]: Reached target paths.target. May 17 00:41:38.776223 systemd[1261]: Reached target sockets.target. May 17 00:41:38.776356 systemd[1261]: Reached target timers.target. May 17 00:41:38.776474 systemd[1261]: Reached target basic.target. May 17 00:41:38.776643 systemd[1261]: Reached target default.target. May 17 00:41:38.776767 systemd[1]: Started user@500.service. May 17 00:41:38.777424 systemd[1261]: Startup finished in 118ms. May 17 00:41:38.778689 systemd[1]: Started session-1.scope. May 17 00:41:38.850986 systemd[1]: Started sshd@1-64.23.148.252:22-147.75.109.163:44214.service. May 17 00:41:38.900942 sshd[1270]: Accepted publickey for core from 147.75.109.163 port 44214 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w May 17 00:41:38.901984 sshd[1270]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:41:38.908935 systemd-logind[1181]: New session 2 of user core. May 17 00:41:38.909806 systemd[1]: Started session-2.scope. May 17 00:41:38.977548 sshd[1270]: pam_unix(sshd:session): session closed for user core May 17 00:41:38.983752 systemd[1]: sshd@1-64.23.148.252:22-147.75.109.163:44214.service: Deactivated successfully. May 17 00:41:38.984608 systemd[1]: session-2.scope: Deactivated successfully. May 17 00:41:38.985713 systemd-logind[1181]: Session 2 logged out. Waiting for processes to exit. May 17 00:41:38.987501 systemd[1]: Started sshd@2-64.23.148.252:22-147.75.109.163:44224.service. May 17 00:41:38.989481 systemd-logind[1181]: Removed session 2. 
May 17 00:41:39.045252 sshd[1276]: Accepted publickey for core from 147.75.109.163 port 44224 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w May 17 00:41:39.047437 sshd[1276]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:41:39.054831 systemd[1]: Started session-3.scope. May 17 00:41:39.055209 systemd-logind[1181]: New session 3 of user core. May 17 00:41:39.118711 sshd[1276]: pam_unix(sshd:session): session closed for user core May 17 00:41:39.124142 systemd[1]: sshd@2-64.23.148.252:22-147.75.109.163:44224.service: Deactivated successfully. May 17 00:41:39.125159 systemd[1]: session-3.scope: Deactivated successfully. May 17 00:41:39.126052 systemd-logind[1181]: Session 3 logged out. Waiting for processes to exit. May 17 00:41:39.127627 systemd[1]: Started sshd@3-64.23.148.252:22-147.75.109.163:44228.service. May 17 00:41:39.129156 systemd-logind[1181]: Removed session 3. May 17 00:41:39.182932 sshd[1282]: Accepted publickey for core from 147.75.109.163 port 44228 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w May 17 00:41:39.185677 sshd[1282]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:41:39.193893 systemd[1]: Started session-4.scope. May 17 00:41:39.194992 systemd-logind[1181]: New session 4 of user core. May 17 00:41:39.262539 sshd[1282]: pam_unix(sshd:session): session closed for user core May 17 00:41:39.268931 systemd[1]: sshd@3-64.23.148.252:22-147.75.109.163:44228.service: Deactivated successfully. May 17 00:41:39.269758 systemd[1]: session-4.scope: Deactivated successfully. May 17 00:41:39.270452 systemd-logind[1181]: Session 4 logged out. Waiting for processes to exit. May 17 00:41:39.272161 systemd[1]: Started sshd@4-64.23.148.252:22-147.75.109.163:44240.service. May 17 00:41:39.273676 systemd-logind[1181]: Removed session 4. 
May 17 00:41:39.328117 sshd[1288]: Accepted publickey for core from 147.75.109.163 port 44240 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w May 17 00:41:39.330561 sshd[1288]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:41:39.337858 systemd[1]: Started session-5.scope. May 17 00:41:39.339011 systemd-logind[1181]: New session 5 of user core. May 17 00:41:39.413437 sudo[1291]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 17 00:41:39.414393 sudo[1291]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 17 00:41:39.452969 systemd[1]: Starting docker.service... May 17 00:41:39.525138 env[1301]: time="2025-05-17T00:41:39.525066272Z" level=info msg="Starting up" May 17 00:41:39.527137 env[1301]: time="2025-05-17T00:41:39.527093829Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 17 00:41:39.527137 env[1301]: time="2025-05-17T00:41:39.527127652Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 17 00:41:39.527318 env[1301]: time="2025-05-17T00:41:39.527154463Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 17 00:41:39.527318 env[1301]: time="2025-05-17T00:41:39.527170352Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 17 00:41:39.529884 env[1301]: time="2025-05-17T00:41:39.529832793Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 17 00:41:39.530026 env[1301]: time="2025-05-17T00:41:39.530008621Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 17 00:41:39.530178 env[1301]: time="2025-05-17T00:41:39.530081390Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 17 00:41:39.530254 env[1301]: time="2025-05-17T00:41:39.530240251Z" level=info 
msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 17 00:41:39.571395 env[1301]: time="2025-05-17T00:41:39.571351195Z" level=info msg="Loading containers: start." May 17 00:41:39.733890 kernel: Initializing XFRM netlink socket May 17 00:41:39.781115 env[1301]: time="2025-05-17T00:41:39.781061835Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" May 17 00:41:39.872818 systemd-networkd[1003]: docker0: Link UP May 17 00:41:39.887954 env[1301]: time="2025-05-17T00:41:39.887913229Z" level=info msg="Loading containers: done." May 17 00:41:39.905009 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3895493025-merged.mount: Deactivated successfully. May 17 00:41:39.906989 env[1301]: time="2025-05-17T00:41:39.906927557Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 17 00:41:39.907287 env[1301]: time="2025-05-17T00:41:39.907253984Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 May 17 00:41:39.907408 env[1301]: time="2025-05-17T00:41:39.907392095Z" level=info msg="Daemon has completed initialization" May 17 00:41:39.921423 systemd[1]: Started docker.service. May 17 00:41:39.932464 env[1301]: time="2025-05-17T00:41:39.932371592Z" level=info msg="API listen on /run/docker.sock" May 17 00:41:39.958803 systemd[1]: Starting coreos-metadata.service... May 17 00:41:40.009462 coreos-metadata[1418]: May 17 00:41:40.009 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 May 17 00:41:40.021819 coreos-metadata[1418]: May 17 00:41:40.021 INFO Fetch successful May 17 00:41:40.036259 systemd[1]: Finished coreos-metadata.service. 
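[Editor's note] The docker daemon line above notes that the default bridge (docker0) took 172.17.0.0/16 and that `--bip` can override it. One common way to set that persistently is via the daemon config file; a minimal sketch, written to a temporary path — the chosen range is hypothetical:

```shell
# Hypothetical sketch of an /etc/docker/daemon.json that pins the docker0
# bridge address, per the "--bip can be used" hint in the log above.
# Written under /tmp so the sketch runs without root; the range is illustrative.
cat > /tmp/docker-demo-daemon.json <<'EOF'
{
  "bip": "192.168.100.1/24"
}
EOF
cat /tmp/docker-demo-daemon.json
```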
May 17 00:41:41.116692 env[1192]: time="2025-05-17T00:41:41.116626403Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\"" May 17 00:41:41.684432 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4062121057.mount: Deactivated successfully. May 17 00:41:43.471455 env[1192]: time="2025-05-17T00:41:43.471390848Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.32.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:43.473324 env[1192]: time="2025-05-17T00:41:43.473260839Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:43.476614 env[1192]: time="2025-05-17T00:41:43.476555684Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.32.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:43.479247 env[1192]: time="2025-05-17T00:41:43.479187023Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:43.480523 env[1192]: time="2025-05-17T00:41:43.480468354Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\" returns image reference \"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\"" May 17 00:41:43.481782 env[1192]: time="2025-05-17T00:41:43.481741422Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\"" May 17 00:41:45.164981 env[1192]: time="2025-05-17T00:41:45.164898829Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.32.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" 
May 17 00:41:45.166638 env[1192]: time="2025-05-17T00:41:45.166591666Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:45.168707 env[1192]: time="2025-05-17T00:41:45.168664351Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.32.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:45.171027 env[1192]: time="2025-05-17T00:41:45.170983646Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:45.172143 env[1192]: time="2025-05-17T00:41:45.172092699Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\" returns image reference \"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\"" May 17 00:41:45.172998 env[1192]: time="2025-05-17T00:41:45.172947790Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\"" May 17 00:41:46.582321 env[1192]: time="2025-05-17T00:41:46.582266093Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.32.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:46.583953 env[1192]: time="2025-05-17T00:41:46.583857001Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:46.586384 env[1192]: time="2025-05-17T00:41:46.586336873Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.32.5,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" May 17 00:41:46.588206 env[1192]: time="2025-05-17T00:41:46.588161958Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:46.589613 env[1192]: time="2025-05-17T00:41:46.589545926Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\" returns image reference \"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\"" May 17 00:41:46.590555 env[1192]: time="2025-05-17T00:41:46.590521882Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\"" May 17 00:41:47.716971 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2732499089.mount: Deactivated successfully. May 17 00:41:48.484141 env[1192]: time="2025-05-17T00:41:48.484080653Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:48.485992 env[1192]: time="2025-05-17T00:41:48.485936505Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:48.487039 env[1192]: time="2025-05-17T00:41:48.486995256Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:48.488390 env[1192]: time="2025-05-17T00:41:48.488341491Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:48.488972 env[1192]: time="2025-05-17T00:41:48.488926838Z" 
level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\" returns image reference \"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\"" May 17 00:41:48.489931 env[1192]: time="2025-05-17T00:41:48.489896915Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 17 00:41:48.566185 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 17 00:41:48.566390 systemd[1]: Stopped kubelet.service. May 17 00:41:48.566452 systemd[1]: kubelet.service: Consumed 1.458s CPU time. May 17 00:41:48.568529 systemd[1]: Starting kubelet.service... May 17 00:41:48.695921 systemd[1]: Started kubelet.service. May 17 00:41:48.769326 kubelet[1441]: E0517 00:41:48.769166 1441 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:41:48.774059 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:41:48.774202 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:41:48.989353 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3034560981.mount: Deactivated successfully. 
May 17 00:41:50.008817 env[1192]: time="2025-05-17T00:41:50.008707693Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:50.010677 env[1192]: time="2025-05-17T00:41:50.010622834Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:50.012976 env[1192]: time="2025-05-17T00:41:50.012927561Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:50.015441 env[1192]: time="2025-05-17T00:41:50.015392206Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:50.016780 env[1192]: time="2025-05-17T00:41:50.016652927Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 17 00:41:50.017570 env[1192]: time="2025-05-17T00:41:50.017539246Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 17 00:41:50.468339 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2817266516.mount: Deactivated successfully. 
May 17 00:41:50.478404 env[1192]: time="2025-05-17T00:41:50.478330497Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:50.479628 env[1192]: time="2025-05-17T00:41:50.479581406Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:50.480895 env[1192]: time="2025-05-17T00:41:50.480828871Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:50.482423 env[1192]: time="2025-05-17T00:41:50.482386764Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:50.483164 env[1192]: time="2025-05-17T00:41:50.483128440Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 17 00:41:50.483968 env[1192]: time="2025-05-17T00:41:50.483940238Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 17 00:41:50.942298 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4207206884.mount: Deactivated successfully. 
May 17 00:41:53.352677 env[1192]: time="2025-05-17T00:41:53.352601713Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:53.356101 env[1192]: time="2025-05-17T00:41:53.356029706Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:53.359035 env[1192]: time="2025-05-17T00:41:53.358992982Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:53.361589 env[1192]: time="2025-05-17T00:41:53.361545666Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:53.362798 env[1192]: time="2025-05-17T00:41:53.362747623Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" May 17 00:41:56.459245 systemd[1]: Stopped kubelet.service. May 17 00:41:56.462054 systemd[1]: Starting kubelet.service... May 17 00:41:56.500834 systemd[1]: Reloading. 
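[Editor's note] The PullImage entries between 00:41:41 and 00:41:53 cover the standard kubeadm control-plane image set. Collecting the references exactly as they appear in this log:

```shell
# Image references pulled in the log above (versions taken from this log),
# listed for reference.
for img in \
  registry.k8s.io/kube-apiserver:v1.32.5 \
  registry.k8s.io/kube-controller-manager:v1.32.5 \
  registry.k8s.io/kube-scheduler:v1.32.5 \
  registry.k8s.io/kube-proxy:v1.32.5 \
  registry.k8s.io/coredns/coredns:v1.11.3 \
  registry.k8s.io/pause:3.10 \
  registry.k8s.io/etcd:3.5.16-0
do
  echo "$img"
done
```

Note the pause image is pulled at 3.10 even though the CRI config dump earlier shows `SandboxImage:registry.k8s.io/pause:3.6`; the pull is driven by kubeadm's image list rather than the containerd sandbox-image setting.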
May 17 00:41:56.622988 /usr/lib/systemd/system-generators/torcx-generator[1490]: time="2025-05-17T00:41:56Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 00:41:56.623425 /usr/lib/systemd/system-generators/torcx-generator[1490]: time="2025-05-17T00:41:56Z" level=info msg="torcx already run" May 17 00:41:56.740302 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:41:56.740536 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 00:41:56.762689 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:41:56.868115 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 17 00:41:56.868505 systemd[1]: kubelet.service: Failed with result 'signal'. May 17 00:41:56.868996 systemd[1]: Stopped kubelet.service. May 17 00:41:56.871488 systemd[1]: Starting kubelet.service... May 17 00:41:56.991393 systemd[1]: Started kubelet.service. May 17 00:41:57.052574 kubelet[1541]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:41:57.053035 kubelet[1541]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
May 17 00:41:57.053163 kubelet[1541]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:41:57.053352 kubelet[1541]: I0517 00:41:57.053320 1541 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:41:57.624495 kubelet[1541]: I0517 00:41:57.624440 1541 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 17 00:41:57.624700 kubelet[1541]: I0517 00:41:57.624684 1541 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:41:57.625096 kubelet[1541]: I0517 00:41:57.625077 1541 server.go:954] "Client rotation is on, will bootstrap in background" May 17 00:41:57.660580 kubelet[1541]: E0517 00:41:57.660525 1541 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://64.23.148.252:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 64.23.148.252:6443: connect: connection refused" logger="UnhandledError" May 17 00:41:57.662726 kubelet[1541]: I0517 00:41:57.662673 1541 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:41:57.674797 kubelet[1541]: E0517 00:41:57.674699 1541 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:41:57.674797 kubelet[1541]: I0517 00:41:57.674781 1541 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
May 17 00:41:57.678379 kubelet[1541]: I0517 00:41:57.678343 1541 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 17 00:41:57.678616 kubelet[1541]: I0517 00:41:57.678576 1541 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:41:57.678782 kubelet[1541]: I0517 00:41:57.678614 1541 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.7-n-8f6d6c1823","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":
null,"CgroupVersion":2} May 17 00:41:57.678916 kubelet[1541]: I0517 00:41:57.678790 1541 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:41:57.678916 kubelet[1541]: I0517 00:41:57.678800 1541 container_manager_linux.go:304] "Creating device plugin manager" May 17 00:41:57.678984 kubelet[1541]: I0517 00:41:57.678940 1541 state_mem.go:36] "Initialized new in-memory state store" May 17 00:41:57.682673 kubelet[1541]: I0517 00:41:57.682631 1541 kubelet.go:446] "Attempting to sync node with API server" May 17 00:41:57.682673 kubelet[1541]: I0517 00:41:57.682674 1541 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:41:57.682673 kubelet[1541]: I0517 00:41:57.682699 1541 kubelet.go:352] "Adding apiserver pod source" May 17 00:41:57.682929 kubelet[1541]: I0517 00:41:57.682712 1541 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:41:57.689946 kubelet[1541]: I0517 00:41:57.689908 1541 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 17 00:41:57.690381 kubelet[1541]: I0517 00:41:57.690359 1541 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:41:57.690469 kubelet[1541]: W0517 00:41:57.690428 1541 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 17 00:41:57.698034 kubelet[1541]: I0517 00:41:57.697961 1541 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 17 00:41:57.698034 kubelet[1541]: I0517 00:41:57.698040 1541 server.go:1287] "Started kubelet" May 17 00:41:57.698900 kubelet[1541]: W0517 00:41:57.698272 1541 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://64.23.148.252:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 64.23.148.252:6443: connect: connection refused May 17 00:41:57.698900 kubelet[1541]: E0517 00:41:57.698376 1541 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://64.23.148.252:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 64.23.148.252:6443: connect: connection refused" logger="UnhandledError" May 17 00:41:57.698900 kubelet[1541]: W0517 00:41:57.698462 1541 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://64.23.148.252:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-n-8f6d6c1823&limit=500&resourceVersion=0": dial tcp 64.23.148.252:6443: connect: connection refused May 17 00:41:57.698900 kubelet[1541]: E0517 00:41:57.698490 1541 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://64.23.148.252:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-n-8f6d6c1823&limit=500&resourceVersion=0\": dial tcp 64.23.148.252:6443: connect: connection refused" logger="UnhandledError" May 17 00:41:57.704042 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
May 17 00:41:57.704915 kubelet[1541]: I0517 00:41:57.704409 1541 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:41:57.705222 kubelet[1541]: I0517 00:41:57.705202 1541 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:41:57.705491 kubelet[1541]: I0517 00:41:57.705471 1541 server.go:479] "Adding debug handlers to kubelet server" May 17 00:41:57.706429 kubelet[1541]: I0517 00:41:57.706369 1541 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:41:57.706622 kubelet[1541]: I0517 00:41:57.706603 1541 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:41:57.714678 kubelet[1541]: I0517 00:41:57.714644 1541 volume_manager.go:297] "Starting Kubelet Volume Manager" May 17 00:41:57.714884 kubelet[1541]: I0517 00:41:57.714842 1541 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:41:57.716899 kubelet[1541]: E0517 00:41:57.716834 1541 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-8f6d6c1823\" not found" May 17 00:41:57.718914 kubelet[1541]: E0517 00:41:57.718554 1541 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.148.252:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-n-8f6d6c1823?timeout=10s\": dial tcp 64.23.148.252:6443: connect: connection refused" interval="200ms" May 17 00:41:57.720057 kubelet[1541]: E0517 00:41:57.718616 1541 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://64.23.148.252:6443/api/v1/namespaces/default/events\": dial tcp 64.23.148.252:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.7-n-8f6d6c1823.184029b624cdc9af default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.7-n-8f6d6c1823,UID:ci-3510.3.7-n-8f6d6c1823,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.7-n-8f6d6c1823,},FirstTimestamp:2025-05-17 00:41:57.698005423 +0000 UTC m=+0.700731560,LastTimestamp:2025-05-17 00:41:57.698005423 +0000 UTC m=+0.700731560,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.7-n-8f6d6c1823,}" May 17 00:41:57.721207 kubelet[1541]: I0517 00:41:57.721186 1541 reconciler.go:26] "Reconciler: start to sync state" May 17 00:41:57.721357 kubelet[1541]: I0517 00:41:57.721343 1541 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 17 00:41:57.721933 kubelet[1541]: W0517 00:41:57.721895 1541 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://64.23.148.252:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.148.252:6443: connect: connection refused May 17 00:41:57.722054 kubelet[1541]: E0517 00:41:57.722034 1541 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://64.23.148.252:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 64.23.148.252:6443: connect: connection refused" logger="UnhandledError" May 17 00:41:57.722377 kubelet[1541]: E0517 00:41:57.722358 1541 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:41:57.723038 kubelet[1541]: I0517 00:41:57.723022 1541 factory.go:221] Registration of the containerd container factory successfully May 17 00:41:57.723131 kubelet[1541]: I0517 00:41:57.723119 1541 factory.go:221] Registration of the systemd container factory successfully May 17 00:41:57.723273 kubelet[1541]: I0517 00:41:57.723257 1541 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:41:57.740451 kubelet[1541]: I0517 00:41:57.740366 1541 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 00:41:57.743266 kubelet[1541]: I0517 00:41:57.743217 1541 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 17 00:41:57.743266 kubelet[1541]: I0517 00:41:57.743255 1541 status_manager.go:227] "Starting to sync pod status with apiserver" May 17 00:41:57.743447 kubelet[1541]: I0517 00:41:57.743293 1541 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 17 00:41:57.743447 kubelet[1541]: I0517 00:41:57.743300 1541 kubelet.go:2382] "Starting kubelet main sync loop" May 17 00:41:57.743447 kubelet[1541]: E0517 00:41:57.743369 1541 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:41:57.753537 kubelet[1541]: W0517 00:41:57.753497 1541 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://64.23.148.252:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.148.252:6443: connect: connection refused May 17 00:41:57.753847 kubelet[1541]: E0517 00:41:57.753807 1541 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://64.23.148.252:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 64.23.148.252:6443: connect: connection refused" logger="UnhandledError" May 17 00:41:57.754436 kubelet[1541]: I0517 00:41:57.754418 1541 cpu_manager.go:221] "Starting CPU manager" policy="none" May 17 00:41:57.754555 kubelet[1541]: I0517 00:41:57.754541 1541 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 17 00:41:57.754667 kubelet[1541]: I0517 00:41:57.754655 1541 state_mem.go:36] "Initialized new in-memory state store" May 17 00:41:57.756672 kubelet[1541]: I0517 00:41:57.756642 1541 policy_none.go:49] "None policy: Start" May 17 00:41:57.756672 kubelet[1541]: I0517 00:41:57.756666 1541 memory_manager.go:186] "Starting memorymanager" policy="None" May 17 00:41:57.756672 kubelet[1541]: I0517 00:41:57.756680 1541 state_mem.go:35] "Initializing new in-memory state store" May 17 00:41:57.762604 systemd[1]: Created slice kubepods.slice. May 17 00:41:57.768407 systemd[1]: Created slice kubepods-burstable.slice. May 17 00:41:57.771387 systemd[1]: Created slice kubepods-besteffort.slice. 
May 17 00:41:57.779161 kubelet[1541]: I0517 00:41:57.779111 1541 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:41:57.779359 kubelet[1541]: I0517 00:41:57.779290 1541 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:41:57.779359 kubelet[1541]: I0517 00:41:57.779301 1541 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:41:57.780330 kubelet[1541]: I0517 00:41:57.780305 1541 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:41:57.781438 kubelet[1541]: E0517 00:41:57.781399 1541 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 17 00:41:57.781695 kubelet[1541]: E0517 00:41:57.781674 1541 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.7-n-8f6d6c1823\" not found" May 17 00:41:57.852412 systemd[1]: Created slice kubepods-burstable-pod21f70989e7e5d8e5c7a9f7a30d87c368.slice. May 17 00:41:57.860951 kubelet[1541]: E0517 00:41:57.860918 1541 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.7-n-8f6d6c1823\" not found" node="ci-3510.3.7-n-8f6d6c1823" May 17 00:41:57.864733 systemd[1]: Created slice kubepods-burstable-podf58f4f44f7fedb5f4f78bf56605b6501.slice. May 17 00:41:57.867840 systemd[1]: Created slice kubepods-burstable-pod945b82000866647f4f95378a63f7621c.slice. 
May 17 00:41:57.870131 kubelet[1541]: E0517 00:41:57.870068 1541 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.7-n-8f6d6c1823\" not found" node="ci-3510.3.7-n-8f6d6c1823" May 17 00:41:57.871431 kubelet[1541]: E0517 00:41:57.871392 1541 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.7-n-8f6d6c1823\" not found" node="ci-3510.3.7-n-8f6d6c1823" May 17 00:41:57.881053 kubelet[1541]: I0517 00:41:57.880928 1541 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.7-n-8f6d6c1823" May 17 00:41:57.881759 kubelet[1541]: E0517 00:41:57.881728 1541 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://64.23.148.252:6443/api/v1/nodes\": dial tcp 64.23.148.252:6443: connect: connection refused" node="ci-3510.3.7-n-8f6d6c1823" May 17 00:41:57.919479 kubelet[1541]: E0517 00:41:57.919431 1541 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.148.252:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-n-8f6d6c1823?timeout=10s\": dial tcp 64.23.148.252:6443: connect: connection refused" interval="400ms" May 17 00:41:58.022598 kubelet[1541]: I0517 00:41:58.022535 1541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f58f4f44f7fedb5f4f78bf56605b6501-ca-certs\") pod \"kube-controller-manager-ci-3510.3.7-n-8f6d6c1823\" (UID: \"f58f4f44f7fedb5f4f78bf56605b6501\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-8f6d6c1823" May 17 00:41:58.022598 kubelet[1541]: I0517 00:41:58.022595 1541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f58f4f44f7fedb5f4f78bf56605b6501-flexvolume-dir\") pod 
\"kube-controller-manager-ci-3510.3.7-n-8f6d6c1823\" (UID: \"f58f4f44f7fedb5f4f78bf56605b6501\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-8f6d6c1823" May 17 00:41:58.022598 kubelet[1541]: I0517 00:41:58.022614 1541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f58f4f44f7fedb5f4f78bf56605b6501-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.7-n-8f6d6c1823\" (UID: \"f58f4f44f7fedb5f4f78bf56605b6501\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-8f6d6c1823" May 17 00:41:58.022859 kubelet[1541]: I0517 00:41:58.022630 1541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f58f4f44f7fedb5f4f78bf56605b6501-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.7-n-8f6d6c1823\" (UID: \"f58f4f44f7fedb5f4f78bf56605b6501\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-8f6d6c1823" May 17 00:41:58.022859 kubelet[1541]: I0517 00:41:58.022648 1541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f58f4f44f7fedb5f4f78bf56605b6501-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.7-n-8f6d6c1823\" (UID: \"f58f4f44f7fedb5f4f78bf56605b6501\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-8f6d6c1823" May 17 00:41:58.022859 kubelet[1541]: I0517 00:41:58.022665 1541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/945b82000866647f4f95378a63f7621c-kubeconfig\") pod \"kube-scheduler-ci-3510.3.7-n-8f6d6c1823\" (UID: \"945b82000866647f4f95378a63f7621c\") " pod="kube-system/kube-scheduler-ci-3510.3.7-n-8f6d6c1823" May 17 00:41:58.022859 kubelet[1541]: I0517 00:41:58.022681 1541 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/21f70989e7e5d8e5c7a9f7a30d87c368-ca-certs\") pod \"kube-apiserver-ci-3510.3.7-n-8f6d6c1823\" (UID: \"21f70989e7e5d8e5c7a9f7a30d87c368\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-8f6d6c1823" May 17 00:41:58.022859 kubelet[1541]: I0517 00:41:58.022725 1541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/21f70989e7e5d8e5c7a9f7a30d87c368-k8s-certs\") pod \"kube-apiserver-ci-3510.3.7-n-8f6d6c1823\" (UID: \"21f70989e7e5d8e5c7a9f7a30d87c368\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-8f6d6c1823" May 17 00:41:58.023034 kubelet[1541]: I0517 00:41:58.022749 1541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/21f70989e7e5d8e5c7a9f7a30d87c368-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.7-n-8f6d6c1823\" (UID: \"21f70989e7e5d8e5c7a9f7a30d87c368\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-8f6d6c1823" May 17 00:41:58.083232 kubelet[1541]: I0517 00:41:58.083200 1541 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.7-n-8f6d6c1823" May 17 00:41:58.084139 kubelet[1541]: E0517 00:41:58.084103 1541 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://64.23.148.252:6443/api/v1/nodes\": dial tcp 64.23.148.252:6443: connect: connection refused" node="ci-3510.3.7-n-8f6d6c1823" May 17 00:41:58.162576 kubelet[1541]: E0517 00:41:58.162444 1541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:41:58.164095 env[1192]: time="2025-05-17T00:41:58.164046139Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.7-n-8f6d6c1823,Uid:21f70989e7e5d8e5c7a9f7a30d87c368,Namespace:kube-system,Attempt:0,}" May 17 00:41:58.172136 kubelet[1541]: E0517 00:41:58.172070 1541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:41:58.172897 kubelet[1541]: E0517 00:41:58.172419 1541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:41:58.173049 env[1192]: time="2025-05-17T00:41:58.172653830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.7-n-8f6d6c1823,Uid:f58f4f44f7fedb5f4f78bf56605b6501,Namespace:kube-system,Attempt:0,}" May 17 00:41:58.173549 env[1192]: time="2025-05-17T00:41:58.173372867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.7-n-8f6d6c1823,Uid:945b82000866647f4f95378a63f7621c,Namespace:kube-system,Attempt:0,}" May 17 00:41:58.321158 kubelet[1541]: E0517 00:41:58.321093 1541 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.148.252:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-n-8f6d6c1823?timeout=10s\": dial tcp 64.23.148.252:6443: connect: connection refused" interval="800ms" May 17 00:41:58.486177 kubelet[1541]: I0517 00:41:58.485526 1541 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.7-n-8f6d6c1823" May 17 00:41:58.486177 kubelet[1541]: E0517 00:41:58.486070 1541 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://64.23.148.252:6443/api/v1/nodes\": dial tcp 64.23.148.252:6443: connect: connection refused" node="ci-3510.3.7-n-8f6d6c1823" May 17 00:41:58.616479 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3445130707.mount: Deactivated successfully. May 17 00:41:58.621530 env[1192]: time="2025-05-17T00:41:58.621477257Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:58.622690 env[1192]: time="2025-05-17T00:41:58.622647837Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:58.624170 env[1192]: time="2025-05-17T00:41:58.624108608Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:58.627844 env[1192]: time="2025-05-17T00:41:58.626716711Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:58.629107 env[1192]: time="2025-05-17T00:41:58.629063094Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:58.631233 kubelet[1541]: W0517 00:41:58.631172 1541 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://64.23.148.252:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 64.23.148.252:6443: connect: connection refused May 17 00:41:58.631360 kubelet[1541]: E0517 00:41:58.631250 1541 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://64.23.148.252:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 64.23.148.252:6443: connect: connection refused" logger="UnhandledError" May 17 00:41:58.633183 env[1192]: time="2025-05-17T00:41:58.633136895Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:58.636080 env[1192]: time="2025-05-17T00:41:58.636025327Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:58.639208 env[1192]: time="2025-05-17T00:41:58.639161206Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:58.641053 env[1192]: time="2025-05-17T00:41:58.641007601Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:58.642316 env[1192]: time="2025-05-17T00:41:58.642272886Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:58.643563 env[1192]: time="2025-05-17T00:41:58.643523266Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:58.649539 env[1192]: time="2025-05-17T00:41:58.647890672Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:58.676802 env[1192]: time="2025-05-17T00:41:58.676722005Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:41:58.677045 env[1192]: time="2025-05-17T00:41:58.677015341Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:41:58.677158 env[1192]: time="2025-05-17T00:41:58.677128762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:41:58.677477 env[1192]: time="2025-05-17T00:41:58.677383962Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/00ee566a1564ace2b0892aacc0ab11444c954f4ebcb2c9f0d76e583a22109d7b pid=1585 runtime=io.containerd.runc.v2 May 17 00:41:58.679937 env[1192]: time="2025-05-17T00:41:58.677732594Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:41:58.679937 env[1192]: time="2025-05-17T00:41:58.677800699Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:41:58.679937 env[1192]: time="2025-05-17T00:41:58.677811712Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:41:58.679937 env[1192]: time="2025-05-17T00:41:58.678004473Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/53d87935f416b8f735bdb1196d53bdc39120384e71f35d5faf8e754ee250baef pid=1601 runtime=io.containerd.runc.v2 May 17 00:41:58.690513 env[1192]: time="2025-05-17T00:41:58.690400657Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:41:58.690832 env[1192]: time="2025-05-17T00:41:58.690777541Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:41:58.691008 env[1192]: time="2025-05-17T00:41:58.690980083Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:41:58.691372 env[1192]: time="2025-05-17T00:41:58.691316149Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/998fc2b1afef913bf8a5e1f4ce144e9ce5806c6ee094cbd6e78c2c06e684982d pid=1606 runtime=io.containerd.runc.v2 May 17 00:41:58.718959 systemd[1]: Started cri-containerd-00ee566a1564ace2b0892aacc0ab11444c954f4ebcb2c9f0d76e583a22109d7b.scope. May 17 00:41:58.721122 systemd[1]: Started cri-containerd-53d87935f416b8f735bdb1196d53bdc39120384e71f35d5faf8e754ee250baef.scope. 
May 17 00:41:58.734092 kubelet[1541]: W0517 00:41:58.727618 1541 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://64.23.148.252:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.148.252:6443: connect: connection refused May 17 00:41:58.734092 kubelet[1541]: E0517 00:41:58.727910 1541 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://64.23.148.252:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 64.23.148.252:6443: connect: connection refused" logger="UnhandledError" May 17 00:41:58.731434 systemd[1]: Started cri-containerd-998fc2b1afef913bf8a5e1f4ce144e9ce5806c6ee094cbd6e78c2c06e684982d.scope. May 17 00:41:58.802835 env[1192]: time="2025-05-17T00:41:58.802781357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.7-n-8f6d6c1823,Uid:21f70989e7e5d8e5c7a9f7a30d87c368,Namespace:kube-system,Attempt:0,} returns sandbox id \"53d87935f416b8f735bdb1196d53bdc39120384e71f35d5faf8e754ee250baef\"" May 17 00:41:58.804102 kubelet[1541]: E0517 00:41:58.804068 1541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:41:58.806241 env[1192]: time="2025-05-17T00:41:58.806191121Z" level=info msg="CreateContainer within sandbox \"53d87935f416b8f735bdb1196d53bdc39120384e71f35d5faf8e754ee250baef\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 17 00:41:58.827372 env[1192]: time="2025-05-17T00:41:58.827000528Z" level=info msg="CreateContainer within sandbox \"53d87935f416b8f735bdb1196d53bdc39120384e71f35d5faf8e754ee250baef\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a22572d6c4836b885af61bc8216888d255c8a21e0af39f4e5837ba5e7f75a668\"" 
May 17 00:41:58.827851 env[1192]: time="2025-05-17T00:41:58.827811858Z" level=info msg="StartContainer for \"a22572d6c4836b885af61bc8216888d255c8a21e0af39f4e5837ba5e7f75a668\"" May 17 00:41:58.830254 env[1192]: time="2025-05-17T00:41:58.830218582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.7-n-8f6d6c1823,Uid:f58f4f44f7fedb5f4f78bf56605b6501,Namespace:kube-system,Attempt:0,} returns sandbox id \"00ee566a1564ace2b0892aacc0ab11444c954f4ebcb2c9f0d76e583a22109d7b\"" May 17 00:41:58.831067 kubelet[1541]: E0517 00:41:58.831033 1541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:41:58.832619 env[1192]: time="2025-05-17T00:41:58.832585142Z" level=info msg="CreateContainer within sandbox \"00ee566a1564ace2b0892aacc0ab11444c954f4ebcb2c9f0d76e583a22109d7b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 17 00:41:58.845950 env[1192]: time="2025-05-17T00:41:58.845900334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.7-n-8f6d6c1823,Uid:945b82000866647f4f95378a63f7621c,Namespace:kube-system,Attempt:0,} returns sandbox id \"998fc2b1afef913bf8a5e1f4ce144e9ce5806c6ee094cbd6e78c2c06e684982d\"" May 17 00:41:58.847003 kubelet[1541]: E0517 00:41:58.846973 1541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:41:58.848814 env[1192]: time="2025-05-17T00:41:58.848777216Z" level=info msg="CreateContainer within sandbox \"998fc2b1afef913bf8a5e1f4ce144e9ce5806c6ee094cbd6e78c2c06e684982d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 17 00:41:58.849349 env[1192]: time="2025-05-17T00:41:58.849319476Z" level=info msg="CreateContainer within sandbox 
\"00ee566a1564ace2b0892aacc0ab11444c954f4ebcb2c9f0d76e583a22109d7b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0d6fd684fec0ae9c3cf8417c1d2ce9de2920824a60ded1511926a5dbebac0a54\"" May 17 00:41:58.849875 env[1192]: time="2025-05-17T00:41:58.849834615Z" level=info msg="StartContainer for \"0d6fd684fec0ae9c3cf8417c1d2ce9de2920824a60ded1511926a5dbebac0a54\"" May 17 00:41:58.869746 systemd[1]: Started cri-containerd-a22572d6c4836b885af61bc8216888d255c8a21e0af39f4e5837ba5e7f75a668.scope. May 17 00:41:58.877465 env[1192]: time="2025-05-17T00:41:58.877365108Z" level=info msg="CreateContainer within sandbox \"998fc2b1afef913bf8a5e1f4ce144e9ce5806c6ee094cbd6e78c2c06e684982d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3a176c685a6516fc61a2f2f3ca16bbbcd33277290bbe1aac8a17c3d320cb0f10\"" May 17 00:41:58.878678 env[1192]: time="2025-05-17T00:41:58.878569201Z" level=info msg="StartContainer for \"3a176c685a6516fc61a2f2f3ca16bbbcd33277290bbe1aac8a17c3d320cb0f10\"" May 17 00:41:58.895597 systemd[1]: Started cri-containerd-0d6fd684fec0ae9c3cf8417c1d2ce9de2920824a60ded1511926a5dbebac0a54.scope. May 17 00:41:58.944741 systemd[1]: Started cri-containerd-3a176c685a6516fc61a2f2f3ca16bbbcd33277290bbe1aac8a17c3d320cb0f10.scope. 
May 17 00:41:58.948281 env[1192]: time="2025-05-17T00:41:58.947816175Z" level=info msg="StartContainer for \"a22572d6c4836b885af61bc8216888d255c8a21e0af39f4e5837ba5e7f75a668\" returns successfully" May 17 00:41:59.002078 env[1192]: time="2025-05-17T00:41:59.002012469Z" level=info msg="StartContainer for \"0d6fd684fec0ae9c3cf8417c1d2ce9de2920824a60ded1511926a5dbebac0a54\" returns successfully" May 17 00:41:59.037308 env[1192]: time="2025-05-17T00:41:59.037243653Z" level=info msg="StartContainer for \"3a176c685a6516fc61a2f2f3ca16bbbcd33277290bbe1aac8a17c3d320cb0f10\" returns successfully" May 17 00:41:59.102857 kubelet[1541]: W0517 00:41:59.102555 1541 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://64.23.148.252:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-n-8f6d6c1823&limit=500&resourceVersion=0": dial tcp 64.23.148.252:6443: connect: connection refused May 17 00:41:59.102857 kubelet[1541]: E0517 00:41:59.102683 1541 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://64.23.148.252:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-n-8f6d6c1823&limit=500&resourceVersion=0\": dial tcp 64.23.148.252:6443: connect: connection refused" logger="UnhandledError" May 17 00:41:59.121793 kubelet[1541]: E0517 00:41:59.121737 1541 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.148.252:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-n-8f6d6c1823?timeout=10s\": dial tcp 64.23.148.252:6443: connect: connection refused" interval="1.6s" May 17 00:41:59.287750 kubelet[1541]: I0517 00:41:59.287677 1541 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.7-n-8f6d6c1823" May 17 00:41:59.288550 kubelet[1541]: E0517 00:41:59.288492 1541 kubelet_node_status.go:107] "Unable to register node with API server" err="Post 
\"https://64.23.148.252:6443/api/v1/nodes\": dial tcp 64.23.148.252:6443: connect: connection refused" node="ci-3510.3.7-n-8f6d6c1823" May 17 00:41:59.323781 kubelet[1541]: W0517 00:41:59.323706 1541 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://64.23.148.252:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.148.252:6443: connect: connection refused May 17 00:41:59.324084 kubelet[1541]: E0517 00:41:59.324047 1541 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://64.23.148.252:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 64.23.148.252:6443: connect: connection refused" logger="UnhandledError" May 17 00:41:59.761171 kubelet[1541]: E0517 00:41:59.761129 1541 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.7-n-8f6d6c1823\" not found" node="ci-3510.3.7-n-8f6d6c1823" May 17 00:41:59.761561 kubelet[1541]: E0517 00:41:59.761256 1541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:41:59.763847 kubelet[1541]: E0517 00:41:59.763807 1541 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.7-n-8f6d6c1823\" not found" node="ci-3510.3.7-n-8f6d6c1823" May 17 00:41:59.764031 kubelet[1541]: E0517 00:41:59.763975 1541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:41:59.765465 kubelet[1541]: E0517 00:41:59.765409 1541 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" 
err="node \"ci-3510.3.7-n-8f6d6c1823\" not found" node="ci-3510.3.7-n-8f6d6c1823" May 17 00:41:59.765831 kubelet[1541]: E0517 00:41:59.765810 1541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:00.767993 kubelet[1541]: E0517 00:42:00.767957 1541 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.7-n-8f6d6c1823\" not found" node="ci-3510.3.7-n-8f6d6c1823" May 17 00:42:00.768689 kubelet[1541]: E0517 00:42:00.768652 1541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:00.769328 kubelet[1541]: E0517 00:42:00.769236 1541 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.7-n-8f6d6c1823\" not found" node="ci-3510.3.7-n-8f6d6c1823" May 17 00:42:00.769608 kubelet[1541]: E0517 00:42:00.769592 1541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:00.890583 kubelet[1541]: I0517 00:42:00.890541 1541 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.7-n-8f6d6c1823" May 17 00:42:01.546154 kubelet[1541]: E0517 00:42:01.546091 1541 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.7-n-8f6d6c1823\" not found" node="ci-3510.3.7-n-8f6d6c1823" May 17 00:42:01.678231 kubelet[1541]: I0517 00:42:01.678186 1541 kubelet_node_status.go:78] "Successfully registered node" node="ci-3510.3.7-n-8f6d6c1823" May 17 00:42:01.691455 kubelet[1541]: I0517 00:42:01.691428 1541 apiserver.go:52] "Watching apiserver" May 17 00:42:01.717736 
kubelet[1541]: I0517 00:42:01.717684 1541 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.7-n-8f6d6c1823" May 17 00:42:01.722465 kubelet[1541]: I0517 00:42:01.722422 1541 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 17 00:42:01.777275 kubelet[1541]: E0517 00:42:01.777219 1541 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.7-n-8f6d6c1823\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510.3.7-n-8f6d6c1823" May 17 00:42:01.777854 kubelet[1541]: I0517 00:42:01.777273 1541 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.7-n-8f6d6c1823" May 17 00:42:01.780821 kubelet[1541]: E0517 00:42:01.780733 1541 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-3510.3.7-n-8f6d6c1823\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-3510.3.7-n-8f6d6c1823" May 17 00:42:01.780821 kubelet[1541]: I0517 00:42:01.780808 1541 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.7-n-8f6d6c1823" May 17 00:42:01.788621 kubelet[1541]: E0517 00:42:01.788548 1541 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.7-n-8f6d6c1823\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-3510.3.7-n-8f6d6c1823" May 17 00:42:01.855407 kubelet[1541]: I0517 00:42:01.855032 1541 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.7-n-8f6d6c1823" May 17 00:42:01.859399 kubelet[1541]: E0517 00:42:01.859329 1541 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.7-n-8f6d6c1823\" is forbidden: no PriorityClass with name 
system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510.3.7-n-8f6d6c1823" May 17 00:42:01.859719 kubelet[1541]: E0517 00:42:01.859669 1541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:03.967426 systemd[1]: Reloading. May 17 00:42:04.053824 kubelet[1541]: I0517 00:42:04.053308 1541 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.7-n-8f6d6c1823" May 17 00:42:04.060693 kubelet[1541]: W0517 00:42:04.059961 1541 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 00:42:04.060693 kubelet[1541]: E0517 00:42:04.060251 1541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:04.077857 /usr/lib/systemd/system-generators/torcx-generator[1833]: time="2025-05-17T00:42:04Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 00:42:04.078381 /usr/lib/systemd/system-generators/torcx-generator[1833]: time="2025-05-17T00:42:04Z" level=info msg="torcx already run" May 17 00:42:04.170769 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:42:04.170794 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
May 17 00:42:04.193184 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:42:04.318579 systemd[1]: Stopping kubelet.service... May 17 00:42:04.340582 systemd[1]: kubelet.service: Deactivated successfully. May 17 00:42:04.341004 systemd[1]: Stopped kubelet.service. May 17 00:42:04.341198 systemd[1]: kubelet.service: Consumed 1.148s CPU time. May 17 00:42:04.344139 systemd[1]: Starting kubelet.service... May 17 00:42:05.532453 systemd[1]: Started kubelet.service. May 17 00:42:05.657792 sudo[1893]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 17 00:42:05.658171 sudo[1893]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) May 17 00:42:05.664493 kubelet[1882]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:42:05.664493 kubelet[1882]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 17 00:42:05.664493 kubelet[1882]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 17 00:42:05.665138 kubelet[1882]: I0517 00:42:05.664632 1882 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:42:05.678767 kubelet[1882]: I0517 00:42:05.678716 1882 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 17 00:42:05.679069 kubelet[1882]: I0517 00:42:05.679049 1882 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:42:05.679606 kubelet[1882]: I0517 00:42:05.679584 1882 server.go:954] "Client rotation is on, will bootstrap in background" May 17 00:42:05.684171 kubelet[1882]: I0517 00:42:05.684123 1882 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 17 00:42:05.697124 kubelet[1882]: I0517 00:42:05.697070 1882 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:42:05.714976 kubelet[1882]: E0517 00:42:05.714924 1882 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:42:05.715617 kubelet[1882]: I0517 00:42:05.715590 1882 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:42:05.721701 kubelet[1882]: I0517 00:42:05.721650 1882 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 17 00:42:05.722790 kubelet[1882]: I0517 00:42:05.722724 1882 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:42:05.723487 kubelet[1882]: I0517 00:42:05.723046 1882 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.7-n-8f6d6c1823","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 17 00:42:05.723796 kubelet[1882]: I0517 00:42:05.723777 1882 topology_manager.go:138] "Creating topology manager 
with none policy" May 17 00:42:05.723909 kubelet[1882]: I0517 00:42:05.723898 1882 container_manager_linux.go:304] "Creating device plugin manager" May 17 00:42:05.724080 kubelet[1882]: I0517 00:42:05.724064 1882 state_mem.go:36] "Initialized new in-memory state store" May 17 00:42:05.724472 kubelet[1882]: I0517 00:42:05.724454 1882 kubelet.go:446] "Attempting to sync node with API server" May 17 00:42:05.725658 kubelet[1882]: I0517 00:42:05.725636 1882 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:42:05.733644 kubelet[1882]: I0517 00:42:05.733610 1882 kubelet.go:352] "Adding apiserver pod source" May 17 00:42:05.733880 kubelet[1882]: I0517 00:42:05.733854 1882 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:42:05.736301 kubelet[1882]: I0517 00:42:05.736274 1882 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 17 00:42:05.737481 kubelet[1882]: I0517 00:42:05.737450 1882 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:42:05.739548 kubelet[1882]: I0517 00:42:05.739060 1882 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 17 00:42:05.739791 kubelet[1882]: I0517 00:42:05.739770 1882 server.go:1287] "Started kubelet" May 17 00:42:05.767952 kubelet[1882]: I0517 00:42:05.766057 1882 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:42:05.767952 kubelet[1882]: I0517 00:42:05.766500 1882 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:42:05.767952 kubelet[1882]: I0517 00:42:05.766566 1882 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:42:05.768971 kubelet[1882]: E0517 00:42:05.768904 1882 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:42:05.769854 kubelet[1882]: I0517 00:42:05.769828 1882 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:42:05.770308 kubelet[1882]: I0517 00:42:05.770283 1882 server.go:479] "Adding debug handlers to kubelet server" May 17 00:42:05.785176 kubelet[1882]: I0517 00:42:05.783248 1882 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:42:05.786620 kubelet[1882]: I0517 00:42:05.786587 1882 volume_manager.go:297] "Starting Kubelet Volume Manager" May 17 00:42:05.789655 kubelet[1882]: I0517 00:42:05.789622 1882 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 17 00:42:05.790022 kubelet[1882]: I0517 00:42:05.789990 1882 reconciler.go:26] "Reconciler: start to sync state" May 17 00:42:05.792746 kubelet[1882]: I0517 00:42:05.792694 1882 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 00:42:05.794605 kubelet[1882]: I0517 00:42:05.794569 1882 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 17 00:42:05.795963 kubelet[1882]: I0517 00:42:05.795937 1882 status_manager.go:227] "Starting to sync pod status with apiserver" May 17 00:42:05.796122 kubelet[1882]: I0517 00:42:05.796109 1882 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 17 00:42:05.796201 kubelet[1882]: I0517 00:42:05.796190 1882 kubelet.go:2382] "Starting kubelet main sync loop" May 17 00:42:05.796462 kubelet[1882]: E0517 00:42:05.796441 1882 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:42:05.805749 kubelet[1882]: I0517 00:42:05.805703 1882 factory.go:221] Registration of the systemd container factory successfully May 17 00:42:05.806430 kubelet[1882]: I0517 00:42:05.806384 1882 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:42:05.811294 kubelet[1882]: I0517 00:42:05.811266 1882 factory.go:221] Registration of the containerd container factory successfully May 17 00:42:05.900365 kubelet[1882]: E0517 00:42:05.900292 1882 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 17 00:42:05.932961 kubelet[1882]: I0517 00:42:05.932311 1882 cpu_manager.go:221] "Starting CPU manager" policy="none" May 17 00:42:05.932961 kubelet[1882]: I0517 00:42:05.932337 1882 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 17 00:42:05.932961 kubelet[1882]: I0517 00:42:05.932379 1882 state_mem.go:36] "Initialized new in-memory state store" May 17 00:42:05.932961 kubelet[1882]: I0517 00:42:05.932738 1882 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 17 00:42:05.932961 kubelet[1882]: I0517 00:42:05.932793 1882 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 17 00:42:05.932961 kubelet[1882]: I0517 00:42:05.932829 1882 policy_none.go:49] "None policy: Start" May 17 00:42:05.932961 kubelet[1882]: I0517 00:42:05.932954 1882 memory_manager.go:186] "Starting memorymanager" policy="None" May 17 00:42:05.932961 kubelet[1882]: I0517 00:42:05.932976 1882 state_mem.go:35] "Initializing new 
in-memory state store" May 17 00:42:05.933451 kubelet[1882]: I0517 00:42:05.933253 1882 state_mem.go:75] "Updated machine memory state" May 17 00:42:05.942046 kubelet[1882]: I0517 00:42:05.942010 1882 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:42:05.942391 kubelet[1882]: I0517 00:42:05.942359 1882 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:42:05.942462 kubelet[1882]: I0517 00:42:05.942383 1882 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:42:05.945184 kubelet[1882]: I0517 00:42:05.945150 1882 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:42:05.950342 kubelet[1882]: E0517 00:42:05.950304 1882 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 17 00:42:06.049020 kubelet[1882]: I0517 00:42:06.048849 1882 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.7-n-8f6d6c1823" May 17 00:42:06.061673 kubelet[1882]: I0517 00:42:06.061619 1882 kubelet_node_status.go:124] "Node was previously registered" node="ci-3510.3.7-n-8f6d6c1823" May 17 00:42:06.061950 kubelet[1882]: I0517 00:42:06.061723 1882 kubelet_node_status.go:78] "Successfully registered node" node="ci-3510.3.7-n-8f6d6c1823" May 17 00:42:06.104700 kubelet[1882]: I0517 00:42:06.104632 1882 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.7-n-8f6d6c1823" May 17 00:42:06.104985 kubelet[1882]: I0517 00:42:06.104955 1882 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.7-n-8f6d6c1823" May 17 00:42:06.105675 kubelet[1882]: I0517 00:42:06.105641 1882 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.7-n-8f6d6c1823" May 17 
00:42:06.113568 kubelet[1882]: W0517 00:42:06.113502 1882 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 00:42:06.116565 kubelet[1882]: W0517 00:42:06.116485 1882 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 00:42:06.120147 kubelet[1882]: W0517 00:42:06.119970 1882 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 00:42:06.120393 kubelet[1882]: E0517 00:42:06.120247 1882 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-3510.3.7-n-8f6d6c1823\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.7-n-8f6d6c1823" May 17 00:42:06.193113 kubelet[1882]: I0517 00:42:06.193042 1882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/945b82000866647f4f95378a63f7621c-kubeconfig\") pod \"kube-scheduler-ci-3510.3.7-n-8f6d6c1823\" (UID: \"945b82000866647f4f95378a63f7621c\") " pod="kube-system/kube-scheduler-ci-3510.3.7-n-8f6d6c1823" May 17 00:42:06.193571 kubelet[1882]: I0517 00:42:06.193520 1882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/21f70989e7e5d8e5c7a9f7a30d87c368-ca-certs\") pod \"kube-apiserver-ci-3510.3.7-n-8f6d6c1823\" (UID: \"21f70989e7e5d8e5c7a9f7a30d87c368\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-8f6d6c1823" May 17 00:42:06.193780 kubelet[1882]: I0517 00:42:06.193757 1882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/21f70989e7e5d8e5c7a9f7a30d87c368-k8s-certs\") pod \"kube-apiserver-ci-3510.3.7-n-8f6d6c1823\" (UID: \"21f70989e7e5d8e5c7a9f7a30d87c368\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-8f6d6c1823" May 17 00:42:06.193965 kubelet[1882]: I0517 00:42:06.193921 1882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f58f4f44f7fedb5f4f78bf56605b6501-ca-certs\") pod \"kube-controller-manager-ci-3510.3.7-n-8f6d6c1823\" (UID: \"f58f4f44f7fedb5f4f78bf56605b6501\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-8f6d6c1823" May 17 00:42:06.194106 kubelet[1882]: I0517 00:42:06.194085 1882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f58f4f44f7fedb5f4f78bf56605b6501-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.7-n-8f6d6c1823\" (UID: \"f58f4f44f7fedb5f4f78bf56605b6501\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-8f6d6c1823" May 17 00:42:06.194274 kubelet[1882]: I0517 00:42:06.194251 1882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f58f4f44f7fedb5f4f78bf56605b6501-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.7-n-8f6d6c1823\" (UID: \"f58f4f44f7fedb5f4f78bf56605b6501\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-8f6d6c1823" May 17 00:42:06.194393 kubelet[1882]: I0517 00:42:06.194373 1882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/21f70989e7e5d8e5c7a9f7a30d87c368-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.7-n-8f6d6c1823\" (UID: \"21f70989e7e5d8e5c7a9f7a30d87c368\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-8f6d6c1823" May 
17 00:42:06.194531 kubelet[1882]: I0517 00:42:06.194509 1882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f58f4f44f7fedb5f4f78bf56605b6501-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.7-n-8f6d6c1823\" (UID: \"f58f4f44f7fedb5f4f78bf56605b6501\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-8f6d6c1823" May 17 00:42:06.194648 kubelet[1882]: I0517 00:42:06.194629 1882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f58f4f44f7fedb5f4f78bf56605b6501-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.7-n-8f6d6c1823\" (UID: \"f58f4f44f7fedb5f4f78bf56605b6501\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-8f6d6c1823" May 17 00:42:06.415064 kubelet[1882]: E0517 00:42:06.414945 1882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:06.418142 kubelet[1882]: E0517 00:42:06.418090 1882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:06.421637 kubelet[1882]: E0517 00:42:06.421592 1882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:06.575305 sudo[1893]: pam_unix(sudo:session): session closed for user root May 17 00:42:06.735144 kubelet[1882]: I0517 00:42:06.734949 1882 apiserver.go:52] "Watching apiserver" May 17 00:42:06.790320 kubelet[1882]: I0517 00:42:06.790261 1882 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 17 00:42:06.887282 kubelet[1882]: I0517 
00:42:06.887249 1882 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.7-n-8f6d6c1823" May 17 00:42:06.887470 kubelet[1882]: I0517 00:42:06.887453 1882 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.7-n-8f6d6c1823" May 17 00:42:06.887700 kubelet[1882]: I0517 00:42:06.887673 1882 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.7-n-8f6d6c1823" May 17 00:42:06.904491 kubelet[1882]: I0517 00:42:06.904396 1882 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.7-n-8f6d6c1823" podStartSLOduration=0.904349969 podStartE2EDuration="904.349969ms" podCreationTimestamp="2025-05-17 00:42:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:42:06.860638228 +0000 UTC m=+1.313116971" watchObservedRunningTime="2025-05-17 00:42:06.904349969 +0000 UTC m=+1.356828703" May 17 00:42:06.911010 kubelet[1882]: W0517 00:42:06.910953 1882 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 00:42:06.911218 kubelet[1882]: E0517 00:42:06.911032 1882 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.7-n-8f6d6c1823\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.7-n-8f6d6c1823" May 17 00:42:06.911301 kubelet[1882]: E0517 00:42:06.911280 1882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:06.913426 kubelet[1882]: W0517 00:42:06.913383 1882 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must 
not contain dots] May 17 00:42:06.913679 kubelet[1882]: E0517 00:42:06.913662 1882 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.7-n-8f6d6c1823\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.7-n-8f6d6c1823" May 17 00:42:06.914026 kubelet[1882]: E0517 00:42:06.913993 1882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:06.914333 kubelet[1882]: W0517 00:42:06.914313 1882 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 00:42:06.914514 kubelet[1882]: E0517 00:42:06.914487 1882 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-3510.3.7-n-8f6d6c1823\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.7-n-8f6d6c1823" May 17 00:42:06.914810 kubelet[1882]: E0517 00:42:06.914792 1882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:06.934765 kubelet[1882]: I0517 00:42:06.934698 1882 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.7-n-8f6d6c1823" podStartSLOduration=0.934665619 podStartE2EDuration="934.665619ms" podCreationTimestamp="2025-05-17 00:42:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:42:06.905943123 +0000 UTC m=+1.358421865" watchObservedRunningTime="2025-05-17 00:42:06.934665619 +0000 UTC m=+1.387144361" May 17 00:42:06.952709 kubelet[1882]: I0517 00:42:06.952626 1882 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-controller-manager-ci-3510.3.7-n-8f6d6c1823" podStartSLOduration=2.95260199 podStartE2EDuration="2.95260199s" podCreationTimestamp="2025-05-17 00:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:42:06.9360151 +0000 UTC m=+1.388493842" watchObservedRunningTime="2025-05-17 00:42:06.95260199 +0000 UTC m=+1.405080735" May 17 00:42:07.890270 kubelet[1882]: E0517 00:42:07.890226 1882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:07.891099 kubelet[1882]: E0517 00:42:07.891069 1882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:07.894208 kubelet[1882]: E0517 00:42:07.894135 1882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:08.664126 sudo[1291]: pam_unix(sudo:session): session closed for user root May 17 00:42:08.669003 sshd[1288]: pam_unix(sshd:session): session closed for user core May 17 00:42:08.673119 systemd[1]: sshd@4-64.23.148.252:22-147.75.109.163:44240.service: Deactivated successfully. May 17 00:42:08.674555 systemd[1]: session-5.scope: Deactivated successfully. May 17 00:42:08.674764 systemd[1]: session-5.scope: Consumed 5.590s CPU time. May 17 00:42:08.676932 systemd-logind[1181]: Session 5 logged out. Waiting for processes to exit. May 17 00:42:08.678357 systemd-logind[1181]: Removed session 5. 
May 17 00:42:08.892255 kubelet[1882]: E0517 00:42:08.892200 1882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:09.362547 kubelet[1882]: I0517 00:42:09.362517 1882 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 17 00:42:09.363270 env[1192]: time="2025-05-17T00:42:09.363221116Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 17 00:42:09.363938 kubelet[1882]: I0517 00:42:09.363916 1882 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 17 00:42:10.112118 systemd[1]: Created slice kubepods-besteffort-pod8a0c437c_c026_4a18_aaea_2b2694df5cfc.slice. May 17 00:42:10.124153 kubelet[1882]: I0517 00:42:10.123232 1882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8a0c437c-c026-4a18-aaea-2b2694df5cfc-kube-proxy\") pod \"kube-proxy-cwwvh\" (UID: \"8a0c437c-c026-4a18-aaea-2b2694df5cfc\") " pod="kube-system/kube-proxy-cwwvh" May 17 00:42:10.124153 kubelet[1882]: I0517 00:42:10.123283 1882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8a0c437c-c026-4a18-aaea-2b2694df5cfc-xtables-lock\") pod \"kube-proxy-cwwvh\" (UID: \"8a0c437c-c026-4a18-aaea-2b2694df5cfc\") " pod="kube-system/kube-proxy-cwwvh" May 17 00:42:10.124153 kubelet[1882]: I0517 00:42:10.123308 1882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bq9j6\" (UniqueName: \"kubernetes.io/projected/8a0c437c-c026-4a18-aaea-2b2694df5cfc-kube-api-access-bq9j6\") pod \"kube-proxy-cwwvh\" (UID: \"8a0c437c-c026-4a18-aaea-2b2694df5cfc\") " 
pod="kube-system/kube-proxy-cwwvh" May 17 00:42:10.124153 kubelet[1882]: I0517 00:42:10.123447 1882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8a0c437c-c026-4a18-aaea-2b2694df5cfc-lib-modules\") pod \"kube-proxy-cwwvh\" (UID: \"8a0c437c-c026-4a18-aaea-2b2694df5cfc\") " pod="kube-system/kube-proxy-cwwvh" May 17 00:42:10.123829 systemd[1]: Created slice kubepods-burstable-pod0b4da2ee_22cf_4708_9d35_28a92069bcc3.slice. May 17 00:42:10.224020 kubelet[1882]: I0517 00:42:10.223966 1882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0b4da2ee-22cf-4708-9d35-28a92069bcc3-cilium-config-path\") pod \"cilium-6cqzn\" (UID: \"0b4da2ee-22cf-4708-9d35-28a92069bcc3\") " pod="kube-system/cilium-6cqzn" May 17 00:42:10.224020 kubelet[1882]: I0517 00:42:10.224015 1882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0b4da2ee-22cf-4708-9d35-28a92069bcc3-cilium-run\") pod \"cilium-6cqzn\" (UID: \"0b4da2ee-22cf-4708-9d35-28a92069bcc3\") " pod="kube-system/cilium-6cqzn" May 17 00:42:10.224282 kubelet[1882]: I0517 00:42:10.224041 1882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0b4da2ee-22cf-4708-9d35-28a92069bcc3-etc-cni-netd\") pod \"cilium-6cqzn\" (UID: \"0b4da2ee-22cf-4708-9d35-28a92069bcc3\") " pod="kube-system/cilium-6cqzn" May 17 00:42:10.224282 kubelet[1882]: I0517 00:42:10.224061 1882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0b4da2ee-22cf-4708-9d35-28a92069bcc3-host-proc-sys-net\") pod \"cilium-6cqzn\" (UID: \"0b4da2ee-22cf-4708-9d35-28a92069bcc3\") " 
pod="kube-system/cilium-6cqzn" May 17 00:42:10.224282 kubelet[1882]: I0517 00:42:10.224083 1882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0b4da2ee-22cf-4708-9d35-28a92069bcc3-host-proc-sys-kernel\") pod \"cilium-6cqzn\" (UID: \"0b4da2ee-22cf-4708-9d35-28a92069bcc3\") " pod="kube-system/cilium-6cqzn" May 17 00:42:10.224282 kubelet[1882]: I0517 00:42:10.224132 1882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99qzr\" (UniqueName: \"kubernetes.io/projected/0b4da2ee-22cf-4708-9d35-28a92069bcc3-kube-api-access-99qzr\") pod \"cilium-6cqzn\" (UID: \"0b4da2ee-22cf-4708-9d35-28a92069bcc3\") " pod="kube-system/cilium-6cqzn" May 17 00:42:10.224282 kubelet[1882]: I0517 00:42:10.224153 1882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0b4da2ee-22cf-4708-9d35-28a92069bcc3-cni-path\") pod \"cilium-6cqzn\" (UID: \"0b4da2ee-22cf-4708-9d35-28a92069bcc3\") " pod="kube-system/cilium-6cqzn" May 17 00:42:10.224282 kubelet[1882]: I0517 00:42:10.224188 1882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0b4da2ee-22cf-4708-9d35-28a92069bcc3-cilium-cgroup\") pod \"cilium-6cqzn\" (UID: \"0b4da2ee-22cf-4708-9d35-28a92069bcc3\") " pod="kube-system/cilium-6cqzn" May 17 00:42:10.224467 kubelet[1882]: I0517 00:42:10.224205 1882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0b4da2ee-22cf-4708-9d35-28a92069bcc3-clustermesh-secrets\") pod \"cilium-6cqzn\" (UID: \"0b4da2ee-22cf-4708-9d35-28a92069bcc3\") " pod="kube-system/cilium-6cqzn" May 17 00:42:10.224467 kubelet[1882]: I0517 00:42:10.224226 1882 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b4da2ee-22cf-4708-9d35-28a92069bcc3-xtables-lock\") pod \"cilium-6cqzn\" (UID: \"0b4da2ee-22cf-4708-9d35-28a92069bcc3\") " pod="kube-system/cilium-6cqzn" May 17 00:42:10.224467 kubelet[1882]: I0517 00:42:10.224248 1882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0b4da2ee-22cf-4708-9d35-28a92069bcc3-hubble-tls\") pod \"cilium-6cqzn\" (UID: \"0b4da2ee-22cf-4708-9d35-28a92069bcc3\") " pod="kube-system/cilium-6cqzn" May 17 00:42:10.224467 kubelet[1882]: I0517 00:42:10.224277 1882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0b4da2ee-22cf-4708-9d35-28a92069bcc3-bpf-maps\") pod \"cilium-6cqzn\" (UID: \"0b4da2ee-22cf-4708-9d35-28a92069bcc3\") " pod="kube-system/cilium-6cqzn" May 17 00:42:10.224602 kubelet[1882]: I0517 00:42:10.224291 1882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0b4da2ee-22cf-4708-9d35-28a92069bcc3-hostproc\") pod \"cilium-6cqzn\" (UID: \"0b4da2ee-22cf-4708-9d35-28a92069bcc3\") " pod="kube-system/cilium-6cqzn" May 17 00:42:10.224648 kubelet[1882]: I0517 00:42:10.224627 1882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b4da2ee-22cf-4708-9d35-28a92069bcc3-lib-modules\") pod \"cilium-6cqzn\" (UID: \"0b4da2ee-22cf-4708-9d35-28a92069bcc3\") " pod="kube-system/cilium-6cqzn" May 17 00:42:10.238990 kubelet[1882]: I0517 00:42:10.238945 1882 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" May 17 00:42:10.419918 kubelet[1882]: E0517 00:42:10.419778 1882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:10.421465 env[1192]: time="2025-05-17T00:42:10.420725152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cwwvh,Uid:8a0c437c-c026-4a18-aaea-2b2694df5cfc,Namespace:kube-system,Attempt:0,}" May 17 00:42:10.430881 kubelet[1882]: E0517 00:42:10.430840 1882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:10.431785 env[1192]: time="2025-05-17T00:42:10.431746565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6cqzn,Uid:0b4da2ee-22cf-4708-9d35-28a92069bcc3,Namespace:kube-system,Attempt:0,}" May 17 00:42:10.443805 env[1192]: time="2025-05-17T00:42:10.443708132Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:42:10.443985 env[1192]: time="2025-05-17T00:42:10.443808137Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:42:10.443985 env[1192]: time="2025-05-17T00:42:10.443833638Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:42:10.444054 env[1192]: time="2025-05-17T00:42:10.444014227Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b0db74c167143103de6805a6b575518d08437fd0989eb38458a7ccc8363ad6a0 pid=1964 runtime=io.containerd.runc.v2 May 17 00:42:10.461433 env[1192]: time="2025-05-17T00:42:10.461338729Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:42:10.461433 env[1192]: time="2025-05-17T00:42:10.461379764Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:42:10.461433 env[1192]: time="2025-05-17T00:42:10.461390846Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:42:10.462042 env[1192]: time="2025-05-17T00:42:10.461979166Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/371d494756ae5b6d44ab8c2ecb6953bdc5c059adc2cc441005d65208bc4d5965 pid=1979 runtime=io.containerd.runc.v2 May 17 00:42:10.475635 systemd[1]: Started cri-containerd-b0db74c167143103de6805a6b575518d08437fd0989eb38458a7ccc8363ad6a0.scope. May 17 00:42:10.498415 systemd[1]: Started cri-containerd-371d494756ae5b6d44ab8c2ecb6953bdc5c059adc2cc441005d65208bc4d5965.scope. May 17 00:42:10.530504 systemd[1]: Created slice kubepods-besteffort-podbbdd99d3_5b1f_4179_baf7_f8dc6d49b781.slice. 
May 17 00:42:10.531923 kubelet[1882]: I0517 00:42:10.531822 1882 status_manager.go:890] "Failed to get status for pod" podUID="bbdd99d3-5b1f-4179-baf7-f8dc6d49b781" pod="kube-system/cilium-operator-6c4d7847fc-lj6tp" err="pods \"cilium-operator-6c4d7847fc-lj6tp\" is forbidden: User \"system:node:ci-3510.3.7-n-8f6d6c1823\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510.3.7-n-8f6d6c1823' and this object" May 17 00:42:10.564624 env[1192]: time="2025-05-17T00:42:10.564580170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cwwvh,Uid:8a0c437c-c026-4a18-aaea-2b2694df5cfc,Namespace:kube-system,Attempt:0,} returns sandbox id \"b0db74c167143103de6805a6b575518d08437fd0989eb38458a7ccc8363ad6a0\"" May 17 00:42:10.566492 kubelet[1882]: E0517 00:42:10.565961 1882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:10.570936 env[1192]: time="2025-05-17T00:42:10.570894168Z" level=info msg="CreateContainer within sandbox \"b0db74c167143103de6805a6b575518d08437fd0989eb38458a7ccc8363ad6a0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 17 00:42:10.588631 env[1192]: time="2025-05-17T00:42:10.588575332Z" level=info msg="CreateContainer within sandbox \"b0db74c167143103de6805a6b575518d08437fd0989eb38458a7ccc8363ad6a0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d996102e691e3aa9487741e98c3a3fc5239601facc2356a2e88a19134e8fe0e3\"" May 17 00:42:10.589744 env[1192]: time="2025-05-17T00:42:10.589702099Z" level=info msg="StartContainer for \"d996102e691e3aa9487741e98c3a3fc5239601facc2356a2e88a19134e8fe0e3\"" May 17 00:42:10.593703 env[1192]: time="2025-05-17T00:42:10.593566295Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-6cqzn,Uid:0b4da2ee-22cf-4708-9d35-28a92069bcc3,Namespace:kube-system,Attempt:0,} returns sandbox id \"371d494756ae5b6d44ab8c2ecb6953bdc5c059adc2cc441005d65208bc4d5965\"" May 17 00:42:10.594955 kubelet[1882]: E0517 00:42:10.594452 1882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:10.598360 env[1192]: time="2025-05-17T00:42:10.598314430Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 17 00:42:10.618497 systemd[1]: Started cri-containerd-d996102e691e3aa9487741e98c3a3fc5239601facc2356a2e88a19134e8fe0e3.scope. May 17 00:42:10.633091 kubelet[1882]: I0517 00:42:10.632933 1882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfj7d\" (UniqueName: \"kubernetes.io/projected/bbdd99d3-5b1f-4179-baf7-f8dc6d49b781-kube-api-access-bfj7d\") pod \"cilium-operator-6c4d7847fc-lj6tp\" (UID: \"bbdd99d3-5b1f-4179-baf7-f8dc6d49b781\") " pod="kube-system/cilium-operator-6c4d7847fc-lj6tp" May 17 00:42:10.633091 kubelet[1882]: I0517 00:42:10.633024 1882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bbdd99d3-5b1f-4179-baf7-f8dc6d49b781-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-lj6tp\" (UID: \"bbdd99d3-5b1f-4179-baf7-f8dc6d49b781\") " pod="kube-system/cilium-operator-6c4d7847fc-lj6tp" May 17 00:42:10.668311 env[1192]: time="2025-05-17T00:42:10.668249586Z" level=info msg="StartContainer for \"d996102e691e3aa9487741e98c3a3fc5239601facc2356a2e88a19134e8fe0e3\" returns successfully" May 17 00:42:10.834435 kubelet[1882]: E0517 00:42:10.834350 1882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:10.836265 env[1192]: time="2025-05-17T00:42:10.835185302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-lj6tp,Uid:bbdd99d3-5b1f-4179-baf7-f8dc6d49b781,Namespace:kube-system,Attempt:0,}" May 17 00:42:10.851475 env[1192]: time="2025-05-17T00:42:10.851361024Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:42:10.851837 env[1192]: time="2025-05-17T00:42:10.851800438Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:42:10.851978 env[1192]: time="2025-05-17T00:42:10.851953119Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:42:10.853723 env[1192]: time="2025-05-17T00:42:10.853664716Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f9c5c7f47b350a7546de7f8cef7705e8cac5fb634737a414088b0c2887c7d30e pid=2080 runtime=io.containerd.runc.v2 May 17 00:42:10.868724 systemd[1]: Started cri-containerd-f9c5c7f47b350a7546de7f8cef7705e8cac5fb634737a414088b0c2887c7d30e.scope. 
May 17 00:42:10.898423 kubelet[1882]: E0517 00:42:10.898382 1882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:10.913551 kubelet[1882]: I0517 00:42:10.913490 1882 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cwwvh" podStartSLOduration=0.913473452 podStartE2EDuration="913.473452ms" podCreationTimestamp="2025-05-17 00:42:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:42:10.911988469 +0000 UTC m=+5.364467211" watchObservedRunningTime="2025-05-17 00:42:10.913473452 +0000 UTC m=+5.365952191" May 17 00:42:10.949007 env[1192]: time="2025-05-17T00:42:10.948951981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-lj6tp,Uid:bbdd99d3-5b1f-4179-baf7-f8dc6d49b781,Namespace:kube-system,Attempt:0,} returns sandbox id \"f9c5c7f47b350a7546de7f8cef7705e8cac5fb634737a414088b0c2887c7d30e\"" May 17 00:42:10.949957 kubelet[1882]: E0517 00:42:10.949900 1882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:14.825067 kubelet[1882]: E0517 00:42:14.824791 1882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:14.910035 kubelet[1882]: E0517 00:42:14.909994 1882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:16.483347 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount201414655.mount: Deactivated 
successfully. May 17 00:42:16.508021 kubelet[1882]: E0517 00:42:16.507976 1882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:18.507386 kubelet[1882]: E0517 00:42:18.507334 1882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:19.665283 env[1192]: time="2025-05-17T00:42:19.665205216Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:19.667382 env[1192]: time="2025-05-17T00:42:19.667332373Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:19.669039 env[1192]: time="2025-05-17T00:42:19.668998917Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:19.670141 env[1192]: time="2025-05-17T00:42:19.670050598Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 17 00:42:19.678115 env[1192]: time="2025-05-17T00:42:19.675238130Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 17 00:42:19.681786 env[1192]: time="2025-05-17T00:42:19.681634542Z" level=info 
msg="CreateContainer within sandbox \"371d494756ae5b6d44ab8c2ecb6953bdc5c059adc2cc441005d65208bc4d5965\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 17 00:42:19.724080 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4110837565.mount: Deactivated successfully. May 17 00:42:19.733358 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount651098299.mount: Deactivated successfully. May 17 00:42:19.733979 env[1192]: time="2025-05-17T00:42:19.733438415Z" level=info msg="CreateContainer within sandbox \"371d494756ae5b6d44ab8c2ecb6953bdc5c059adc2cc441005d65208bc4d5965\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d5b9bec9a44f3d3cc71674acc85a5262292188a4e787bf40813083a9f53dd2d5\"" May 17 00:42:19.736280 env[1192]: time="2025-05-17T00:42:19.735838685Z" level=info msg="StartContainer for \"d5b9bec9a44f3d3cc71674acc85a5262292188a4e787bf40813083a9f53dd2d5\"" May 17 00:42:19.766389 systemd[1]: Started cri-containerd-d5b9bec9a44f3d3cc71674acc85a5262292188a4e787bf40813083a9f53dd2d5.scope. May 17 00:42:19.821389 env[1192]: time="2025-05-17T00:42:19.821059496Z" level=info msg="StartContainer for \"d5b9bec9a44f3d3cc71674acc85a5262292188a4e787bf40813083a9f53dd2d5\" returns successfully" May 17 00:42:19.835411 systemd[1]: cri-containerd-d5b9bec9a44f3d3cc71674acc85a5262292188a4e787bf40813083a9f53dd2d5.scope: Deactivated successfully. 
May 17 00:42:19.866368 env[1192]: time="2025-05-17T00:42:19.866305908Z" level=info msg="shim disconnected" id=d5b9bec9a44f3d3cc71674acc85a5262292188a4e787bf40813083a9f53dd2d5 May 17 00:42:19.866368 env[1192]: time="2025-05-17T00:42:19.866362808Z" level=warning msg="cleaning up after shim disconnected" id=d5b9bec9a44f3d3cc71674acc85a5262292188a4e787bf40813083a9f53dd2d5 namespace=k8s.io May 17 00:42:19.866368 env[1192]: time="2025-05-17T00:42:19.866376046Z" level=info msg="cleaning up dead shim" May 17 00:42:19.878020 env[1192]: time="2025-05-17T00:42:19.877956797Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:42:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2292 runtime=io.containerd.runc.v2\n" May 17 00:42:19.923143 kubelet[1882]: E0517 00:42:19.923015 1882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:19.928913 env[1192]: time="2025-05-17T00:42:19.928051257Z" level=info msg="CreateContainer within sandbox \"371d494756ae5b6d44ab8c2ecb6953bdc5c059adc2cc441005d65208bc4d5965\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 17 00:42:19.942401 env[1192]: time="2025-05-17T00:42:19.942329715Z" level=info msg="CreateContainer within sandbox \"371d494756ae5b6d44ab8c2ecb6953bdc5c059adc2cc441005d65208bc4d5965\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"65dcf16705435ef03b5fd4cfc274df01f66944a4b1c7f5f4886997433cb28265\"" May 17 00:42:19.945464 env[1192]: time="2025-05-17T00:42:19.945411180Z" level=info msg="StartContainer for \"65dcf16705435ef03b5fd4cfc274df01f66944a4b1c7f5f4886997433cb28265\"" May 17 00:42:19.982162 systemd[1]: Started cri-containerd-65dcf16705435ef03b5fd4cfc274df01f66944a4b1c7f5f4886997433cb28265.scope. 
May 17 00:42:20.028595 env[1192]: time="2025-05-17T00:42:20.028539130Z" level=info msg="StartContainer for \"65dcf16705435ef03b5fd4cfc274df01f66944a4b1c7f5f4886997433cb28265\" returns successfully" May 17 00:42:20.051448 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 17 00:42:20.052235 systemd[1]: Stopped systemd-sysctl.service. May 17 00:42:20.052575 systemd[1]: Stopping systemd-sysctl.service... May 17 00:42:20.056344 systemd[1]: Starting systemd-sysctl.service... May 17 00:42:20.056798 systemd[1]: cri-containerd-65dcf16705435ef03b5fd4cfc274df01f66944a4b1c7f5f4886997433cb28265.scope: Deactivated successfully. May 17 00:42:20.080163 systemd[1]: Finished systemd-sysctl.service. May 17 00:42:20.088774 env[1192]: time="2025-05-17T00:42:20.088714612Z" level=info msg="shim disconnected" id=65dcf16705435ef03b5fd4cfc274df01f66944a4b1c7f5f4886997433cb28265 May 17 00:42:20.089097 env[1192]: time="2025-05-17T00:42:20.089071487Z" level=warning msg="cleaning up after shim disconnected" id=65dcf16705435ef03b5fd4cfc274df01f66944a4b1c7f5f4886997433cb28265 namespace=k8s.io May 17 00:42:20.089239 env[1192]: time="2025-05-17T00:42:20.089215812Z" level=info msg="cleaning up dead shim" May 17 00:42:20.102271 env[1192]: time="2025-05-17T00:42:20.102217076Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:42:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2356 runtime=io.containerd.runc.v2\n" May 17 00:42:20.722962 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d5b9bec9a44f3d3cc71674acc85a5262292188a4e787bf40813083a9f53dd2d5-rootfs.mount: Deactivated successfully. May 17 00:42:20.840510 update_engine[1186]: I0517 00:42:20.840335 1186 update_attempter.cc:509] Updating boot flags... 
May 17 00:42:20.930770 kubelet[1882]: E0517 00:42:20.930732 1882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:20.935216 env[1192]: time="2025-05-17T00:42:20.934426618Z" level=info msg="CreateContainer within sandbox \"371d494756ae5b6d44ab8c2ecb6953bdc5c059adc2cc441005d65208bc4d5965\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 17 00:42:20.975620 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount424846784.mount: Deactivated successfully. May 17 00:42:21.012070 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount857737292.mount: Deactivated successfully. May 17 00:42:21.018559 env[1192]: time="2025-05-17T00:42:21.018496161Z" level=info msg="CreateContainer within sandbox \"371d494756ae5b6d44ab8c2ecb6953bdc5c059adc2cc441005d65208bc4d5965\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"bdb1b5d191c1a04c3161f152a98084a309b8353effaf530c6a8d6b52bc6ed6af\"" May 17 00:42:21.021667 env[1192]: time="2025-05-17T00:42:21.021597662Z" level=info msg="StartContainer for \"bdb1b5d191c1a04c3161f152a98084a309b8353effaf530c6a8d6b52bc6ed6af\"" May 17 00:42:21.057348 systemd[1]: Started cri-containerd-bdb1b5d191c1a04c3161f152a98084a309b8353effaf530c6a8d6b52bc6ed6af.scope. May 17 00:42:21.108989 systemd[1]: cri-containerd-bdb1b5d191c1a04c3161f152a98084a309b8353effaf530c6a8d6b52bc6ed6af.scope: Deactivated successfully. 
May 17 00:42:21.110359 env[1192]: time="2025-05-17T00:42:21.110292848Z" level=info msg="StartContainer for \"bdb1b5d191c1a04c3161f152a98084a309b8353effaf530c6a8d6b52bc6ed6af\" returns successfully" May 17 00:42:21.147227 env[1192]: time="2025-05-17T00:42:21.147176985Z" level=info msg="shim disconnected" id=bdb1b5d191c1a04c3161f152a98084a309b8353effaf530c6a8d6b52bc6ed6af May 17 00:42:21.147521 env[1192]: time="2025-05-17T00:42:21.147494708Z" level=warning msg="cleaning up after shim disconnected" id=bdb1b5d191c1a04c3161f152a98084a309b8353effaf530c6a8d6b52bc6ed6af namespace=k8s.io May 17 00:42:21.147626 env[1192]: time="2025-05-17T00:42:21.147608956Z" level=info msg="cleaning up dead shim" May 17 00:42:21.174775 env[1192]: time="2025-05-17T00:42:21.174720420Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:42:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2428 runtime=io.containerd.runc.v2\n" May 17 00:42:21.832964 env[1192]: time="2025-05-17T00:42:21.832900319Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:21.835500 env[1192]: time="2025-05-17T00:42:21.835455586Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:21.837553 env[1192]: time="2025-05-17T00:42:21.837494049Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:21.838633 env[1192]: time="2025-05-17T00:42:21.838573911Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 17 00:42:21.844248 env[1192]: time="2025-05-17T00:42:21.844189364Z" level=info msg="CreateContainer within sandbox \"f9c5c7f47b350a7546de7f8cef7705e8cac5fb634737a414088b0c2887c7d30e\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 17 00:42:21.859766 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2155953951.mount: Deactivated successfully. May 17 00:42:21.870796 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1165578394.mount: Deactivated successfully. May 17 00:42:21.874619 env[1192]: time="2025-05-17T00:42:21.874543657Z" level=info msg="CreateContainer within sandbox \"f9c5c7f47b350a7546de7f8cef7705e8cac5fb634737a414088b0c2887c7d30e\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"94fcfbba743e1b37760189e087c694721009c5acbda9f3b15cbafc5f623be1b8\"" May 17 00:42:21.875440 env[1192]: time="2025-05-17T00:42:21.875391820Z" level=info msg="StartContainer for \"94fcfbba743e1b37760189e087c694721009c5acbda9f3b15cbafc5f623be1b8\"" May 17 00:42:21.904813 systemd[1]: Started cri-containerd-94fcfbba743e1b37760189e087c694721009c5acbda9f3b15cbafc5f623be1b8.scope. 
May 17 00:42:21.936812 kubelet[1882]: E0517 00:42:21.936278 1882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:21.948131 env[1192]: time="2025-05-17T00:42:21.948080760Z" level=info msg="CreateContainer within sandbox \"371d494756ae5b6d44ab8c2ecb6953bdc5c059adc2cc441005d65208bc4d5965\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 17 00:42:21.969077 env[1192]: time="2025-05-17T00:42:21.969009371Z" level=info msg="StartContainer for \"94fcfbba743e1b37760189e087c694721009c5acbda9f3b15cbafc5f623be1b8\" returns successfully" May 17 00:42:21.984118 env[1192]: time="2025-05-17T00:42:21.984055251Z" level=info msg="CreateContainer within sandbox \"371d494756ae5b6d44ab8c2ecb6953bdc5c059adc2cc441005d65208bc4d5965\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8b2b3b8191c00d881a717658c0f8930dc5c0d3781e6e49b31b65be32a307ed20\"" May 17 00:42:21.985184 env[1192]: time="2025-05-17T00:42:21.985140309Z" level=info msg="StartContainer for \"8b2b3b8191c00d881a717658c0f8930dc5c0d3781e6e49b31b65be32a307ed20\"" May 17 00:42:22.009087 systemd[1]: Started cri-containerd-8b2b3b8191c00d881a717658c0f8930dc5c0d3781e6e49b31b65be32a307ed20.scope. May 17 00:42:22.056751 env[1192]: time="2025-05-17T00:42:22.056703268Z" level=info msg="StartContainer for \"8b2b3b8191c00d881a717658c0f8930dc5c0d3781e6e49b31b65be32a307ed20\" returns successfully" May 17 00:42:22.059795 systemd[1]: cri-containerd-8b2b3b8191c00d881a717658c0f8930dc5c0d3781e6e49b31b65be32a307ed20.scope: Deactivated successfully. 
May 17 00:42:22.121540 env[1192]: time="2025-05-17T00:42:22.121411690Z" level=info msg="shim disconnected" id=8b2b3b8191c00d881a717658c0f8930dc5c0d3781e6e49b31b65be32a307ed20
May 17 00:42:22.122257 env[1192]: time="2025-05-17T00:42:22.122225358Z" level=warning msg="cleaning up after shim disconnected" id=8b2b3b8191c00d881a717658c0f8930dc5c0d3781e6e49b31b65be32a307ed20 namespace=k8s.io
May 17 00:42:22.122462 env[1192]: time="2025-05-17T00:42:22.122441944Z" level=info msg="cleaning up dead shim"
May 17 00:42:22.136651 env[1192]: time="2025-05-17T00:42:22.136606396Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:42:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2523 runtime=io.containerd.runc.v2\n"
May 17 00:42:22.969767 kubelet[1882]: E0517 00:42:22.969720 1882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 17 00:42:22.973416 env[1192]: time="2025-05-17T00:42:22.973372799Z" level=info msg="CreateContainer within sandbox \"371d494756ae5b6d44ab8c2ecb6953bdc5c059adc2cc441005d65208bc4d5965\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 17 00:42:22.975954 kubelet[1882]: E0517 00:42:22.975919 1882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 17 00:42:22.995107 env[1192]: time="2025-05-17T00:42:22.995040866Z" level=info msg="CreateContainer within sandbox \"371d494756ae5b6d44ab8c2ecb6953bdc5c059adc2cc441005d65208bc4d5965\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d410b65812faf59e2adb7bcf2320524effac79879c902a49655c9e699863eb19\""
May 17 00:42:22.995820 env[1192]: time="2025-05-17T00:42:22.995770644Z" level=info msg="StartContainer for \"d410b65812faf59e2adb7bcf2320524effac79879c902a49655c9e699863eb19\""
May 17 00:42:23.044078 systemd[1]: Started cri-containerd-d410b65812faf59e2adb7bcf2320524effac79879c902a49655c9e699863eb19.scope.
May 17 00:42:23.146079 env[1192]: time="2025-05-17T00:42:23.146009674Z" level=info msg="StartContainer for \"d410b65812faf59e2adb7bcf2320524effac79879c902a49655c9e699863eb19\" returns successfully"
May 17 00:42:23.287435 kubelet[1882]: I0517 00:42:23.287382 1882 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
May 17 00:42:23.317620 kubelet[1882]: I0517 00:42:23.317519 1882 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-lj6tp" podStartSLOduration=2.428057874 podStartE2EDuration="13.317491681s" podCreationTimestamp="2025-05-17 00:42:10 +0000 UTC" firstStartedPulling="2025-05-17 00:42:10.950463746 +0000 UTC m=+5.402942468" lastFinishedPulling="2025-05-17 00:42:21.839897538 +0000 UTC m=+16.292376275" observedRunningTime="2025-05-17 00:42:23.08613874 +0000 UTC m=+17.538617484" watchObservedRunningTime="2025-05-17 00:42:23.317491681 +0000 UTC m=+17.769970427"
May 17 00:42:23.326456 systemd[1]: Created slice kubepods-burstable-pod76d1f14d_80be_4e37_81b5_aa92161933b0.slice.
May 17 00:42:23.339150 systemd[1]: Created slice kubepods-burstable-podd9249894_1203_4bc1_9c2f_bf4fec7a2b2d.slice.
May 17 00:42:23.424382 kubelet[1882]: I0517 00:42:23.424312 1882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76d1f14d-80be-4e37-81b5-aa92161933b0-config-volume\") pod \"coredns-668d6bf9bc-pg6bd\" (UID: \"76d1f14d-80be-4e37-81b5-aa92161933b0\") " pod="kube-system/coredns-668d6bf9bc-pg6bd"
May 17 00:42:23.424382 kubelet[1882]: I0517 00:42:23.424364 1882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9249894-1203-4bc1-9c2f-bf4fec7a2b2d-config-volume\") pod \"coredns-668d6bf9bc-spr87\" (UID: \"d9249894-1203-4bc1-9c2f-bf4fec7a2b2d\") " pod="kube-system/coredns-668d6bf9bc-spr87"
May 17 00:42:23.424597 kubelet[1882]: I0517 00:42:23.424395 1882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqsr7\" (UniqueName: \"kubernetes.io/projected/d9249894-1203-4bc1-9c2f-bf4fec7a2b2d-kube-api-access-lqsr7\") pod \"coredns-668d6bf9bc-spr87\" (UID: \"d9249894-1203-4bc1-9c2f-bf4fec7a2b2d\") " pod="kube-system/coredns-668d6bf9bc-spr87"
May 17 00:42:23.424597 kubelet[1882]: I0517 00:42:23.424412 1882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5l5p\" (UniqueName: \"kubernetes.io/projected/76d1f14d-80be-4e37-81b5-aa92161933b0-kube-api-access-d5l5p\") pod \"coredns-668d6bf9bc-pg6bd\" (UID: \"76d1f14d-80be-4e37-81b5-aa92161933b0\") " pod="kube-system/coredns-668d6bf9bc-pg6bd"
May 17 00:42:23.630857 kubelet[1882]: E0517 00:42:23.630031 1882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 17 00:42:23.631094 env[1192]: time="2025-05-17T00:42:23.631053454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pg6bd,Uid:76d1f14d-80be-4e37-81b5-aa92161933b0,Namespace:kube-system,Attempt:0,}"
May 17 00:42:23.643203 kubelet[1882]: E0517 00:42:23.643155 1882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 17 00:42:23.657210 env[1192]: time="2025-05-17T00:42:23.657164484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-spr87,Uid:d9249894-1203-4bc1-9c2f-bf4fec7a2b2d,Namespace:kube-system,Attempt:0,}"
May 17 00:42:23.726749 systemd[1]: run-containerd-runc-k8s.io-d410b65812faf59e2adb7bcf2320524effac79879c902a49655c9e699863eb19-runc.VGshgZ.mount: Deactivated successfully.
May 17 00:42:23.979197 kubelet[1882]: E0517 00:42:23.978705 1882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 17 00:42:23.979967 kubelet[1882]: E0517 00:42:23.979936 1882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 17 00:42:24.012022 kubelet[1882]: I0517 00:42:24.011954 1882 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6cqzn" podStartSLOduration=4.93584137 podStartE2EDuration="14.011934354s" podCreationTimestamp="2025-05-17 00:42:10 +0000 UTC" firstStartedPulling="2025-05-17 00:42:10.596331061 +0000 UTC m=+5.048809794" lastFinishedPulling="2025-05-17 00:42:19.672424057 +0000 UTC m=+14.124902778" observedRunningTime="2025-05-17 00:42:24.008198007 +0000 UTC m=+18.460676752" watchObservedRunningTime="2025-05-17 00:42:24.011934354 +0000 UTC m=+18.464413210"
May 17 00:42:24.981493 kubelet[1882]: E0517 00:42:24.981454 1882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 17 00:42:25.616632 systemd-networkd[1003]: cilium_host: Link UP
May 17 00:42:25.616822 systemd-networkd[1003]: cilium_net: Link UP
May 17 00:42:25.617765 systemd-networkd[1003]: cilium_net: Gained carrier
May 17 00:42:25.618005 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
May 17 00:42:25.618059 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
May 17 00:42:25.618322 systemd-networkd[1003]: cilium_host: Gained carrier
May 17 00:42:25.763219 systemd-networkd[1003]: cilium_vxlan: Link UP
May 17 00:42:25.763233 systemd-networkd[1003]: cilium_vxlan: Gained carrier
May 17 00:42:25.983788 kubelet[1882]: E0517 00:42:25.983652 1882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 17 00:42:26.265911 kernel: NET: Registered PF_ALG protocol family
May 17 00:42:26.314325 systemd-networkd[1003]: cilium_net: Gained IPv6LL
May 17 00:42:26.314767 systemd-networkd[1003]: cilium_host: Gained IPv6LL
May 17 00:42:26.954059 systemd-networkd[1003]: cilium_vxlan: Gained IPv6LL
May 17 00:42:27.227113 systemd-networkd[1003]: lxc_health: Link UP
May 17 00:42:27.233903 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
May 17 00:42:27.233920 systemd-networkd[1003]: lxc_health: Gained carrier
May 17 00:42:27.689833 systemd-networkd[1003]: lxcf3658e59230f: Link UP
May 17 00:42:27.693925 kernel: eth0: renamed from tmp34664
May 17 00:42:27.701293 systemd-networkd[1003]: lxcf3658e59230f: Gained carrier
May 17 00:42:27.701896 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcf3658e59230f: link becomes ready
May 17 00:42:27.745949 systemd-networkd[1003]: lxc23ffb52b0852: Link UP
May 17 00:42:27.751905 kernel: eth0: renamed from tmpb4dc8
May 17 00:42:27.755968 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc23ffb52b0852: link becomes ready
May 17 00:42:27.755908 systemd-networkd[1003]: lxc23ffb52b0852: Gained carrier
May 17 00:42:28.433521 kubelet[1882]: E0517 00:42:28.433469 1882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 17 00:42:28.809129 systemd-networkd[1003]: lxc_health: Gained IPv6LL
May 17 00:42:29.129134 systemd-networkd[1003]: lxc23ffb52b0852: Gained IPv6LL
May 17 00:42:29.705125 systemd-networkd[1003]: lxcf3658e59230f: Gained IPv6LL
May 17 00:42:30.848852 kubelet[1882]: I0517 00:42:30.848802 1882 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 17 00:42:30.849587 kubelet[1882]: E0517 00:42:30.849551 1882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 17 00:42:30.993723 kubelet[1882]: E0517 00:42:30.993679 1882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 17 00:42:32.965360 env[1192]: time="2025-05-17T00:42:32.965244750Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:42:32.965360 env[1192]: time="2025-05-17T00:42:32.965296281Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:42:32.965360 env[1192]: time="2025-05-17T00:42:32.965306392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:42:32.966795 env[1192]: time="2025-05-17T00:42:32.966704151Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/346643b3a996a9f9f0e4c63231e07aa470a881d9274ad9f598ae606a919aa787 pid=3066 runtime=io.containerd.runc.v2
May 17 00:42:33.017313 systemd[1]: run-containerd-runc-k8s.io-346643b3a996a9f9f0e4c63231e07aa470a881d9274ad9f598ae606a919aa787-runc.Xl7xjb.mount: Deactivated successfully.
May 17 00:42:33.026067 systemd[1]: Started cri-containerd-346643b3a996a9f9f0e4c63231e07aa470a881d9274ad9f598ae606a919aa787.scope.
May 17 00:42:33.048911 env[1192]: time="2025-05-17T00:42:33.047243944Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:42:33.048911 env[1192]: time="2025-05-17T00:42:33.047391060Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:42:33.048911 env[1192]: time="2025-05-17T00:42:33.047431080Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:42:33.049483 env[1192]: time="2025-05-17T00:42:33.049357524Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b4dc802866329c95c37d088ed35e3db2effbbcc62cef331b2dbcadb77857e5be pid=3102 runtime=io.containerd.runc.v2
May 17 00:42:33.084487 systemd[1]: Started cri-containerd-b4dc802866329c95c37d088ed35e3db2effbbcc62cef331b2dbcadb77857e5be.scope.
May 17 00:42:33.119128 env[1192]: time="2025-05-17T00:42:33.119061393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pg6bd,Uid:76d1f14d-80be-4e37-81b5-aa92161933b0,Namespace:kube-system,Attempt:0,} returns sandbox id \"346643b3a996a9f9f0e4c63231e07aa470a881d9274ad9f598ae606a919aa787\""
May 17 00:42:33.120421 kubelet[1882]: E0517 00:42:33.120391 1882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 17 00:42:33.123188 env[1192]: time="2025-05-17T00:42:33.123146510Z" level=info msg="CreateContainer within sandbox \"346643b3a996a9f9f0e4c63231e07aa470a881d9274ad9f598ae606a919aa787\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 17 00:42:33.139593 env[1192]: time="2025-05-17T00:42:33.139529663Z" level=info msg="CreateContainer within sandbox \"346643b3a996a9f9f0e4c63231e07aa470a881d9274ad9f598ae606a919aa787\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f3bd7a9b43063ed06da0944fc8f7b8c6453086ca6a77881523cfe715397855f7\""
May 17 00:42:33.140349 env[1192]: time="2025-05-17T00:42:33.140308685Z" level=info msg="StartContainer for \"f3bd7a9b43063ed06da0944fc8f7b8c6453086ca6a77881523cfe715397855f7\""
May 17 00:42:33.162900 systemd[1]: Started cri-containerd-f3bd7a9b43063ed06da0944fc8f7b8c6453086ca6a77881523cfe715397855f7.scope.
May 17 00:42:33.203668 env[1192]: time="2025-05-17T00:42:33.203606624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-spr87,Uid:d9249894-1203-4bc1-9c2f-bf4fec7a2b2d,Namespace:kube-system,Attempt:0,} returns sandbox id \"b4dc802866329c95c37d088ed35e3db2effbbcc62cef331b2dbcadb77857e5be\""
May 17 00:42:33.204744 kubelet[1882]: E0517 00:42:33.204664 1882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 17 00:42:33.208265 env[1192]: time="2025-05-17T00:42:33.208199146Z" level=info msg="CreateContainer within sandbox \"b4dc802866329c95c37d088ed35e3db2effbbcc62cef331b2dbcadb77857e5be\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 17 00:42:33.230769 env[1192]: time="2025-05-17T00:42:33.230666724Z" level=info msg="CreateContainer within sandbox \"b4dc802866329c95c37d088ed35e3db2effbbcc62cef331b2dbcadb77857e5be\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c307de3b86067084d4a20a86437f958ce73d7aa2aeca8f29c6aaa23dd1f643f3\""
May 17 00:42:33.233561 env[1192]: time="2025-05-17T00:42:33.233509536Z" level=info msg="StartContainer for \"c307de3b86067084d4a20a86437f958ce73d7aa2aeca8f29c6aaa23dd1f643f3\""
May 17 00:42:33.247133 env[1192]: time="2025-05-17T00:42:33.246856531Z" level=info msg="StartContainer for \"f3bd7a9b43063ed06da0944fc8f7b8c6453086ca6a77881523cfe715397855f7\" returns successfully"
May 17 00:42:33.266128 systemd[1]: Started cri-containerd-c307de3b86067084d4a20a86437f958ce73d7aa2aeca8f29c6aaa23dd1f643f3.scope.
May 17 00:42:33.318747 env[1192]: time="2025-05-17T00:42:33.318698865Z" level=info msg="StartContainer for \"c307de3b86067084d4a20a86437f958ce73d7aa2aeca8f29c6aaa23dd1f643f3\" returns successfully"
May 17 00:42:34.002293 kubelet[1882]: E0517 00:42:34.002259 1882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 17 00:42:34.006737 kubelet[1882]: E0517 00:42:34.006657 1882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 17 00:42:34.022681 kubelet[1882]: I0517 00:42:34.022612 1882 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-spr87" podStartSLOduration=24.022592121 podStartE2EDuration="24.022592121s" podCreationTimestamp="2025-05-17 00:42:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:42:34.020279045 +0000 UTC m=+28.472757787" watchObservedRunningTime="2025-05-17 00:42:34.022592121 +0000 UTC m=+28.475070864"
May 17 00:42:34.062176 kubelet[1882]: I0517 00:42:34.062102 1882 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-pg6bd" podStartSLOduration=24.06208065 podStartE2EDuration="24.06208065s" podCreationTimestamp="2025-05-17 00:42:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:42:34.035798753 +0000 UTC m=+28.488277494" watchObservedRunningTime="2025-05-17 00:42:34.06208065 +0000 UTC m=+28.514559398"
May 17 00:42:35.008102 kubelet[1882]: E0517 00:42:35.008068 1882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 17 00:42:35.009482 kubelet[1882]: E0517 00:42:35.009457 1882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 17 00:42:36.010375 kubelet[1882]: E0517 00:42:36.010312 1882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 17 00:42:36.010375 kubelet[1882]: E0517 00:42:36.010311 1882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 17 00:42:50.159747 systemd[1]: Started sshd@5-64.23.148.252:22-147.75.109.163:56326.service.
May 17 00:42:50.240417 sshd[3225]: Accepted publickey for core from 147.75.109.163 port 56326 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:42:50.242612 sshd[3225]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:42:50.249934 systemd-logind[1181]: New session 6 of user core.
May 17 00:42:50.250073 systemd[1]: Started session-6.scope.
May 17 00:42:50.463703 sshd[3225]: pam_unix(sshd:session): session closed for user core
May 17 00:42:50.468032 systemd[1]: sshd@5-64.23.148.252:22-147.75.109.163:56326.service: Deactivated successfully.
May 17 00:42:50.469183 systemd[1]: session-6.scope: Deactivated successfully.
May 17 00:42:50.471062 systemd-logind[1181]: Session 6 logged out. Waiting for processes to exit.
May 17 00:42:50.473492 systemd-logind[1181]: Removed session 6.
May 17 00:42:55.472070 systemd[1]: Started sshd@6-64.23.148.252:22-147.75.109.163:56328.service.
May 17 00:42:55.527816 sshd[3238]: Accepted publickey for core from 147.75.109.163 port 56328 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:42:55.530333 sshd[3238]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:42:55.535810 systemd-logind[1181]: New session 7 of user core.
May 17 00:42:55.536570 systemd[1]: Started session-7.scope.
May 17 00:42:55.680135 sshd[3238]: pam_unix(sshd:session): session closed for user core
May 17 00:42:55.683799 systemd-logind[1181]: Session 7 logged out. Waiting for processes to exit.
May 17 00:42:55.684190 systemd[1]: sshd@6-64.23.148.252:22-147.75.109.163:56328.service: Deactivated successfully.
May 17 00:42:55.685299 systemd[1]: session-7.scope: Deactivated successfully.
May 17 00:42:55.686427 systemd-logind[1181]: Removed session 7.
May 17 00:43:00.688231 systemd[1]: Started sshd@7-64.23.148.252:22-147.75.109.163:41866.service.
May 17 00:43:00.749580 sshd[3251]: Accepted publickey for core from 147.75.109.163 port 41866 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:43:00.752307 sshd[3251]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:43:00.758868 systemd-logind[1181]: New session 8 of user core.
May 17 00:43:00.759088 systemd[1]: Started session-8.scope.
May 17 00:43:00.938450 sshd[3251]: pam_unix(sshd:session): session closed for user core
May 17 00:43:00.943045 systemd-logind[1181]: Session 8 logged out. Waiting for processes to exit.
May 17 00:43:00.943191 systemd[1]: sshd@7-64.23.148.252:22-147.75.109.163:41866.service: Deactivated successfully.
May 17 00:43:00.944086 systemd[1]: session-8.scope: Deactivated successfully.
May 17 00:43:00.945491 systemd-logind[1181]: Removed session 8.
May 17 00:43:05.945050 systemd[1]: Started sshd@8-64.23.148.252:22-147.75.109.163:41872.service.
May 17 00:43:06.011729 sshd[3268]: Accepted publickey for core from 147.75.109.163 port 41872 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:43:06.014515 sshd[3268]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:43:06.020584 systemd-logind[1181]: New session 9 of user core.
May 17 00:43:06.021490 systemd[1]: Started session-9.scope.
May 17 00:43:06.197628 sshd[3268]: pam_unix(sshd:session): session closed for user core
May 17 00:43:06.203748 systemd[1]: sshd@8-64.23.148.252:22-147.75.109.163:41872.service: Deactivated successfully.
May 17 00:43:06.204632 systemd[1]: session-9.scope: Deactivated successfully.
May 17 00:43:06.205441 systemd-logind[1181]: Session 9 logged out. Waiting for processes to exit.
May 17 00:43:06.206639 systemd-logind[1181]: Removed session 9.
May 17 00:43:11.203258 systemd[1]: Started sshd@9-64.23.148.252:22-147.75.109.163:60154.service.
May 17 00:43:11.257088 sshd[3284]: Accepted publickey for core from 147.75.109.163 port 60154 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:43:11.259624 sshd[3284]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:43:11.265216 systemd[1]: Started session-10.scope.
May 17 00:43:11.266006 systemd-logind[1181]: New session 10 of user core.
May 17 00:43:11.407188 sshd[3284]: pam_unix(sshd:session): session closed for user core
May 17 00:43:11.412933 systemd[1]: sshd@9-64.23.148.252:22-147.75.109.163:60154.service: Deactivated successfully.
May 17 00:43:11.414143 systemd[1]: session-10.scope: Deactivated successfully.
May 17 00:43:11.415214 systemd-logind[1181]: Session 10 logged out. Waiting for processes to exit.
May 17 00:43:11.417718 systemd[1]: Started sshd@10-64.23.148.252:22-147.75.109.163:60158.service.
May 17 00:43:11.421637 systemd-logind[1181]: Removed session 10.
May 17 00:43:11.474932 sshd[3297]: Accepted publickey for core from 147.75.109.163 port 60158 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:43:11.476205 sshd[3297]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:43:11.482925 systemd-logind[1181]: New session 11 of user core.
May 17 00:43:11.483757 systemd[1]: Started session-11.scope.
May 17 00:43:11.709964 systemd[1]: Started sshd@11-64.23.148.252:22-147.75.109.163:60160.service.
May 17 00:43:11.713965 sshd[3297]: pam_unix(sshd:session): session closed for user core
May 17 00:43:11.724523 systemd[1]: sshd@10-64.23.148.252:22-147.75.109.163:60158.service: Deactivated successfully.
May 17 00:43:11.725473 systemd[1]: session-11.scope: Deactivated successfully.
May 17 00:43:11.727042 systemd-logind[1181]: Session 11 logged out. Waiting for processes to exit.
May 17 00:43:11.729000 systemd-logind[1181]: Removed session 11.
May 17 00:43:11.776391 sshd[3305]: Accepted publickey for core from 147.75.109.163 port 60160 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:43:11.778745 sshd[3305]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:43:11.784003 systemd-logind[1181]: New session 12 of user core.
May 17 00:43:11.785061 systemd[1]: Started session-12.scope.
May 17 00:43:11.970925 sshd[3305]: pam_unix(sshd:session): session closed for user core
May 17 00:43:11.975141 systemd[1]: sshd@11-64.23.148.252:22-147.75.109.163:60160.service: Deactivated successfully.
May 17 00:43:11.976283 systemd[1]: session-12.scope: Deactivated successfully.
May 17 00:43:11.977338 systemd-logind[1181]: Session 12 logged out. Waiting for processes to exit.
May 17 00:43:11.978724 systemd-logind[1181]: Removed session 12.
May 17 00:43:16.980235 systemd[1]: Started sshd@12-64.23.148.252:22-147.75.109.163:60164.service.
May 17 00:43:17.033656 sshd[3318]: Accepted publickey for core from 147.75.109.163 port 60164 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:43:17.036019 sshd[3318]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:43:17.041549 systemd-logind[1181]: New session 13 of user core.
May 17 00:43:17.042679 systemd[1]: Started session-13.scope.
May 17 00:43:17.184483 sshd[3318]: pam_unix(sshd:session): session closed for user core
May 17 00:43:17.188584 systemd[1]: sshd@12-64.23.148.252:22-147.75.109.163:60164.service: Deactivated successfully.
May 17 00:43:17.189029 systemd-logind[1181]: Session 13 logged out. Waiting for processes to exit.
May 17 00:43:17.189469 systemd[1]: session-13.scope: Deactivated successfully.
May 17 00:43:17.191137 systemd-logind[1181]: Removed session 13.
May 17 00:43:19.798567 kubelet[1882]: E0517 00:43:19.798479 1882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 17 00:43:22.191763 systemd[1]: Started sshd@13-64.23.148.252:22-147.75.109.163:37438.service.
May 17 00:43:22.247060 sshd[3330]: Accepted publickey for core from 147.75.109.163 port 37438 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:43:22.250284 sshd[3330]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:43:22.255585 systemd-logind[1181]: New session 14 of user core.
May 17 00:43:22.256484 systemd[1]: Started session-14.scope.
May 17 00:43:22.394708 sshd[3330]: pam_unix(sshd:session): session closed for user core
May 17 00:43:22.398388 systemd-logind[1181]: Session 14 logged out. Waiting for processes to exit.
May 17 00:43:22.398941 systemd[1]: sshd@13-64.23.148.252:22-147.75.109.163:37438.service: Deactivated successfully.
May 17 00:43:22.400144 systemd[1]: session-14.scope: Deactivated successfully.
May 17 00:43:22.401520 systemd-logind[1181]: Removed session 14.
May 17 00:43:27.403090 systemd[1]: Started sshd@14-64.23.148.252:22-147.75.109.163:37452.service.
May 17 00:43:27.459645 sshd[3343]: Accepted publickey for core from 147.75.109.163 port 37452 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:43:27.460489 sshd[3343]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:43:27.466932 systemd[1]: Started session-15.scope.
May 17 00:43:27.467386 systemd-logind[1181]: New session 15 of user core.
May 17 00:43:27.608076 sshd[3343]: pam_unix(sshd:session): session closed for user core
May 17 00:43:27.612386 systemd[1]: sshd@14-64.23.148.252:22-147.75.109.163:37452.service: Deactivated successfully.
May 17 00:43:27.613506 systemd[1]: session-15.scope: Deactivated successfully.
May 17 00:43:27.614475 systemd-logind[1181]: Session 15 logged out. Waiting for processes to exit.
May 17 00:43:27.616648 systemd[1]: Started sshd@15-64.23.148.252:22-147.75.109.163:37456.service.
May 17 00:43:27.620572 systemd-logind[1181]: Removed session 15.
May 17 00:43:27.677307 sshd[3355]: Accepted publickey for core from 147.75.109.163 port 37456 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:43:27.678551 sshd[3355]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:43:27.683928 systemd-logind[1181]: New session 16 of user core.
May 17 00:43:27.684825 systemd[1]: Started session-16.scope.
May 17 00:43:28.049585 sshd[3355]: pam_unix(sshd:session): session closed for user core
May 17 00:43:28.057205 systemd[1]: Started sshd@16-64.23.148.252:22-147.75.109.163:35352.service.
May 17 00:43:28.066027 systemd[1]: sshd@15-64.23.148.252:22-147.75.109.163:37456.service: Deactivated successfully.
May 17 00:43:28.067247 systemd[1]: session-16.scope: Deactivated successfully.
May 17 00:43:28.068247 systemd-logind[1181]: Session 16 logged out. Waiting for processes to exit.
May 17 00:43:28.069573 systemd-logind[1181]: Removed session 16.
May 17 00:43:28.120090 sshd[3364]: Accepted publickey for core from 147.75.109.163 port 35352 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:43:28.122052 sshd[3364]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:43:28.128314 systemd-logind[1181]: New session 17 of user core.
May 17 00:43:28.128810 systemd[1]: Started session-17.scope.
May 17 00:43:29.007502 sshd[3364]: pam_unix(sshd:session): session closed for user core
May 17 00:43:29.013312 systemd[1]: sshd@16-64.23.148.252:22-147.75.109.163:35352.service: Deactivated successfully.
May 17 00:43:29.014424 systemd[1]: session-17.scope: Deactivated successfully.
May 17 00:43:29.017363 systemd-logind[1181]: Session 17 logged out. Waiting for processes to exit.
May 17 00:43:29.023024 systemd[1]: Started sshd@17-64.23.148.252:22-147.75.109.163:35354.service.
May 17 00:43:29.027363 systemd-logind[1181]: Removed session 17.
May 17 00:43:29.079467 sshd[3383]: Accepted publickey for core from 147.75.109.163 port 35354 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:43:29.081393 sshd[3383]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:43:29.087929 systemd-logind[1181]: New session 18 of user core.
May 17 00:43:29.089060 systemd[1]: Started session-18.scope.
May 17 00:43:29.430825 sshd[3383]: pam_unix(sshd:session): session closed for user core
May 17 00:43:29.446295 systemd[1]: Started sshd@18-64.23.148.252:22-147.75.109.163:35358.service.
May 17 00:43:29.447318 systemd[1]: sshd@17-64.23.148.252:22-147.75.109.163:35354.service: Deactivated successfully.
May 17 00:43:29.449944 systemd[1]: session-18.scope: Deactivated successfully.
May 17 00:43:29.454319 systemd-logind[1181]: Session 18 logged out. Waiting for processes to exit.
May 17 00:43:29.456774 systemd-logind[1181]: Removed session 18.
May 17 00:43:29.509199 sshd[3394]: Accepted publickey for core from 147.75.109.163 port 35358 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:43:29.511108 sshd[3394]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:43:29.518005 systemd[1]: Started session-19.scope.
May 17 00:43:29.518739 systemd-logind[1181]: New session 19 of user core.
May 17 00:43:29.660157 sshd[3394]: pam_unix(sshd:session): session closed for user core
May 17 00:43:29.663171 systemd[1]: sshd@18-64.23.148.252:22-147.75.109.163:35358.service: Deactivated successfully.
May 17 00:43:29.663960 systemd[1]: session-19.scope: Deactivated successfully.
May 17 00:43:29.665195 systemd-logind[1181]: Session 19 logged out. Waiting for processes to exit.
May 17 00:43:29.666231 systemd-logind[1181]: Removed session 19.
May 17 00:43:29.797789 kubelet[1882]: E0517 00:43:29.797326 1882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 17 00:43:32.797674 kubelet[1882]: E0517 00:43:32.797638 1882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 17 00:43:34.670506 systemd[1]: Started sshd@19-64.23.148.252:22-147.75.109.163:35370.service.
May 17 00:43:34.728061 sshd[3406]: Accepted publickey for core from 147.75.109.163 port 35370 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:43:34.730670 sshd[3406]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:43:34.737623 systemd[1]: Started session-20.scope.
May 17 00:43:34.738202 systemd-logind[1181]: New session 20 of user core.
May 17 00:43:34.885590 sshd[3406]: pam_unix(sshd:session): session closed for user core
May 17 00:43:34.889018 systemd[1]: sshd@19-64.23.148.252:22-147.75.109.163:35370.service: Deactivated successfully.
May 17 00:43:34.889819 systemd[1]: session-20.scope: Deactivated successfully.
May 17 00:43:34.890842 systemd-logind[1181]: Session 20 logged out. Waiting for processes to exit.
May 17 00:43:34.892002 systemd-logind[1181]: Removed session 20.
May 17 00:43:36.797979 kubelet[1882]: E0517 00:43:36.797936 1882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 17 00:43:38.797751 kubelet[1882]: E0517 00:43:38.797701 1882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 17 00:43:39.894056 systemd[1]: Started sshd@20-64.23.148.252:22-147.75.109.163:39126.service.
May 17 00:43:39.950031 sshd[3420]: Accepted publickey for core from 147.75.109.163 port 39126 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:43:39.953174 sshd[3420]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:43:39.961040 systemd-logind[1181]: New session 21 of user core.
May 17 00:43:39.961186 systemd[1]: Started session-21.scope.
May 17 00:43:40.111172 sshd[3420]: pam_unix(sshd:session): session closed for user core
May 17 00:43:40.115792 systemd-logind[1181]: Session 21 logged out. Waiting for processes to exit.
May 17 00:43:40.116530 systemd[1]: sshd@20-64.23.148.252:22-147.75.109.163:39126.service: Deactivated successfully.
May 17 00:43:40.117756 systemd[1]: session-21.scope: Deactivated successfully.
May 17 00:43:40.119168 systemd-logind[1181]: Removed session 21.
May 17 00:43:40.798027 kubelet[1882]: E0517 00:43:40.797980 1882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:43:40.798439 kubelet[1882]: E0517 00:43:40.798371 1882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:43:43.798744 kubelet[1882]: E0517 00:43:43.798670 1882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:43:45.118997 systemd[1]: Started sshd@21-64.23.148.252:22-147.75.109.163:39134.service. May 17 00:43:45.176464 sshd[3434]: Accepted publickey for core from 147.75.109.163 port 39134 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w May 17 00:43:45.179513 sshd[3434]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:43:45.186698 systemd[1]: Started session-22.scope. May 17 00:43:45.187970 systemd-logind[1181]: New session 22 of user core. May 17 00:43:45.329639 sshd[3434]: pam_unix(sshd:session): session closed for user core May 17 00:43:45.337373 systemd[1]: sshd@21-64.23.148.252:22-147.75.109.163:39134.service: Deactivated successfully. May 17 00:43:45.338669 systemd[1]: session-22.scope: Deactivated successfully. May 17 00:43:45.339718 systemd-logind[1181]: Session 22 logged out. Waiting for processes to exit. May 17 00:43:45.340956 systemd-logind[1181]: Removed session 22. May 17 00:43:50.336716 systemd[1]: Started sshd@22-64.23.148.252:22-147.75.109.163:49240.service. 
May 17 00:43:50.389855 sshd[3446]: Accepted publickey for core from 147.75.109.163 port 49240 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w May 17 00:43:50.392560 sshd[3446]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:43:50.398681 systemd[1]: Started session-23.scope. May 17 00:43:50.399407 systemd-logind[1181]: New session 23 of user core. May 17 00:43:50.541676 sshd[3446]: pam_unix(sshd:session): session closed for user core May 17 00:43:50.546064 systemd[1]: sshd@22-64.23.148.252:22-147.75.109.163:49240.service: Deactivated successfully. May 17 00:43:50.547075 systemd[1]: session-23.scope: Deactivated successfully. May 17 00:43:50.548517 systemd-logind[1181]: Session 23 logged out. Waiting for processes to exit. May 17 00:43:50.549910 systemd-logind[1181]: Removed session 23. May 17 00:43:55.552435 systemd[1]: Started sshd@23-64.23.148.252:22-147.75.109.163:49242.service. May 17 00:43:55.604979 sshd[3458]: Accepted publickey for core from 147.75.109.163 port 49242 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w May 17 00:43:55.607184 sshd[3458]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:43:55.612731 systemd[1]: Started session-24.scope. May 17 00:43:55.613818 systemd-logind[1181]: New session 24 of user core. May 17 00:43:55.764065 sshd[3458]: pam_unix(sshd:session): session closed for user core May 17 00:43:55.767962 systemd[1]: sshd@23-64.23.148.252:22-147.75.109.163:49242.service: Deactivated successfully. May 17 00:43:55.769262 systemd[1]: session-24.scope: Deactivated successfully. May 17 00:43:55.770653 systemd-logind[1181]: Session 24 logged out. Waiting for processes to exit. May 17 00:43:55.772686 systemd-logind[1181]: Removed session 24. May 17 00:44:00.773291 systemd[1]: Started sshd@24-64.23.148.252:22-147.75.109.163:34586.service. 
May 17 00:44:00.834615 sshd[3471]: Accepted publickey for core from 147.75.109.163 port 34586 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w May 17 00:44:00.837144 sshd[3471]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:44:00.845663 systemd[1]: Started session-25.scope. May 17 00:44:00.847011 systemd-logind[1181]: New session 25 of user core. May 17 00:44:00.998595 sshd[3471]: pam_unix(sshd:session): session closed for user core May 17 00:44:01.009494 systemd[1]: Started sshd@25-64.23.148.252:22-147.75.109.163:34596.service. May 17 00:44:01.010725 systemd[1]: sshd@24-64.23.148.252:22-147.75.109.163:34586.service: Deactivated successfully. May 17 00:44:01.012702 systemd[1]: session-25.scope: Deactivated successfully. May 17 00:44:01.018620 systemd-logind[1181]: Session 25 logged out. Waiting for processes to exit. May 17 00:44:01.020632 systemd-logind[1181]: Removed session 25. May 17 00:44:01.080325 sshd[3482]: Accepted publickey for core from 147.75.109.163 port 34596 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w May 17 00:44:01.083189 sshd[3482]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:44:01.092069 systemd[1]: Started session-26.scope. May 17 00:44:01.093298 systemd-logind[1181]: New session 26 of user core. May 17 00:44:02.796876 systemd[1]: run-containerd-runc-k8s.io-d410b65812faf59e2adb7bcf2320524effac79879c902a49655c9e699863eb19-runc.vOEMcU.mount: Deactivated successfully. 
May 17 00:44:02.847128 env[1192]: time="2025-05-17T00:44:02.847077125Z" level=info msg="StopContainer for \"94fcfbba743e1b37760189e087c694721009c5acbda9f3b15cbafc5f623be1b8\" with timeout 30 (s)" May 17 00:44:02.848157 env[1192]: time="2025-05-17T00:44:02.848121878Z" level=info msg="Stop container \"94fcfbba743e1b37760189e087c694721009c5acbda9f3b15cbafc5f623be1b8\" with signal terminated" May 17 00:44:02.862287 systemd[1]: cri-containerd-94fcfbba743e1b37760189e087c694721009c5acbda9f3b15cbafc5f623be1b8.scope: Deactivated successfully. May 17 00:44:02.864050 env[1192]: time="2025-05-17T00:44:02.863980673Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:44:02.875449 env[1192]: time="2025-05-17T00:44:02.875411136Z" level=info msg="StopContainer for \"d410b65812faf59e2adb7bcf2320524effac79879c902a49655c9e699863eb19\" with timeout 2 (s)" May 17 00:44:02.876103 env[1192]: time="2025-05-17T00:44:02.876002592Z" level=info msg="Stop container \"d410b65812faf59e2adb7bcf2320524effac79879c902a49655c9e699863eb19\" with signal terminated" May 17 00:44:02.893983 systemd-networkd[1003]: lxc_health: Link DOWN May 17 00:44:02.893994 systemd-networkd[1003]: lxc_health: Lost carrier May 17 00:44:02.930147 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-94fcfbba743e1b37760189e087c694721009c5acbda9f3b15cbafc5f623be1b8-rootfs.mount: Deactivated successfully. May 17 00:44:02.933229 systemd[1]: cri-containerd-d410b65812faf59e2adb7bcf2320524effac79879c902a49655c9e699863eb19.scope: Deactivated successfully. May 17 00:44:02.933566 systemd[1]: cri-containerd-d410b65812faf59e2adb7bcf2320524effac79879c902a49655c9e699863eb19.scope: Consumed 9.114s CPU time. 
May 17 00:44:02.944097 env[1192]: time="2025-05-17T00:44:02.944030372Z" level=info msg="shim disconnected" id=94fcfbba743e1b37760189e087c694721009c5acbda9f3b15cbafc5f623be1b8 May 17 00:44:02.944097 env[1192]: time="2025-05-17T00:44:02.944089972Z" level=warning msg="cleaning up after shim disconnected" id=94fcfbba743e1b37760189e087c694721009c5acbda9f3b15cbafc5f623be1b8 namespace=k8s.io May 17 00:44:02.944097 env[1192]: time="2025-05-17T00:44:02.944103577Z" level=info msg="cleaning up dead shim" May 17 00:44:02.968579 env[1192]: time="2025-05-17T00:44:02.968481028Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:44:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3536 runtime=io.containerd.runc.v2\n" May 17 00:44:02.972700 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d410b65812faf59e2adb7bcf2320524effac79879c902a49655c9e699863eb19-rootfs.mount: Deactivated successfully. May 17 00:44:02.976048 env[1192]: time="2025-05-17T00:44:02.975982406Z" level=info msg="StopContainer for \"94fcfbba743e1b37760189e087c694721009c5acbda9f3b15cbafc5f623be1b8\" returns successfully" May 17 00:44:02.981021 env[1192]: time="2025-05-17T00:44:02.980966788Z" level=info msg="StopPodSandbox for \"f9c5c7f47b350a7546de7f8cef7705e8cac5fb634737a414088b0c2887c7d30e\"" May 17 00:44:02.981541 env[1192]: time="2025-05-17T00:44:02.981500051Z" level=info msg="Container to stop \"94fcfbba743e1b37760189e087c694721009c5acbda9f3b15cbafc5f623be1b8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:44:02.985510 env[1192]: time="2025-05-17T00:44:02.985450529Z" level=info msg="shim disconnected" id=d410b65812faf59e2adb7bcf2320524effac79879c902a49655c9e699863eb19 May 17 00:44:02.985510 env[1192]: time="2025-05-17T00:44:02.985508321Z" level=warning msg="cleaning up after shim disconnected" id=d410b65812faf59e2adb7bcf2320524effac79879c902a49655c9e699863eb19 namespace=k8s.io May 17 00:44:02.985714 env[1192]: 
time="2025-05-17T00:44:02.985522252Z" level=info msg="cleaning up dead shim" May 17 00:44:02.999083 env[1192]: time="2025-05-17T00:44:02.999033027Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:44:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3561 runtime=io.containerd.runc.v2\n" May 17 00:44:03.000635 systemd[1]: cri-containerd-f9c5c7f47b350a7546de7f8cef7705e8cac5fb634737a414088b0c2887c7d30e.scope: Deactivated successfully. May 17 00:44:03.001666 env[1192]: time="2025-05-17T00:44:03.001609993Z" level=info msg="StopContainer for \"d410b65812faf59e2adb7bcf2320524effac79879c902a49655c9e699863eb19\" returns successfully" May 17 00:44:03.002766 env[1192]: time="2025-05-17T00:44:03.002719223Z" level=info msg="StopPodSandbox for \"371d494756ae5b6d44ab8c2ecb6953bdc5c059adc2cc441005d65208bc4d5965\"" May 17 00:44:03.003114 env[1192]: time="2025-05-17T00:44:03.003076592Z" level=info msg="Container to stop \"d410b65812faf59e2adb7bcf2320524effac79879c902a49655c9e699863eb19\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:44:03.003229 env[1192]: time="2025-05-17T00:44:03.003211524Z" level=info msg="Container to stop \"d5b9bec9a44f3d3cc71674acc85a5262292188a4e787bf40813083a9f53dd2d5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:44:03.003387 env[1192]: time="2025-05-17T00:44:03.003342788Z" level=info msg="Container to stop \"65dcf16705435ef03b5fd4cfc274df01f66944a4b1c7f5f4886997433cb28265\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:44:03.003551 env[1192]: time="2025-05-17T00:44:03.003530003Z" level=info msg="Container to stop \"bdb1b5d191c1a04c3161f152a98084a309b8353effaf530c6a8d6b52bc6ed6af\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:44:03.003718 env[1192]: time="2025-05-17T00:44:03.003699006Z" level=info msg="Container to stop 
\"8b2b3b8191c00d881a717658c0f8930dc5c0d3781e6e49b31b65be32a307ed20\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:44:03.015478 systemd[1]: cri-containerd-371d494756ae5b6d44ab8c2ecb6953bdc5c059adc2cc441005d65208bc4d5965.scope: Deactivated successfully. May 17 00:44:03.045141 env[1192]: time="2025-05-17T00:44:03.045079368Z" level=info msg="shim disconnected" id=f9c5c7f47b350a7546de7f8cef7705e8cac5fb634737a414088b0c2887c7d30e May 17 00:44:03.045141 env[1192]: time="2025-05-17T00:44:03.045139781Z" level=warning msg="cleaning up after shim disconnected" id=f9c5c7f47b350a7546de7f8cef7705e8cac5fb634737a414088b0c2887c7d30e namespace=k8s.io May 17 00:44:03.045141 env[1192]: time="2025-05-17T00:44:03.045152959Z" level=info msg="cleaning up dead shim" May 17 00:44:03.057824 env[1192]: time="2025-05-17T00:44:03.056296319Z" level=info msg="shim disconnected" id=371d494756ae5b6d44ab8c2ecb6953bdc5c059adc2cc441005d65208bc4d5965 May 17 00:44:03.058969 env[1192]: time="2025-05-17T00:44:03.058919545Z" level=warning msg="cleaning up after shim disconnected" id=371d494756ae5b6d44ab8c2ecb6953bdc5c059adc2cc441005d65208bc4d5965 namespace=k8s.io May 17 00:44:03.059158 env[1192]: time="2025-05-17T00:44:03.059138191Z" level=info msg="cleaning up dead shim" May 17 00:44:03.073327 env[1192]: time="2025-05-17T00:44:03.073264687Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:44:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3610 runtime=io.containerd.runc.v2\n" May 17 00:44:03.073778 env[1192]: time="2025-05-17T00:44:03.073709024Z" level=info msg="TearDown network for sandbox \"f9c5c7f47b350a7546de7f8cef7705e8cac5fb634737a414088b0c2887c7d30e\" successfully" May 17 00:44:03.073831 env[1192]: time="2025-05-17T00:44:03.073775260Z" level=info msg="StopPodSandbox for \"f9c5c7f47b350a7546de7f8cef7705e8cac5fb634737a414088b0c2887c7d30e\" returns successfully" May 17 00:44:03.078702 env[1192]: time="2025-05-17T00:44:03.078478128Z" 
level=warning msg="cleanup warnings time=\"2025-05-17T00:44:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3617 runtime=io.containerd.runc.v2\n" May 17 00:44:03.080359 env[1192]: time="2025-05-17T00:44:03.080283439Z" level=info msg="TearDown network for sandbox \"371d494756ae5b6d44ab8c2ecb6953bdc5c059adc2cc441005d65208bc4d5965\" successfully" May 17 00:44:03.080591 env[1192]: time="2025-05-17T00:44:03.080555197Z" level=info msg="StopPodSandbox for \"371d494756ae5b6d44ab8c2ecb6953bdc5c059adc2cc441005d65208bc4d5965\" returns successfully" May 17 00:44:03.216716 kubelet[1882]: I0517 00:44:03.216538 1882 scope.go:117] "RemoveContainer" containerID="d410b65812faf59e2adb7bcf2320524effac79879c902a49655c9e699863eb19" May 17 00:44:03.219912 env[1192]: time="2025-05-17T00:44:03.219727799Z" level=info msg="RemoveContainer for \"d410b65812faf59e2adb7bcf2320524effac79879c902a49655c9e699863eb19\"" May 17 00:44:03.224294 env[1192]: time="2025-05-17T00:44:03.224233923Z" level=info msg="RemoveContainer for \"d410b65812faf59e2adb7bcf2320524effac79879c902a49655c9e699863eb19\" returns successfully" May 17 00:44:03.224988 kubelet[1882]: I0517 00:44:03.224780 1882 scope.go:117] "RemoveContainer" containerID="8b2b3b8191c00d881a717658c0f8930dc5c0d3781e6e49b31b65be32a307ed20" May 17 00:44:03.226533 env[1192]: time="2025-05-17T00:44:03.226488161Z" level=info msg="RemoveContainer for \"8b2b3b8191c00d881a717658c0f8930dc5c0d3781e6e49b31b65be32a307ed20\"" May 17 00:44:03.229675 env[1192]: time="2025-05-17T00:44:03.229626231Z" level=info msg="RemoveContainer for \"8b2b3b8191c00d881a717658c0f8930dc5c0d3781e6e49b31b65be32a307ed20\" returns successfully" May 17 00:44:03.230242 kubelet[1882]: I0517 00:44:03.230211 1882 scope.go:117] "RemoveContainer" containerID="bdb1b5d191c1a04c3161f152a98084a309b8353effaf530c6a8d6b52bc6ed6af" May 17 00:44:03.231956 env[1192]: time="2025-05-17T00:44:03.231897020Z" level=info msg="RemoveContainer for 
\"bdb1b5d191c1a04c3161f152a98084a309b8353effaf530c6a8d6b52bc6ed6af\"" May 17 00:44:03.234743 env[1192]: time="2025-05-17T00:44:03.234670845Z" level=info msg="RemoveContainer for \"bdb1b5d191c1a04c3161f152a98084a309b8353effaf530c6a8d6b52bc6ed6af\" returns successfully" May 17 00:44:03.235133 kubelet[1882]: I0517 00:44:03.235091 1882 scope.go:117] "RemoveContainer" containerID="65dcf16705435ef03b5fd4cfc274df01f66944a4b1c7f5f4886997433cb28265" May 17 00:44:03.237115 env[1192]: time="2025-05-17T00:44:03.237076060Z" level=info msg="RemoveContainer for \"65dcf16705435ef03b5fd4cfc274df01f66944a4b1c7f5f4886997433cb28265\"" May 17 00:44:03.239852 env[1192]: time="2025-05-17T00:44:03.239804653Z" level=info msg="RemoveContainer for \"65dcf16705435ef03b5fd4cfc274df01f66944a4b1c7f5f4886997433cb28265\" returns successfully" May 17 00:44:03.240340 kubelet[1882]: I0517 00:44:03.240233 1882 scope.go:117] "RemoveContainer" containerID="d5b9bec9a44f3d3cc71674acc85a5262292188a4e787bf40813083a9f53dd2d5" May 17 00:44:03.241285 kubelet[1882]: I0517 00:44:03.240671 1882 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99qzr\" (UniqueName: \"kubernetes.io/projected/0b4da2ee-22cf-4708-9d35-28a92069bcc3-kube-api-access-99qzr\") pod \"0b4da2ee-22cf-4708-9d35-28a92069bcc3\" (UID: \"0b4da2ee-22cf-4708-9d35-28a92069bcc3\") " May 17 00:44:03.241285 kubelet[1882]: I0517 00:44:03.240745 1882 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0b4da2ee-22cf-4708-9d35-28a92069bcc3-clustermesh-secrets\") pod \"0b4da2ee-22cf-4708-9d35-28a92069bcc3\" (UID: \"0b4da2ee-22cf-4708-9d35-28a92069bcc3\") " May 17 00:44:03.241285 kubelet[1882]: I0517 00:44:03.240803 1882 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0b4da2ee-22cf-4708-9d35-28a92069bcc3-cilium-cgroup\") pod 
\"0b4da2ee-22cf-4708-9d35-28a92069bcc3\" (UID: \"0b4da2ee-22cf-4708-9d35-28a92069bcc3\") " May 17 00:44:03.241285 kubelet[1882]: I0517 00:44:03.240830 1882 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0b4da2ee-22cf-4708-9d35-28a92069bcc3-bpf-maps\") pod \"0b4da2ee-22cf-4708-9d35-28a92069bcc3\" (UID: \"0b4da2ee-22cf-4708-9d35-28a92069bcc3\") " May 17 00:44:03.241285 kubelet[1882]: I0517 00:44:03.240899 1882 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0b4da2ee-22cf-4708-9d35-28a92069bcc3-etc-cni-netd\") pod \"0b4da2ee-22cf-4708-9d35-28a92069bcc3\" (UID: \"0b4da2ee-22cf-4708-9d35-28a92069bcc3\") " May 17 00:44:03.241285 kubelet[1882]: I0517 00:44:03.240957 1882 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b4da2ee-22cf-4708-9d35-28a92069bcc3-lib-modules\") pod \"0b4da2ee-22cf-4708-9d35-28a92069bcc3\" (UID: \"0b4da2ee-22cf-4708-9d35-28a92069bcc3\") " May 17 00:44:03.241528 kubelet[1882]: I0517 00:44:03.240980 1882 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b4da2ee-22cf-4708-9d35-28a92069bcc3-xtables-lock\") pod \"0b4da2ee-22cf-4708-9d35-28a92069bcc3\" (UID: \"0b4da2ee-22cf-4708-9d35-28a92069bcc3\") " May 17 00:44:03.241528 kubelet[1882]: I0517 00:44:03.241023 1882 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0b4da2ee-22cf-4708-9d35-28a92069bcc3-hubble-tls\") pod \"0b4da2ee-22cf-4708-9d35-28a92069bcc3\" (UID: \"0b4da2ee-22cf-4708-9d35-28a92069bcc3\") " May 17 00:44:03.241528 kubelet[1882]: I0517 00:44:03.241050 1882 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/0b4da2ee-22cf-4708-9d35-28a92069bcc3-host-proc-sys-net\") pod \"0b4da2ee-22cf-4708-9d35-28a92069bcc3\" (UID: \"0b4da2ee-22cf-4708-9d35-28a92069bcc3\") " May 17 00:44:03.241528 kubelet[1882]: I0517 00:44:03.241106 1882 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfj7d\" (UniqueName: \"kubernetes.io/projected/bbdd99d3-5b1f-4179-baf7-f8dc6d49b781-kube-api-access-bfj7d\") pod \"bbdd99d3-5b1f-4179-baf7-f8dc6d49b781\" (UID: \"bbdd99d3-5b1f-4179-baf7-f8dc6d49b781\") " May 17 00:44:03.241528 kubelet[1882]: I0517 00:44:03.241135 1882 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0b4da2ee-22cf-4708-9d35-28a92069bcc3-cni-path\") pod \"0b4da2ee-22cf-4708-9d35-28a92069bcc3\" (UID: \"0b4da2ee-22cf-4708-9d35-28a92069bcc3\") " May 17 00:44:03.241528 kubelet[1882]: I0517 00:44:03.241176 1882 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0b4da2ee-22cf-4708-9d35-28a92069bcc3-hostproc\") pod \"0b4da2ee-22cf-4708-9d35-28a92069bcc3\" (UID: \"0b4da2ee-22cf-4708-9d35-28a92069bcc3\") " May 17 00:44:03.241846 kubelet[1882]: I0517 00:44:03.241206 1882 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0b4da2ee-22cf-4708-9d35-28a92069bcc3-cilium-config-path\") pod \"0b4da2ee-22cf-4708-9d35-28a92069bcc3\" (UID: \"0b4da2ee-22cf-4708-9d35-28a92069bcc3\") " May 17 00:44:03.241846 kubelet[1882]: I0517 00:44:03.241244 1882 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0b4da2ee-22cf-4708-9d35-28a92069bcc3-cilium-run\") pod \"0b4da2ee-22cf-4708-9d35-28a92069bcc3\" (UID: \"0b4da2ee-22cf-4708-9d35-28a92069bcc3\") " May 17 00:44:03.241846 kubelet[1882]: I0517 00:44:03.241261 1882 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0b4da2ee-22cf-4708-9d35-28a92069bcc3-host-proc-sys-kernel\") pod \"0b4da2ee-22cf-4708-9d35-28a92069bcc3\" (UID: \"0b4da2ee-22cf-4708-9d35-28a92069bcc3\") " May 17 00:44:03.241846 kubelet[1882]: I0517 00:44:03.241310 1882 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bbdd99d3-5b1f-4179-baf7-f8dc6d49b781-cilium-config-path\") pod \"bbdd99d3-5b1f-4179-baf7-f8dc6d49b781\" (UID: \"bbdd99d3-5b1f-4179-baf7-f8dc6d49b781\") " May 17 00:44:03.248189 kubelet[1882]: I0517 00:44:03.248122 1882 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b4da2ee-22cf-4708-9d35-28a92069bcc3-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0b4da2ee-22cf-4708-9d35-28a92069bcc3" (UID: "0b4da2ee-22cf-4708-9d35-28a92069bcc3"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:44:03.249102 kubelet[1882]: I0517 00:44:03.249068 1882 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b4da2ee-22cf-4708-9d35-28a92069bcc3-cni-path" (OuterVolumeSpecName: "cni-path") pod "0b4da2ee-22cf-4708-9d35-28a92069bcc3" (UID: "0b4da2ee-22cf-4708-9d35-28a92069bcc3"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:44:03.249304 kubelet[1882]: I0517 00:44:03.249203 1882 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b4da2ee-22cf-4708-9d35-28a92069bcc3-hostproc" (OuterVolumeSpecName: "hostproc") pod "0b4da2ee-22cf-4708-9d35-28a92069bcc3" (UID: "0b4da2ee-22cf-4708-9d35-28a92069bcc3"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:44:03.249481 kubelet[1882]: I0517 00:44:03.249464 1882 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b4da2ee-22cf-4708-9d35-28a92069bcc3-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0b4da2ee-22cf-4708-9d35-28a92069bcc3" (UID: "0b4da2ee-22cf-4708-9d35-28a92069bcc3"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:44:03.249578 kubelet[1882]: I0517 00:44:03.249562 1882 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b4da2ee-22cf-4708-9d35-28a92069bcc3-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0b4da2ee-22cf-4708-9d35-28a92069bcc3" (UID: "0b4da2ee-22cf-4708-9d35-28a92069bcc3"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:44:03.249650 kubelet[1882]: I0517 00:44:03.245235 1882 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbdd99d3-5b1f-4179-baf7-f8dc6d49b781-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "bbdd99d3-5b1f-4179-baf7-f8dc6d49b781" (UID: "bbdd99d3-5b1f-4179-baf7-f8dc6d49b781"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 17 00:44:03.252242 kubelet[1882]: I0517 00:44:03.252178 1882 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b4da2ee-22cf-4708-9d35-28a92069bcc3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0b4da2ee-22cf-4708-9d35-28a92069bcc3" (UID: "0b4da2ee-22cf-4708-9d35-28a92069bcc3"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 17 00:44:03.254276 kubelet[1882]: I0517 00:44:03.254228 1882 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b4da2ee-22cf-4708-9d35-28a92069bcc3-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0b4da2ee-22cf-4708-9d35-28a92069bcc3" (UID: "0b4da2ee-22cf-4708-9d35-28a92069bcc3"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:44:03.254511 kubelet[1882]: I0517 00:44:03.254487 1882 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b4da2ee-22cf-4708-9d35-28a92069bcc3-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0b4da2ee-22cf-4708-9d35-28a92069bcc3" (UID: "0b4da2ee-22cf-4708-9d35-28a92069bcc3"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:44:03.254613 kubelet[1882]: I0517 00:44:03.254599 1882 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b4da2ee-22cf-4708-9d35-28a92069bcc3-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0b4da2ee-22cf-4708-9d35-28a92069bcc3" (UID: "0b4da2ee-22cf-4708-9d35-28a92069bcc3"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:44:03.254726 kubelet[1882]: I0517 00:44:03.254709 1882 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b4da2ee-22cf-4708-9d35-28a92069bcc3-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0b4da2ee-22cf-4708-9d35-28a92069bcc3" (UID: "0b4da2ee-22cf-4708-9d35-28a92069bcc3"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:44:03.254830 kubelet[1882]: I0517 00:44:03.254816 1882 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b4da2ee-22cf-4708-9d35-28a92069bcc3-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0b4da2ee-22cf-4708-9d35-28a92069bcc3" (UID: "0b4da2ee-22cf-4708-9d35-28a92069bcc3"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:44:03.256060 env[1192]: time="2025-05-17T00:44:03.255994136Z" level=info msg="RemoveContainer for \"d5b9bec9a44f3d3cc71674acc85a5262292188a4e787bf40813083a9f53dd2d5\"" May 17 00:44:03.258142 kubelet[1882]: I0517 00:44:03.256740 1882 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b4da2ee-22cf-4708-9d35-28a92069bcc3-kube-api-access-99qzr" (OuterVolumeSpecName: "kube-api-access-99qzr") pod "0b4da2ee-22cf-4708-9d35-28a92069bcc3" (UID: "0b4da2ee-22cf-4708-9d35-28a92069bcc3"). InnerVolumeSpecName "kube-api-access-99qzr". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 17 00:44:03.258292 kubelet[1882]: I0517 00:44:03.258182 1882 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b4da2ee-22cf-4708-9d35-28a92069bcc3-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0b4da2ee-22cf-4708-9d35-28a92069bcc3" (UID: "0b4da2ee-22cf-4708-9d35-28a92069bcc3"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 17 00:44:03.259817 env[1192]: time="2025-05-17T00:44:03.259747419Z" level=info msg="RemoveContainer for \"d5b9bec9a44f3d3cc71674acc85a5262292188a4e787bf40813083a9f53dd2d5\" returns successfully" May 17 00:44:03.260151 kubelet[1882]: I0517 00:44:03.260123 1882 scope.go:117] "RemoveContainer" containerID="d410b65812faf59e2adb7bcf2320524effac79879c902a49655c9e699863eb19" May 17 00:44:03.260640 env[1192]: time="2025-05-17T00:44:03.260521463Z" level=error msg="ContainerStatus for \"d410b65812faf59e2adb7bcf2320524effac79879c902a49655c9e699863eb19\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d410b65812faf59e2adb7bcf2320524effac79879c902a49655c9e699863eb19\": not found" May 17 00:44:03.260874 kubelet[1882]: E0517 00:44:03.260846 1882 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d410b65812faf59e2adb7bcf2320524effac79879c902a49655c9e699863eb19\": not found" containerID="d410b65812faf59e2adb7bcf2320524effac79879c902a49655c9e699863eb19" May 17 00:44:03.261800 kubelet[1882]: I0517 00:44:03.261362 1882 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbdd99d3-5b1f-4179-baf7-f8dc6d49b781-kube-api-access-bfj7d" (OuterVolumeSpecName: "kube-api-access-bfj7d") pod "bbdd99d3-5b1f-4179-baf7-f8dc6d49b781" (UID: "bbdd99d3-5b1f-4179-baf7-f8dc6d49b781"). InnerVolumeSpecName "kube-api-access-bfj7d". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 17 00:44:03.262733 kubelet[1882]: I0517 00:44:03.262612 1882 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d410b65812faf59e2adb7bcf2320524effac79879c902a49655c9e699863eb19"} err="failed to get container status \"d410b65812faf59e2adb7bcf2320524effac79879c902a49655c9e699863eb19\": rpc error: code = NotFound desc = an error occurred when try to find container \"d410b65812faf59e2adb7bcf2320524effac79879c902a49655c9e699863eb19\": not found" May 17 00:44:03.262879 kubelet[1882]: I0517 00:44:03.262846 1882 scope.go:117] "RemoveContainer" containerID="8b2b3b8191c00d881a717658c0f8930dc5c0d3781e6e49b31b65be32a307ed20" May 17 00:44:03.263224 kubelet[1882]: I0517 00:44:03.263086 1882 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b4da2ee-22cf-4708-9d35-28a92069bcc3-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0b4da2ee-22cf-4708-9d35-28a92069bcc3" (UID: "0b4da2ee-22cf-4708-9d35-28a92069bcc3"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" May 17 00:44:03.263713 env[1192]: time="2025-05-17T00:44:03.263580509Z" level=error msg="ContainerStatus for \"8b2b3b8191c00d881a717658c0f8930dc5c0d3781e6e49b31b65be32a307ed20\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8b2b3b8191c00d881a717658c0f8930dc5c0d3781e6e49b31b65be32a307ed20\": not found" May 17 00:44:03.264031 kubelet[1882]: E0517 00:44:03.264004 1882 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8b2b3b8191c00d881a717658c0f8930dc5c0d3781e6e49b31b65be32a307ed20\": not found" containerID="8b2b3b8191c00d881a717658c0f8930dc5c0d3781e6e49b31b65be32a307ed20" May 17 00:44:03.264110 kubelet[1882]: I0517 00:44:03.264039 1882 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8b2b3b8191c00d881a717658c0f8930dc5c0d3781e6e49b31b65be32a307ed20"} err="failed to get container status \"8b2b3b8191c00d881a717658c0f8930dc5c0d3781e6e49b31b65be32a307ed20\": rpc error: code = NotFound desc = an error occurred when try to find container \"8b2b3b8191c00d881a717658c0f8930dc5c0d3781e6e49b31b65be32a307ed20\": not found" May 17 00:44:03.264110 kubelet[1882]: I0517 00:44:03.264080 1882 scope.go:117] "RemoveContainer" containerID="bdb1b5d191c1a04c3161f152a98084a309b8353effaf530c6a8d6b52bc6ed6af" May 17 00:44:03.264471 env[1192]: time="2025-05-17T00:44:03.264351282Z" level=error msg="ContainerStatus for \"bdb1b5d191c1a04c3161f152a98084a309b8353effaf530c6a8d6b52bc6ed6af\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bdb1b5d191c1a04c3161f152a98084a309b8353effaf530c6a8d6b52bc6ed6af\": not found" May 17 00:44:03.264702 kubelet[1882]: E0517 00:44:03.264677 1882 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find 
container \"bdb1b5d191c1a04c3161f152a98084a309b8353effaf530c6a8d6b52bc6ed6af\": not found" containerID="bdb1b5d191c1a04c3161f152a98084a309b8353effaf530c6a8d6b52bc6ed6af" May 17 00:44:03.264847 kubelet[1882]: I0517 00:44:03.264825 1882 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bdb1b5d191c1a04c3161f152a98084a309b8353effaf530c6a8d6b52bc6ed6af"} err="failed to get container status \"bdb1b5d191c1a04c3161f152a98084a309b8353effaf530c6a8d6b52bc6ed6af\": rpc error: code = NotFound desc = an error occurred when try to find container \"bdb1b5d191c1a04c3161f152a98084a309b8353effaf530c6a8d6b52bc6ed6af\": not found" May 17 00:44:03.265122 kubelet[1882]: I0517 00:44:03.265105 1882 scope.go:117] "RemoveContainer" containerID="65dcf16705435ef03b5fd4cfc274df01f66944a4b1c7f5f4886997433cb28265" May 17 00:44:03.265535 env[1192]: time="2025-05-17T00:44:03.265440359Z" level=error msg="ContainerStatus for \"65dcf16705435ef03b5fd4cfc274df01f66944a4b1c7f5f4886997433cb28265\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"65dcf16705435ef03b5fd4cfc274df01f66944a4b1c7f5f4886997433cb28265\": not found" May 17 00:44:03.265643 kubelet[1882]: E0517 00:44:03.265621 1882 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"65dcf16705435ef03b5fd4cfc274df01f66944a4b1c7f5f4886997433cb28265\": not found" containerID="65dcf16705435ef03b5fd4cfc274df01f66944a4b1c7f5f4886997433cb28265" May 17 00:44:03.265710 kubelet[1882]: I0517 00:44:03.265646 1882 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"65dcf16705435ef03b5fd4cfc274df01f66944a4b1c7f5f4886997433cb28265"} err="failed to get container status \"65dcf16705435ef03b5fd4cfc274df01f66944a4b1c7f5f4886997433cb28265\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"65dcf16705435ef03b5fd4cfc274df01f66944a4b1c7f5f4886997433cb28265\": not found" May 17 00:44:03.265710 kubelet[1882]: I0517 00:44:03.265671 1882 scope.go:117] "RemoveContainer" containerID="d5b9bec9a44f3d3cc71674acc85a5262292188a4e787bf40813083a9f53dd2d5" May 17 00:44:03.266085 env[1192]: time="2025-05-17T00:44:03.265962723Z" level=error msg="ContainerStatus for \"d5b9bec9a44f3d3cc71674acc85a5262292188a4e787bf40813083a9f53dd2d5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d5b9bec9a44f3d3cc71674acc85a5262292188a4e787bf40813083a9f53dd2d5\": not found" May 17 00:44:03.266165 kubelet[1882]: E0517 00:44:03.266118 1882 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d5b9bec9a44f3d3cc71674acc85a5262292188a4e787bf40813083a9f53dd2d5\": not found" containerID="d5b9bec9a44f3d3cc71674acc85a5262292188a4e787bf40813083a9f53dd2d5" May 17 00:44:03.266165 kubelet[1882]: I0517 00:44:03.266139 1882 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d5b9bec9a44f3d3cc71674acc85a5262292188a4e787bf40813083a9f53dd2d5"} err="failed to get container status \"d5b9bec9a44f3d3cc71674acc85a5262292188a4e787bf40813083a9f53dd2d5\": rpc error: code = NotFound desc = an error occurred when try to find container \"d5b9bec9a44f3d3cc71674acc85a5262292188a4e787bf40813083a9f53dd2d5\": not found" May 17 00:44:03.266165 kubelet[1882]: I0517 00:44:03.266156 1882 scope.go:117] "RemoveContainer" containerID="94fcfbba743e1b37760189e087c694721009c5acbda9f3b15cbafc5f623be1b8" May 17 00:44:03.267571 env[1192]: time="2025-05-17T00:44:03.267190372Z" level=info msg="RemoveContainer for \"94fcfbba743e1b37760189e087c694721009c5acbda9f3b15cbafc5f623be1b8\"" May 17 00:44:03.269540 env[1192]: time="2025-05-17T00:44:03.269486901Z" level=info msg="RemoveContainer for 
\"94fcfbba743e1b37760189e087c694721009c5acbda9f3b15cbafc5f623be1b8\" returns successfully" May 17 00:44:03.269993 kubelet[1882]: I0517 00:44:03.269953 1882 scope.go:117] "RemoveContainer" containerID="94fcfbba743e1b37760189e087c694721009c5acbda9f3b15cbafc5f623be1b8" May 17 00:44:03.270447 env[1192]: time="2025-05-17T00:44:03.270342722Z" level=error msg="ContainerStatus for \"94fcfbba743e1b37760189e087c694721009c5acbda9f3b15cbafc5f623be1b8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"94fcfbba743e1b37760189e087c694721009c5acbda9f3b15cbafc5f623be1b8\": not found" May 17 00:44:03.270733 kubelet[1882]: E0517 00:44:03.270704 1882 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"94fcfbba743e1b37760189e087c694721009c5acbda9f3b15cbafc5f623be1b8\": not found" containerID="94fcfbba743e1b37760189e087c694721009c5acbda9f3b15cbafc5f623be1b8" May 17 00:44:03.270846 kubelet[1882]: I0517 00:44:03.270738 1882 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"94fcfbba743e1b37760189e087c694721009c5acbda9f3b15cbafc5f623be1b8"} err="failed to get container status \"94fcfbba743e1b37760189e087c694721009c5acbda9f3b15cbafc5f623be1b8\": rpc error: code = NotFound desc = an error occurred when try to find container \"94fcfbba743e1b37760189e087c694721009c5acbda9f3b15cbafc5f623be1b8\": not found" May 17 00:44:03.344390 kubelet[1882]: I0517 00:44:03.342330 1882 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0b4da2ee-22cf-4708-9d35-28a92069bcc3-cilium-config-path\") on node \"ci-3510.3.7-n-8f6d6c1823\" DevicePath \"\"" May 17 00:44:03.344390 kubelet[1882]: I0517 00:44:03.342371 1882 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0b4da2ee-22cf-4708-9d35-28a92069bcc3-cni-path\") 
on node \"ci-3510.3.7-n-8f6d6c1823\" DevicePath \"\"" May 17 00:44:03.344390 kubelet[1882]: I0517 00:44:03.342384 1882 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0b4da2ee-22cf-4708-9d35-28a92069bcc3-hostproc\") on node \"ci-3510.3.7-n-8f6d6c1823\" DevicePath \"\"" May 17 00:44:03.344390 kubelet[1882]: I0517 00:44:03.342393 1882 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bbdd99d3-5b1f-4179-baf7-f8dc6d49b781-cilium-config-path\") on node \"ci-3510.3.7-n-8f6d6c1823\" DevicePath \"\"" May 17 00:44:03.344390 kubelet[1882]: I0517 00:44:03.342403 1882 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0b4da2ee-22cf-4708-9d35-28a92069bcc3-cilium-run\") on node \"ci-3510.3.7-n-8f6d6c1823\" DevicePath \"\"" May 17 00:44:03.344390 kubelet[1882]: I0517 00:44:03.342430 1882 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0b4da2ee-22cf-4708-9d35-28a92069bcc3-host-proc-sys-kernel\") on node \"ci-3510.3.7-n-8f6d6c1823\" DevicePath \"\"" May 17 00:44:03.344390 kubelet[1882]: I0517 00:44:03.342441 1882 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0b4da2ee-22cf-4708-9d35-28a92069bcc3-clustermesh-secrets\") on node \"ci-3510.3.7-n-8f6d6c1823\" DevicePath \"\"" May 17 00:44:03.344390 kubelet[1882]: I0517 00:44:03.342453 1882 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-99qzr\" (UniqueName: \"kubernetes.io/projected/0b4da2ee-22cf-4708-9d35-28a92069bcc3-kube-api-access-99qzr\") on node \"ci-3510.3.7-n-8f6d6c1823\" DevicePath \"\"" May 17 00:44:03.345010 kubelet[1882]: I0517 00:44:03.342462 1882 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/0b4da2ee-22cf-4708-9d35-28a92069bcc3-bpf-maps\") on node \"ci-3510.3.7-n-8f6d6c1823\" DevicePath \"\"" May 17 00:44:03.345010 kubelet[1882]: I0517 00:44:03.342470 1882 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0b4da2ee-22cf-4708-9d35-28a92069bcc3-etc-cni-netd\") on node \"ci-3510.3.7-n-8f6d6c1823\" DevicePath \"\"" May 17 00:44:03.345010 kubelet[1882]: I0517 00:44:03.342478 1882 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0b4da2ee-22cf-4708-9d35-28a92069bcc3-cilium-cgroup\") on node \"ci-3510.3.7-n-8f6d6c1823\" DevicePath \"\"" May 17 00:44:03.345010 kubelet[1882]: I0517 00:44:03.342486 1882 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b4da2ee-22cf-4708-9d35-28a92069bcc3-lib-modules\") on node \"ci-3510.3.7-n-8f6d6c1823\" DevicePath \"\"" May 17 00:44:03.345010 kubelet[1882]: I0517 00:44:03.342493 1882 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b4da2ee-22cf-4708-9d35-28a92069bcc3-xtables-lock\") on node \"ci-3510.3.7-n-8f6d6c1823\" DevicePath \"\"" May 17 00:44:03.345010 kubelet[1882]: I0517 00:44:03.342506 1882 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0b4da2ee-22cf-4708-9d35-28a92069bcc3-hubble-tls\") on node \"ci-3510.3.7-n-8f6d6c1823\" DevicePath \"\"" May 17 00:44:03.345010 kubelet[1882]: I0517 00:44:03.342519 1882 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0b4da2ee-22cf-4708-9d35-28a92069bcc3-host-proc-sys-net\") on node \"ci-3510.3.7-n-8f6d6c1823\" DevicePath \"\"" May 17 00:44:03.345010 kubelet[1882]: I0517 00:44:03.342531 1882 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bfj7d\" (UniqueName: 
\"kubernetes.io/projected/bbdd99d3-5b1f-4179-baf7-f8dc6d49b781-kube-api-access-bfj7d\") on node \"ci-3510.3.7-n-8f6d6c1823\" DevicePath \"\"" May 17 00:44:03.532173 systemd[1]: Removed slice kubepods-besteffort-podbbdd99d3_5b1f_4179_baf7_f8dc6d49b781.slice. May 17 00:44:03.535456 systemd[1]: Removed slice kubepods-burstable-pod0b4da2ee_22cf_4708_9d35_28a92069bcc3.slice. May 17 00:44:03.535618 systemd[1]: kubepods-burstable-pod0b4da2ee_22cf_4708_9d35_28a92069bcc3.slice: Consumed 9.253s CPU time. May 17 00:44:03.792225 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f9c5c7f47b350a7546de7f8cef7705e8cac5fb634737a414088b0c2887c7d30e-rootfs.mount: Deactivated successfully. May 17 00:44:03.792644 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f9c5c7f47b350a7546de7f8cef7705e8cac5fb634737a414088b0c2887c7d30e-shm.mount: Deactivated successfully. May 17 00:44:03.792822 systemd[1]: var-lib-kubelet-pods-bbdd99d3\x2d5b1f\x2d4179\x2dbaf7\x2df8dc6d49b781-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbfj7d.mount: Deactivated successfully. May 17 00:44:03.793115 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-371d494756ae5b6d44ab8c2ecb6953bdc5c059adc2cc441005d65208bc4d5965-rootfs.mount: Deactivated successfully. May 17 00:44:03.793266 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-371d494756ae5b6d44ab8c2ecb6953bdc5c059adc2cc441005d65208bc4d5965-shm.mount: Deactivated successfully. May 17 00:44:03.793441 systemd[1]: var-lib-kubelet-pods-0b4da2ee\x2d22cf\x2d4708\x2d9d35\x2d28a92069bcc3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d99qzr.mount: Deactivated successfully. May 17 00:44:03.793606 systemd[1]: var-lib-kubelet-pods-0b4da2ee\x2d22cf\x2d4708\x2d9d35\x2d28a92069bcc3-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
May 17 00:44:03.793759 systemd[1]: var-lib-kubelet-pods-0b4da2ee\x2d22cf\x2d4708\x2d9d35\x2d28a92069bcc3-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 17 00:44:03.800235 kubelet[1882]: I0517 00:44:03.800185 1882 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b4da2ee-22cf-4708-9d35-28a92069bcc3" path="/var/lib/kubelet/pods/0b4da2ee-22cf-4708-9d35-28a92069bcc3/volumes" May 17 00:44:03.801222 kubelet[1882]: I0517 00:44:03.801175 1882 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbdd99d3-5b1f-4179-baf7-f8dc6d49b781" path="/var/lib/kubelet/pods/bbdd99d3-5b1f-4179-baf7-f8dc6d49b781/volumes" May 17 00:44:04.707568 sshd[3482]: pam_unix(sshd:session): session closed for user core May 17 00:44:04.716560 systemd[1]: Started sshd@26-64.23.148.252:22-147.75.109.163:34602.service. May 17 00:44:04.719126 systemd[1]: sshd@25-64.23.148.252:22-147.75.109.163:34596.service: Deactivated successfully. May 17 00:44:04.721184 systemd[1]: session-26.scope: Deactivated successfully. May 17 00:44:04.723854 systemd-logind[1181]: Session 26 logged out. Waiting for processes to exit. May 17 00:44:04.725837 systemd-logind[1181]: Removed session 26. May 17 00:44:04.784650 sshd[3642]: Accepted publickey for core from 147.75.109.163 port 34602 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w May 17 00:44:04.787695 sshd[3642]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:44:04.793677 systemd-logind[1181]: New session 27 of user core. May 17 00:44:04.794619 systemd[1]: Started session-27.scope. May 17 00:44:05.435477 sshd[3642]: pam_unix(sshd:session): session closed for user core May 17 00:44:05.456064 systemd[1]: Started sshd@27-64.23.148.252:22-147.75.109.163:34608.service. May 17 00:44:05.457250 systemd[1]: sshd@26-64.23.148.252:22-147.75.109.163:34602.service: Deactivated successfully. 
May 17 00:44:05.458645 systemd[1]: session-27.scope: Deactivated successfully. May 17 00:44:05.462228 systemd-logind[1181]: Session 27 logged out. Waiting for processes to exit. May 17 00:44:05.467232 systemd-logind[1181]: Removed session 27. May 17 00:44:05.517367 sshd[3653]: Accepted publickey for core from 147.75.109.163 port 34608 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w May 17 00:44:05.519900 sshd[3653]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:44:05.527639 systemd[1]: Started session-28.scope. May 17 00:44:05.528546 systemd-logind[1181]: New session 28 of user core. May 17 00:44:05.554574 kubelet[1882]: I0517 00:44:05.554364 1882 memory_manager.go:355] "RemoveStaleState removing state" podUID="0b4da2ee-22cf-4708-9d35-28a92069bcc3" containerName="cilium-agent" May 17 00:44:05.554574 kubelet[1882]: I0517 00:44:05.554394 1882 memory_manager.go:355] "RemoveStaleState removing state" podUID="bbdd99d3-5b1f-4179-baf7-f8dc6d49b781" containerName="cilium-operator" May 17 00:44:05.570667 systemd[1]: Created slice kubepods-burstable-podbd9e1e0e_f574_49b5_b797_c540e0f928fb.slice. 
May 17 00:44:05.657524 kubelet[1882]: I0517 00:44:05.657466 1882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/bd9e1e0e-f574-49b5-b797-c540e0f928fb-cilium-ipsec-secrets\") pod \"cilium-ngf85\" (UID: \"bd9e1e0e-f574-49b5-b797-c540e0f928fb\") " pod="kube-system/cilium-ngf85" May 17 00:44:05.657524 kubelet[1882]: I0517 00:44:05.657515 1882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bd9e1e0e-f574-49b5-b797-c540e0f928fb-bpf-maps\") pod \"cilium-ngf85\" (UID: \"bd9e1e0e-f574-49b5-b797-c540e0f928fb\") " pod="kube-system/cilium-ngf85" May 17 00:44:05.657524 kubelet[1882]: I0517 00:44:05.657540 1882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bd9e1e0e-f574-49b5-b797-c540e0f928fb-cilium-cgroup\") pod \"cilium-ngf85\" (UID: \"bd9e1e0e-f574-49b5-b797-c540e0f928fb\") " pod="kube-system/cilium-ngf85" May 17 00:44:05.657788 kubelet[1882]: I0517 00:44:05.657557 1882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bd9e1e0e-f574-49b5-b797-c540e0f928fb-lib-modules\") pod \"cilium-ngf85\" (UID: \"bd9e1e0e-f574-49b5-b797-c540e0f928fb\") " pod="kube-system/cilium-ngf85" May 17 00:44:05.657788 kubelet[1882]: I0517 00:44:05.657573 1882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bd9e1e0e-f574-49b5-b797-c540e0f928fb-host-proc-sys-kernel\") pod \"cilium-ngf85\" (UID: \"bd9e1e0e-f574-49b5-b797-c540e0f928fb\") " pod="kube-system/cilium-ngf85" May 17 00:44:05.657788 kubelet[1882]: I0517 00:44:05.657589 1882 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bd9e1e0e-f574-49b5-b797-c540e0f928fb-cilium-run\") pod \"cilium-ngf85\" (UID: \"bd9e1e0e-f574-49b5-b797-c540e0f928fb\") " pod="kube-system/cilium-ngf85" May 17 00:44:05.657788 kubelet[1882]: I0517 00:44:05.657604 1882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bd9e1e0e-f574-49b5-b797-c540e0f928fb-hubble-tls\") pod \"cilium-ngf85\" (UID: \"bd9e1e0e-f574-49b5-b797-c540e0f928fb\") " pod="kube-system/cilium-ngf85" May 17 00:44:05.657788 kubelet[1882]: I0517 00:44:05.657621 1882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bd9e1e0e-f574-49b5-b797-c540e0f928fb-etc-cni-netd\") pod \"cilium-ngf85\" (UID: \"bd9e1e0e-f574-49b5-b797-c540e0f928fb\") " pod="kube-system/cilium-ngf85" May 17 00:44:05.657788 kubelet[1882]: I0517 00:44:05.657636 1882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bd9e1e0e-f574-49b5-b797-c540e0f928fb-cni-path\") pod \"cilium-ngf85\" (UID: \"bd9e1e0e-f574-49b5-b797-c540e0f928fb\") " pod="kube-system/cilium-ngf85" May 17 00:44:05.657990 kubelet[1882]: I0517 00:44:05.657650 1882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bd9e1e0e-f574-49b5-b797-c540e0f928fb-clustermesh-secrets\") pod \"cilium-ngf85\" (UID: \"bd9e1e0e-f574-49b5-b797-c540e0f928fb\") " pod="kube-system/cilium-ngf85" May 17 00:44:05.657990 kubelet[1882]: I0517 00:44:05.657664 1882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/bd9e1e0e-f574-49b5-b797-c540e0f928fb-host-proc-sys-net\") pod \"cilium-ngf85\" (UID: \"bd9e1e0e-f574-49b5-b797-c540e0f928fb\") " pod="kube-system/cilium-ngf85" May 17 00:44:05.657990 kubelet[1882]: I0517 00:44:05.657681 1882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgn4j\" (UniqueName: \"kubernetes.io/projected/bd9e1e0e-f574-49b5-b797-c540e0f928fb-kube-api-access-sgn4j\") pod \"cilium-ngf85\" (UID: \"bd9e1e0e-f574-49b5-b797-c540e0f928fb\") " pod="kube-system/cilium-ngf85" May 17 00:44:05.657990 kubelet[1882]: I0517 00:44:05.657700 1882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bd9e1e0e-f574-49b5-b797-c540e0f928fb-hostproc\") pod \"cilium-ngf85\" (UID: \"bd9e1e0e-f574-49b5-b797-c540e0f928fb\") " pod="kube-system/cilium-ngf85" May 17 00:44:05.657990 kubelet[1882]: I0517 00:44:05.657724 1882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bd9e1e0e-f574-49b5-b797-c540e0f928fb-xtables-lock\") pod \"cilium-ngf85\" (UID: \"bd9e1e0e-f574-49b5-b797-c540e0f928fb\") " pod="kube-system/cilium-ngf85" May 17 00:44:05.658123 kubelet[1882]: I0517 00:44:05.657744 1882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bd9e1e0e-f574-49b5-b797-c540e0f928fb-cilium-config-path\") pod \"cilium-ngf85\" (UID: \"bd9e1e0e-f574-49b5-b797-c540e0f928fb\") " pod="kube-system/cilium-ngf85" May 17 00:44:05.716397 sshd[3653]: pam_unix(sshd:session): session closed for user core May 17 00:44:05.721352 systemd[1]: sshd@27-64.23.148.252:22-147.75.109.163:34608.service: Deactivated successfully. May 17 00:44:05.726222 systemd[1]: session-28.scope: Deactivated successfully. 
May 17 00:44:05.728176 systemd-logind[1181]: Session 28 logged out. Waiting for processes to exit. May 17 00:44:05.730563 systemd[1]: Started sshd@28-64.23.148.252:22-147.75.109.163:34620.service. May 17 00:44:05.737754 systemd-logind[1181]: Removed session 28. May 17 00:44:05.752435 kubelet[1882]: E0517 00:44:05.752370 1882 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-sgn4j lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-ngf85" podUID="bd9e1e0e-f574-49b5-b797-c540e0f928fb" May 17 00:44:05.788004 env[1192]: time="2025-05-17T00:44:05.787373879Z" level=info msg="StopPodSandbox for \"371d494756ae5b6d44ab8c2ecb6953bdc5c059adc2cc441005d65208bc4d5965\"" May 17 00:44:05.788004 env[1192]: time="2025-05-17T00:44:05.787515587Z" level=info msg="TearDown network for sandbox \"371d494756ae5b6d44ab8c2ecb6953bdc5c059adc2cc441005d65208bc4d5965\" successfully" May 17 00:44:05.788004 env[1192]: time="2025-05-17T00:44:05.787586055Z" level=info msg="StopPodSandbox for \"371d494756ae5b6d44ab8c2ecb6953bdc5c059adc2cc441005d65208bc4d5965\" returns successfully" May 17 00:44:05.795034 env[1192]: time="2025-05-17T00:44:05.793075702Z" level=info msg="RemovePodSandbox for \"371d494756ae5b6d44ab8c2ecb6953bdc5c059adc2cc441005d65208bc4d5965\"" May 17 00:44:05.795034 env[1192]: time="2025-05-17T00:44:05.793141256Z" level=info msg="Forcibly stopping sandbox \"371d494756ae5b6d44ab8c2ecb6953bdc5c059adc2cc441005d65208bc4d5965\"" May 17 00:44:05.795034 env[1192]: time="2025-05-17T00:44:05.793242659Z" level=info msg="TearDown network for sandbox \"371d494756ae5b6d44ab8c2ecb6953bdc5c059adc2cc441005d65208bc4d5965\" successfully" May 17 00:44:05.796209 env[1192]: time="2025-05-17T00:44:05.796141204Z" level=info 
msg="RemovePodSandbox \"371d494756ae5b6d44ab8c2ecb6953bdc5c059adc2cc441005d65208bc4d5965\" returns successfully" May 17 00:44:05.810361 env[1192]: time="2025-05-17T00:44:05.810296454Z" level=info msg="StopPodSandbox for \"f9c5c7f47b350a7546de7f8cef7705e8cac5fb634737a414088b0c2887c7d30e\"" May 17 00:44:05.810578 env[1192]: time="2025-05-17T00:44:05.810401574Z" level=info msg="TearDown network for sandbox \"f9c5c7f47b350a7546de7f8cef7705e8cac5fb634737a414088b0c2887c7d30e\" successfully" May 17 00:44:05.810578 env[1192]: time="2025-05-17T00:44:05.810446343Z" level=info msg="StopPodSandbox for \"f9c5c7f47b350a7546de7f8cef7705e8cac5fb634737a414088b0c2887c7d30e\" returns successfully" May 17 00:44:05.813238 env[1192]: time="2025-05-17T00:44:05.811481222Z" level=info msg="RemovePodSandbox for \"f9c5c7f47b350a7546de7f8cef7705e8cac5fb634737a414088b0c2887c7d30e\"" May 17 00:44:05.813238 env[1192]: time="2025-05-17T00:44:05.811554335Z" level=info msg="Forcibly stopping sandbox \"f9c5c7f47b350a7546de7f8cef7705e8cac5fb634737a414088b0c2887c7d30e\"" May 17 00:44:05.813238 env[1192]: time="2025-05-17T00:44:05.811677456Z" level=info msg="TearDown network for sandbox \"f9c5c7f47b350a7546de7f8cef7705e8cac5fb634737a414088b0c2887c7d30e\" successfully" May 17 00:44:05.814509 env[1192]: time="2025-05-17T00:44:05.814378535Z" level=info msg="RemovePodSandbox \"f9c5c7f47b350a7546de7f8cef7705e8cac5fb634737a414088b0c2887c7d30e\" returns successfully" May 17 00:44:05.823947 sshd[3666]: Accepted publickey for core from 147.75.109.163 port 34620 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w May 17 00:44:05.825570 sshd[3666]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:44:05.834847 systemd-logind[1181]: New session 29 of user core. May 17 00:44:05.835472 systemd[1]: Started session-29.scope. 
May 17 00:44:06.010765 kubelet[1882]: E0517 00:44:06.010647 1882 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 17 00:44:06.362829 kubelet[1882]: I0517 00:44:06.362761 1882 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/bd9e1e0e-f574-49b5-b797-c540e0f928fb-cilium-ipsec-secrets\") pod \"bd9e1e0e-f574-49b5-b797-c540e0f928fb\" (UID: \"bd9e1e0e-f574-49b5-b797-c540e0f928fb\") " May 17 00:44:06.363140 kubelet[1882]: I0517 00:44:06.363102 1882 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bd9e1e0e-f574-49b5-b797-c540e0f928fb-clustermesh-secrets\") pod \"bd9e1e0e-f574-49b5-b797-c540e0f928fb\" (UID: \"bd9e1e0e-f574-49b5-b797-c540e0f928fb\") " May 17 00:44:06.363228 kubelet[1882]: I0517 00:44:06.363215 1882 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgn4j\" (UniqueName: \"kubernetes.io/projected/bd9e1e0e-f574-49b5-b797-c540e0f928fb-kube-api-access-sgn4j\") pod \"bd9e1e0e-f574-49b5-b797-c540e0f928fb\" (UID: \"bd9e1e0e-f574-49b5-b797-c540e0f928fb\") " May 17 00:44:06.363301 kubelet[1882]: I0517 00:44:06.363288 1882 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bd9e1e0e-f574-49b5-b797-c540e0f928fb-hostproc\") pod \"bd9e1e0e-f574-49b5-b797-c540e0f928fb\" (UID: \"bd9e1e0e-f574-49b5-b797-c540e0f928fb\") " May 17 00:44:06.363470 kubelet[1882]: I0517 00:44:06.363453 1882 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bd9e1e0e-f574-49b5-b797-c540e0f928fb-cilium-run\") pod \"bd9e1e0e-f574-49b5-b797-c540e0f928fb\" (UID: 
\"bd9e1e0e-f574-49b5-b797-c540e0f928fb\") " May 17 00:44:06.363555 kubelet[1882]: I0517 00:44:06.363542 1882 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bd9e1e0e-f574-49b5-b797-c540e0f928fb-etc-cni-netd\") pod \"bd9e1e0e-f574-49b5-b797-c540e0f928fb\" (UID: \"bd9e1e0e-f574-49b5-b797-c540e0f928fb\") " May 17 00:44:06.363624 kubelet[1882]: I0517 00:44:06.363612 1882 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bd9e1e0e-f574-49b5-b797-c540e0f928fb-xtables-lock\") pod \"bd9e1e0e-f574-49b5-b797-c540e0f928fb\" (UID: \"bd9e1e0e-f574-49b5-b797-c540e0f928fb\") " May 17 00:44:06.363696 kubelet[1882]: I0517 00:44:06.363678 1882 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bd9e1e0e-f574-49b5-b797-c540e0f928fb-cilium-cgroup\") pod \"bd9e1e0e-f574-49b5-b797-c540e0f928fb\" (UID: \"bd9e1e0e-f574-49b5-b797-c540e0f928fb\") " May 17 00:44:06.363792 kubelet[1882]: I0517 00:44:06.363779 1882 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bd9e1e0e-f574-49b5-b797-c540e0f928fb-lib-modules\") pod \"bd9e1e0e-f574-49b5-b797-c540e0f928fb\" (UID: \"bd9e1e0e-f574-49b5-b797-c540e0f928fb\") " May 17 00:44:06.363885 kubelet[1882]: I0517 00:44:06.363854 1882 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bd9e1e0e-f574-49b5-b797-c540e0f928fb-host-proc-sys-kernel\") pod \"bd9e1e0e-f574-49b5-b797-c540e0f928fb\" (UID: \"bd9e1e0e-f574-49b5-b797-c540e0f928fb\") " May 17 00:44:06.363984 kubelet[1882]: I0517 00:44:06.363967 1882 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/bd9e1e0e-f574-49b5-b797-c540e0f928fb-hubble-tls\") pod \"bd9e1e0e-f574-49b5-b797-c540e0f928fb\" (UID: \"bd9e1e0e-f574-49b5-b797-c540e0f928fb\") " May 17 00:44:06.364056 kubelet[1882]: I0517 00:44:06.364044 1882 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bd9e1e0e-f574-49b5-b797-c540e0f928fb-host-proc-sys-net\") pod \"bd9e1e0e-f574-49b5-b797-c540e0f928fb\" (UID: \"bd9e1e0e-f574-49b5-b797-c540e0f928fb\") " May 17 00:44:06.364139 kubelet[1882]: I0517 00:44:06.364117 1882 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bd9e1e0e-f574-49b5-b797-c540e0f928fb-cni-path\") pod \"bd9e1e0e-f574-49b5-b797-c540e0f928fb\" (UID: \"bd9e1e0e-f574-49b5-b797-c540e0f928fb\") " May 17 00:44:06.364222 kubelet[1882]: I0517 00:44:06.364209 1882 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bd9e1e0e-f574-49b5-b797-c540e0f928fb-cilium-config-path\") pod \"bd9e1e0e-f574-49b5-b797-c540e0f928fb\" (UID: \"bd9e1e0e-f574-49b5-b797-c540e0f928fb\") " May 17 00:44:06.364300 kubelet[1882]: I0517 00:44:06.364288 1882 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bd9e1e0e-f574-49b5-b797-c540e0f928fb-bpf-maps\") pod \"bd9e1e0e-f574-49b5-b797-c540e0f928fb\" (UID: \"bd9e1e0e-f574-49b5-b797-c540e0f928fb\") " May 17 00:44:06.364418 kubelet[1882]: I0517 00:44:06.364403 1882 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd9e1e0e-f574-49b5-b797-c540e0f928fb-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "bd9e1e0e-f574-49b5-b797-c540e0f928fb" (UID: "bd9e1e0e-f574-49b5-b797-c540e0f928fb"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:44:06.367710 kubelet[1882]: I0517 00:44:06.367668 1882 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd9e1e0e-f574-49b5-b797-c540e0f928fb-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "bd9e1e0e-f574-49b5-b797-c540e0f928fb" (UID: "bd9e1e0e-f574-49b5-b797-c540e0f928fb"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 17 00:44:06.369347 systemd[1]: var-lib-kubelet-pods-bd9e1e0e\x2df574\x2d49b5\x2db797\x2dc540e0f928fb-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. May 17 00:44:06.369468 systemd[1]: var-lib-kubelet-pods-bd9e1e0e\x2df574\x2d49b5\x2db797\x2dc540e0f928fb-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 17 00:44:06.371760 kubelet[1882]: I0517 00:44:06.371708 1882 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd9e1e0e-f574-49b5-b797-c540e0f928fb-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "bd9e1e0e-f574-49b5-b797-c540e0f928fb" (UID: "bd9e1e0e-f574-49b5-b797-c540e0f928fb"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 17 00:44:06.371942 kubelet[1882]: I0517 00:44:06.371794 1882 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd9e1e0e-f574-49b5-b797-c540e0f928fb-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "bd9e1e0e-f574-49b5-b797-c540e0f928fb" (UID: "bd9e1e0e-f574-49b5-b797-c540e0f928fb"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:44:06.371942 kubelet[1882]: I0517 00:44:06.371819 1882 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd9e1e0e-f574-49b5-b797-c540e0f928fb-hostproc" (OuterVolumeSpecName: "hostproc") pod "bd9e1e0e-f574-49b5-b797-c540e0f928fb" (UID: "bd9e1e0e-f574-49b5-b797-c540e0f928fb"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:44:06.371942 kubelet[1882]: I0517 00:44:06.371840 1882 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd9e1e0e-f574-49b5-b797-c540e0f928fb-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "bd9e1e0e-f574-49b5-b797-c540e0f928fb" (UID: "bd9e1e0e-f574-49b5-b797-c540e0f928fb"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:44:06.371942 kubelet[1882]: I0517 00:44:06.371902 1882 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd9e1e0e-f574-49b5-b797-c540e0f928fb-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "bd9e1e0e-f574-49b5-b797-c540e0f928fb" (UID: "bd9e1e0e-f574-49b5-b797-c540e0f928fb"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:44:06.371942 kubelet[1882]: I0517 00:44:06.371919 1882 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd9e1e0e-f574-49b5-b797-c540e0f928fb-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "bd9e1e0e-f574-49b5-b797-c540e0f928fb" (UID: "bd9e1e0e-f574-49b5-b797-c540e0f928fb"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:44:06.372116 kubelet[1882]: I0517 00:44:06.371938 1882 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd9e1e0e-f574-49b5-b797-c540e0f928fb-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "bd9e1e0e-f574-49b5-b797-c540e0f928fb" (UID: "bd9e1e0e-f574-49b5-b797-c540e0f928fb"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:44:06.372116 kubelet[1882]: I0517 00:44:06.371953 1882 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd9e1e0e-f574-49b5-b797-c540e0f928fb-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "bd9e1e0e-f574-49b5-b797-c540e0f928fb" (UID: "bd9e1e0e-f574-49b5-b797-c540e0f928fb"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:44:06.372116 kubelet[1882]: I0517 00:44:06.371969 1882 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd9e1e0e-f574-49b5-b797-c540e0f928fb-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "bd9e1e0e-f574-49b5-b797-c540e0f928fb" (UID: "bd9e1e0e-f574-49b5-b797-c540e0f928fb"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:44:06.372781 kubelet[1882]: I0517 00:44:06.372747 1882 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd9e1e0e-f574-49b5-b797-c540e0f928fb-kube-api-access-sgn4j" (OuterVolumeSpecName: "kube-api-access-sgn4j") pod "bd9e1e0e-f574-49b5-b797-c540e0f928fb" (UID: "bd9e1e0e-f574-49b5-b797-c540e0f928fb"). InnerVolumeSpecName "kube-api-access-sgn4j". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 17 00:44:06.372965 kubelet[1882]: I0517 00:44:06.372948 1882 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd9e1e0e-f574-49b5-b797-c540e0f928fb-cni-path" (OuterVolumeSpecName: "cni-path") pod "bd9e1e0e-f574-49b5-b797-c540e0f928fb" (UID: "bd9e1e0e-f574-49b5-b797-c540e0f928fb"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:44:06.374317 systemd[1]: var-lib-kubelet-pods-bd9e1e0e\x2df574\x2d49b5\x2db797\x2dc540e0f928fb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsgn4j.mount: Deactivated successfully. May 17 00:44:06.375847 kubelet[1882]: I0517 00:44:06.375811 1882 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd9e1e0e-f574-49b5-b797-c540e0f928fb-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "bd9e1e0e-f574-49b5-b797-c540e0f928fb" (UID: "bd9e1e0e-f574-49b5-b797-c540e0f928fb"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 17 00:44:06.378785 kubelet[1882]: I0517 00:44:06.378721 1882 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd9e1e0e-f574-49b5-b797-c540e0f928fb-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "bd9e1e0e-f574-49b5-b797-c540e0f928fb" (UID: "bd9e1e0e-f574-49b5-b797-c540e0f928fb"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 17 00:44:06.465389 kubelet[1882]: I0517 00:44:06.465334 1882 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bd9e1e0e-f574-49b5-b797-c540e0f928fb-host-proc-sys-net\") on node \"ci-3510.3.7-n-8f6d6c1823\" DevicePath \"\"" May 17 00:44:06.465635 kubelet[1882]: I0517 00:44:06.465610 1882 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bd9e1e0e-f574-49b5-b797-c540e0f928fb-lib-modules\") on node \"ci-3510.3.7-n-8f6d6c1823\" DevicePath \"\"" May 17 00:44:06.465802 kubelet[1882]: I0517 00:44:06.465781 1882 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bd9e1e0e-f574-49b5-b797-c540e0f928fb-host-proc-sys-kernel\") on node \"ci-3510.3.7-n-8f6d6c1823\" DevicePath \"\"" May 17 00:44:06.465948 kubelet[1882]: I0517 00:44:06.465926 1882 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bd9e1e0e-f574-49b5-b797-c540e0f928fb-hubble-tls\") on node \"ci-3510.3.7-n-8f6d6c1823\" DevicePath \"\"" May 17 00:44:06.466070 kubelet[1882]: I0517 00:44:06.466052 1882 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bd9e1e0e-f574-49b5-b797-c540e0f928fb-cni-path\") on node \"ci-3510.3.7-n-8f6d6c1823\" DevicePath \"\"" May 17 00:44:06.466184 kubelet[1882]: I0517 00:44:06.466164 1882 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bd9e1e0e-f574-49b5-b797-c540e0f928fb-cilium-config-path\") on node \"ci-3510.3.7-n-8f6d6c1823\" DevicePath \"\"" May 17 00:44:06.466284 kubelet[1882]: I0517 00:44:06.466266 1882 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bd9e1e0e-f574-49b5-b797-c540e0f928fb-bpf-maps\") on node 
\"ci-3510.3.7-n-8f6d6c1823\" DevicePath \"\"" May 17 00:44:06.466379 kubelet[1882]: I0517 00:44:06.466363 1882 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/bd9e1e0e-f574-49b5-b797-c540e0f928fb-cilium-ipsec-secrets\") on node \"ci-3510.3.7-n-8f6d6c1823\" DevicePath \"\"" May 17 00:44:06.466466 kubelet[1882]: I0517 00:44:06.466453 1882 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bd9e1e0e-f574-49b5-b797-c540e0f928fb-etc-cni-netd\") on node \"ci-3510.3.7-n-8f6d6c1823\" DevicePath \"\"" May 17 00:44:06.466538 kubelet[1882]: I0517 00:44:06.466524 1882 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bd9e1e0e-f574-49b5-b797-c540e0f928fb-clustermesh-secrets\") on node \"ci-3510.3.7-n-8f6d6c1823\" DevicePath \"\"" May 17 00:44:06.466598 kubelet[1882]: I0517 00:44:06.466586 1882 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sgn4j\" (UniqueName: \"kubernetes.io/projected/bd9e1e0e-f574-49b5-b797-c540e0f928fb-kube-api-access-sgn4j\") on node \"ci-3510.3.7-n-8f6d6c1823\" DevicePath \"\"" May 17 00:44:06.466684 kubelet[1882]: I0517 00:44:06.466670 1882 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bd9e1e0e-f574-49b5-b797-c540e0f928fb-hostproc\") on node \"ci-3510.3.7-n-8f6d6c1823\" DevicePath \"\"" May 17 00:44:06.466817 kubelet[1882]: I0517 00:44:06.466802 1882 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bd9e1e0e-f574-49b5-b797-c540e0f928fb-cilium-run\") on node \"ci-3510.3.7-n-8f6d6c1823\" DevicePath \"\"" May 17 00:44:06.466899 kubelet[1882]: I0517 00:44:06.466886 1882 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bd9e1e0e-f574-49b5-b797-c540e0f928fb-xtables-lock\") on 
node \"ci-3510.3.7-n-8f6d6c1823\" DevicePath \"\"" May 17 00:44:06.466962 kubelet[1882]: I0517 00:44:06.466951 1882 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bd9e1e0e-f574-49b5-b797-c540e0f928fb-cilium-cgroup\") on node \"ci-3510.3.7-n-8f6d6c1823\" DevicePath \"\"" May 17 00:44:06.783986 systemd[1]: var-lib-kubelet-pods-bd9e1e0e\x2df574\x2d49b5\x2db797\x2dc540e0f928fb-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 17 00:44:07.233582 systemd[1]: Removed slice kubepods-burstable-podbd9e1e0e_f574_49b5_b797_c540e0f928fb.slice. May 17 00:44:07.290456 systemd[1]: Created slice kubepods-burstable-pod3f79d71b_6344_416c_b561_dfcd898f05b7.slice. May 17 00:44:07.373479 kubelet[1882]: I0517 00:44:07.373414 1882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3f79d71b-6344-416c-b561-dfcd898f05b7-etc-cni-netd\") pod \"cilium-fbtw2\" (UID: \"3f79d71b-6344-416c-b561-dfcd898f05b7\") " pod="kube-system/cilium-fbtw2" May 17 00:44:07.373947 kubelet[1882]: I0517 00:44:07.373488 1882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3f79d71b-6344-416c-b561-dfcd898f05b7-cilium-config-path\") pod \"cilium-fbtw2\" (UID: \"3f79d71b-6344-416c-b561-dfcd898f05b7\") " pod="kube-system/cilium-fbtw2" May 17 00:44:07.373947 kubelet[1882]: I0517 00:44:07.373524 1882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3f79d71b-6344-416c-b561-dfcd898f05b7-host-proc-sys-net\") pod \"cilium-fbtw2\" (UID: \"3f79d71b-6344-416c-b561-dfcd898f05b7\") " pod="kube-system/cilium-fbtw2" May 17 00:44:07.373947 kubelet[1882]: I0517 00:44:07.373551 1882 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3f79d71b-6344-416c-b561-dfcd898f05b7-host-proc-sys-kernel\") pod \"cilium-fbtw2\" (UID: \"3f79d71b-6344-416c-b561-dfcd898f05b7\") " pod="kube-system/cilium-fbtw2" May 17 00:44:07.373947 kubelet[1882]: I0517 00:44:07.373579 1882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3f79d71b-6344-416c-b561-dfcd898f05b7-cni-path\") pod \"cilium-fbtw2\" (UID: \"3f79d71b-6344-416c-b561-dfcd898f05b7\") " pod="kube-system/cilium-fbtw2" May 17 00:44:07.373947 kubelet[1882]: I0517 00:44:07.373602 1882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3f79d71b-6344-416c-b561-dfcd898f05b7-lib-modules\") pod \"cilium-fbtw2\" (UID: \"3f79d71b-6344-416c-b561-dfcd898f05b7\") " pod="kube-system/cilium-fbtw2" May 17 00:44:07.373947 kubelet[1882]: I0517 00:44:07.373628 1882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3f79d71b-6344-416c-b561-dfcd898f05b7-bpf-maps\") pod \"cilium-fbtw2\" (UID: \"3f79d71b-6344-416c-b561-dfcd898f05b7\") " pod="kube-system/cilium-fbtw2" May 17 00:44:07.374138 kubelet[1882]: I0517 00:44:07.373648 1882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3f79d71b-6344-416c-b561-dfcd898f05b7-clustermesh-secrets\") pod \"cilium-fbtw2\" (UID: \"3f79d71b-6344-416c-b561-dfcd898f05b7\") " pod="kube-system/cilium-fbtw2" May 17 00:44:07.374138 kubelet[1882]: I0517 00:44:07.373674 1882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: 
\"kubernetes.io/secret/3f79d71b-6344-416c-b561-dfcd898f05b7-cilium-ipsec-secrets\") pod \"cilium-fbtw2\" (UID: \"3f79d71b-6344-416c-b561-dfcd898f05b7\") " pod="kube-system/cilium-fbtw2" May 17 00:44:07.374138 kubelet[1882]: I0517 00:44:07.373705 1882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3f79d71b-6344-416c-b561-dfcd898f05b7-cilium-run\") pod \"cilium-fbtw2\" (UID: \"3f79d71b-6344-416c-b561-dfcd898f05b7\") " pod="kube-system/cilium-fbtw2" May 17 00:44:07.374138 kubelet[1882]: I0517 00:44:07.373730 1882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3f79d71b-6344-416c-b561-dfcd898f05b7-cilium-cgroup\") pod \"cilium-fbtw2\" (UID: \"3f79d71b-6344-416c-b561-dfcd898f05b7\") " pod="kube-system/cilium-fbtw2" May 17 00:44:07.374138 kubelet[1882]: I0517 00:44:07.373758 1882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3f79d71b-6344-416c-b561-dfcd898f05b7-hostproc\") pod \"cilium-fbtw2\" (UID: \"3f79d71b-6344-416c-b561-dfcd898f05b7\") " pod="kube-system/cilium-fbtw2" May 17 00:44:07.374138 kubelet[1882]: I0517 00:44:07.373801 1882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3f79d71b-6344-416c-b561-dfcd898f05b7-xtables-lock\") pod \"cilium-fbtw2\" (UID: \"3f79d71b-6344-416c-b561-dfcd898f05b7\") " pod="kube-system/cilium-fbtw2" May 17 00:44:07.374311 kubelet[1882]: I0517 00:44:07.373827 1882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3f79d71b-6344-416c-b561-dfcd898f05b7-hubble-tls\") pod \"cilium-fbtw2\" (UID: \"3f79d71b-6344-416c-b561-dfcd898f05b7\") " 
pod="kube-system/cilium-fbtw2" May 17 00:44:07.374311 kubelet[1882]: I0517 00:44:07.373852 1882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twnsf\" (UniqueName: \"kubernetes.io/projected/3f79d71b-6344-416c-b561-dfcd898f05b7-kube-api-access-twnsf\") pod \"cilium-fbtw2\" (UID: \"3f79d71b-6344-416c-b561-dfcd898f05b7\") " pod="kube-system/cilium-fbtw2" May 17 00:44:07.593539 kubelet[1882]: E0517 00:44:07.593488 1882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:44:07.594564 env[1192]: time="2025-05-17T00:44:07.594503149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fbtw2,Uid:3f79d71b-6344-416c-b561-dfcd898f05b7,Namespace:kube-system,Attempt:0,}" May 17 00:44:07.614553 env[1192]: time="2025-05-17T00:44:07.614416240Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:44:07.614553 env[1192]: time="2025-05-17T00:44:07.614485518Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:44:07.614906 env[1192]: time="2025-05-17T00:44:07.614509406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:44:07.614994 env[1192]: time="2025-05-17T00:44:07.614921471Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ee7788765dca0b68cfff58f8769a1d3ea43d1fb109157670ec6e4f2451b321ad pid=3697 runtime=io.containerd.runc.v2 May 17 00:44:07.632616 systemd[1]: Started cri-containerd-ee7788765dca0b68cfff58f8769a1d3ea43d1fb109157670ec6e4f2451b321ad.scope. 
May 17 00:44:07.676594 env[1192]: time="2025-05-17T00:44:07.676539959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fbtw2,Uid:3f79d71b-6344-416c-b561-dfcd898f05b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee7788765dca0b68cfff58f8769a1d3ea43d1fb109157670ec6e4f2451b321ad\"" May 17 00:44:07.677859 kubelet[1882]: E0517 00:44:07.677825 1882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:44:07.686455 env[1192]: time="2025-05-17T00:44:07.686334788Z" level=info msg="CreateContainer within sandbox \"ee7788765dca0b68cfff58f8769a1d3ea43d1fb109157670ec6e4f2451b321ad\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 17 00:44:07.701804 env[1192]: time="2025-05-17T00:44:07.701709931Z" level=info msg="CreateContainer within sandbox \"ee7788765dca0b68cfff58f8769a1d3ea43d1fb109157670ec6e4f2451b321ad\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"35c820429ae1770152b738e9bcdaab3787ab4ee683239ebcd15505732a83847d\"" May 17 00:44:07.704602 env[1192]: time="2025-05-17T00:44:07.704091013Z" level=info msg="StartContainer for \"35c820429ae1770152b738e9bcdaab3787ab4ee683239ebcd15505732a83847d\"" May 17 00:44:07.729062 systemd[1]: Started cri-containerd-35c820429ae1770152b738e9bcdaab3787ab4ee683239ebcd15505732a83847d.scope. May 17 00:44:07.769489 env[1192]: time="2025-05-17T00:44:07.769433468Z" level=info msg="StartContainer for \"35c820429ae1770152b738e9bcdaab3787ab4ee683239ebcd15505732a83847d\" returns successfully" May 17 00:44:07.793542 systemd[1]: cri-containerd-35c820429ae1770152b738e9bcdaab3787ab4ee683239ebcd15505732a83847d.scope: Deactivated successfully. 
May 17 00:44:07.800559 kubelet[1882]: I0517 00:44:07.800362 1882 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd9e1e0e-f574-49b5-b797-c540e0f928fb" path="/var/lib/kubelet/pods/bd9e1e0e-f574-49b5-b797-c540e0f928fb/volumes" May 17 00:44:07.823270 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-35c820429ae1770152b738e9bcdaab3787ab4ee683239ebcd15505732a83847d-rootfs.mount: Deactivated successfully. May 17 00:44:07.832797 env[1192]: time="2025-05-17T00:44:07.832735827Z" level=info msg="shim disconnected" id=35c820429ae1770152b738e9bcdaab3787ab4ee683239ebcd15505732a83847d May 17 00:44:07.833234 env[1192]: time="2025-05-17T00:44:07.833207417Z" level=warning msg="cleaning up after shim disconnected" id=35c820429ae1770152b738e9bcdaab3787ab4ee683239ebcd15505732a83847d namespace=k8s.io May 17 00:44:07.833342 env[1192]: time="2025-05-17T00:44:07.833327289Z" level=info msg="cleaning up dead shim" May 17 00:44:07.844840 env[1192]: time="2025-05-17T00:44:07.843698989Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:44:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3779 runtime=io.containerd.runc.v2\n" May 17 00:44:08.233698 kubelet[1882]: E0517 00:44:08.233587 1882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:44:08.236216 env[1192]: time="2025-05-17T00:44:08.236145204Z" level=info msg="CreateContainer within sandbox \"ee7788765dca0b68cfff58f8769a1d3ea43d1fb109157670ec6e4f2451b321ad\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 17 00:44:08.249352 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3591952559.mount: Deactivated successfully. 
May 17 00:44:08.257255 env[1192]: time="2025-05-17T00:44:08.257205803Z" level=info msg="CreateContainer within sandbox \"ee7788765dca0b68cfff58f8769a1d3ea43d1fb109157670ec6e4f2451b321ad\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2599a1f6606946e7d1a6707e2c33ebcf7b6fa5e2d965e2a9d65e323ea2e52055\"" May 17 00:44:08.258091 env[1192]: time="2025-05-17T00:44:08.258059212Z" level=info msg="StartContainer for \"2599a1f6606946e7d1a6707e2c33ebcf7b6fa5e2d965e2a9d65e323ea2e52055\"" May 17 00:44:08.285775 systemd[1]: Started cri-containerd-2599a1f6606946e7d1a6707e2c33ebcf7b6fa5e2d965e2a9d65e323ea2e52055.scope. May 17 00:44:08.328984 env[1192]: time="2025-05-17T00:44:08.328930952Z" level=info msg="StartContainer for \"2599a1f6606946e7d1a6707e2c33ebcf7b6fa5e2d965e2a9d65e323ea2e52055\" returns successfully" May 17 00:44:08.338966 systemd[1]: cri-containerd-2599a1f6606946e7d1a6707e2c33ebcf7b6fa5e2d965e2a9d65e323ea2e52055.scope: Deactivated successfully. May 17 00:44:08.364277 env[1192]: time="2025-05-17T00:44:08.364224794Z" level=info msg="shim disconnected" id=2599a1f6606946e7d1a6707e2c33ebcf7b6fa5e2d965e2a9d65e323ea2e52055 May 17 00:44:08.364672 env[1192]: time="2025-05-17T00:44:08.364649180Z" level=warning msg="cleaning up after shim disconnected" id=2599a1f6606946e7d1a6707e2c33ebcf7b6fa5e2d965e2a9d65e323ea2e52055 namespace=k8s.io May 17 00:44:08.364772 env[1192]: time="2025-05-17T00:44:08.364757003Z" level=info msg="cleaning up dead shim" May 17 00:44:08.375647 env[1192]: time="2025-05-17T00:44:08.375596821Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:44:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3845 runtime=io.containerd.runc.v2\n" May 17 00:44:08.517841 kubelet[1882]: I0517 00:44:08.517776 1882 setters.go:602] "Node became not ready" node="ci-3510.3.7-n-8f6d6c1823" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-17T00:44:08Z","lastTransitionTime":"2025-05-17T00:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 17 00:44:09.237178 kubelet[1882]: E0517 00:44:09.237146 1882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:44:09.240112 env[1192]: time="2025-05-17T00:44:09.240061313Z" level=info msg="CreateContainer within sandbox \"ee7788765dca0b68cfff58f8769a1d3ea43d1fb109157670ec6e4f2451b321ad\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 17 00:44:09.259300 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3371656865.mount: Deactivated successfully. May 17 00:44:09.261621 env[1192]: time="2025-05-17T00:44:09.261575783Z" level=info msg="CreateContainer within sandbox \"ee7788765dca0b68cfff58f8769a1d3ea43d1fb109157670ec6e4f2451b321ad\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a39d7852e49c544cf7ed600cbe2f434921becc748195921bb732328db9373cad\"" May 17 00:44:09.263369 env[1192]: time="2025-05-17T00:44:09.263336254Z" level=info msg="StartContainer for \"a39d7852e49c544cf7ed600cbe2f434921becc748195921bb732328db9373cad\"" May 17 00:44:09.300010 systemd[1]: Started cri-containerd-a39d7852e49c544cf7ed600cbe2f434921becc748195921bb732328db9373cad.scope. May 17 00:44:09.348125 env[1192]: time="2025-05-17T00:44:09.348061109Z" level=info msg="StartContainer for \"a39d7852e49c544cf7ed600cbe2f434921becc748195921bb732328db9373cad\" returns successfully" May 17 00:44:09.353131 systemd[1]: cri-containerd-a39d7852e49c544cf7ed600cbe2f434921becc748195921bb732328db9373cad.scope: Deactivated successfully. 
May 17 00:44:09.379968 env[1192]: time="2025-05-17T00:44:09.379896379Z" level=info msg="shim disconnected" id=a39d7852e49c544cf7ed600cbe2f434921becc748195921bb732328db9373cad May 17 00:44:09.380473 env[1192]: time="2025-05-17T00:44:09.380447284Z" level=warning msg="cleaning up after shim disconnected" id=a39d7852e49c544cf7ed600cbe2f434921becc748195921bb732328db9373cad namespace=k8s.io May 17 00:44:09.380584 env[1192]: time="2025-05-17T00:44:09.380568816Z" level=info msg="cleaning up dead shim" May 17 00:44:09.395678 env[1192]: time="2025-05-17T00:44:09.395600098Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:44:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3903 runtime=io.containerd.runc.v2\n" May 17 00:44:09.784127 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a39d7852e49c544cf7ed600cbe2f434921becc748195921bb732328db9373cad-rootfs.mount: Deactivated successfully. May 17 00:44:10.240963 kubelet[1882]: E0517 00:44:10.240536 1882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:44:10.243849 env[1192]: time="2025-05-17T00:44:10.243794643Z" level=info msg="CreateContainer within sandbox \"ee7788765dca0b68cfff58f8769a1d3ea43d1fb109157670ec6e4f2451b321ad\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 17 00:44:10.260084 env[1192]: time="2025-05-17T00:44:10.260037769Z" level=info msg="CreateContainer within sandbox \"ee7788765dca0b68cfff58f8769a1d3ea43d1fb109157670ec6e4f2451b321ad\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ca92be9855151e59c8d6e0289b1a2f98cfc00135a83125d047e99497121a47c0\"" May 17 00:44:10.260923 env[1192]: time="2025-05-17T00:44:10.260885678Z" level=info msg="StartContainer for \"ca92be9855151e59c8d6e0289b1a2f98cfc00135a83125d047e99497121a47c0\"" May 17 00:44:10.290709 systemd[1]: Started 
cri-containerd-ca92be9855151e59c8d6e0289b1a2f98cfc00135a83125d047e99497121a47c0.scope. May 17 00:44:10.332994 systemd[1]: cri-containerd-ca92be9855151e59c8d6e0289b1a2f98cfc00135a83125d047e99497121a47c0.scope: Deactivated successfully. May 17 00:44:10.334275 env[1192]: time="2025-05-17T00:44:10.334221889Z" level=info msg="StartContainer for \"ca92be9855151e59c8d6e0289b1a2f98cfc00135a83125d047e99497121a47c0\" returns successfully" May 17 00:44:10.364273 env[1192]: time="2025-05-17T00:44:10.364218128Z" level=info msg="shim disconnected" id=ca92be9855151e59c8d6e0289b1a2f98cfc00135a83125d047e99497121a47c0 May 17 00:44:10.364273 env[1192]: time="2025-05-17T00:44:10.364265206Z" level=warning msg="cleaning up after shim disconnected" id=ca92be9855151e59c8d6e0289b1a2f98cfc00135a83125d047e99497121a47c0 namespace=k8s.io May 17 00:44:10.364273 env[1192]: time="2025-05-17T00:44:10.364274861Z" level=info msg="cleaning up dead shim" May 17 00:44:10.374621 env[1192]: time="2025-05-17T00:44:10.374568110Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:44:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3962 runtime=io.containerd.runc.v2\n" May 17 00:44:10.784220 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ca92be9855151e59c8d6e0289b1a2f98cfc00135a83125d047e99497121a47c0-rootfs.mount: Deactivated successfully. 
May 17 00:44:11.012888 kubelet[1882]: E0517 00:44:11.012746 1882 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 17 00:44:11.245537 kubelet[1882]: E0517 00:44:11.245421 1882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:44:11.248619 env[1192]: time="2025-05-17T00:44:11.248253859Z" level=info msg="CreateContainer within sandbox \"ee7788765dca0b68cfff58f8769a1d3ea43d1fb109157670ec6e4f2451b321ad\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 17 00:44:11.272454 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount219688807.mount: Deactivated successfully. May 17 00:44:11.278014 env[1192]: time="2025-05-17T00:44:11.277947031Z" level=info msg="CreateContainer within sandbox \"ee7788765dca0b68cfff58f8769a1d3ea43d1fb109157670ec6e4f2451b321ad\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"730a0da995799b7f3aacca8f60697bd91f8bb12c975542ee47a20026beacdb34\"" May 17 00:44:11.279404 env[1192]: time="2025-05-17T00:44:11.279360451Z" level=info msg="StartContainer for \"730a0da995799b7f3aacca8f60697bd91f8bb12c975542ee47a20026beacdb34\"" May 17 00:44:11.311068 systemd[1]: Started cri-containerd-730a0da995799b7f3aacca8f60697bd91f8bb12c975542ee47a20026beacdb34.scope. 
May 17 00:44:11.360338 env[1192]: time="2025-05-17T00:44:11.360140075Z" level=info msg="StartContainer for \"730a0da995799b7f3aacca8f60697bd91f8bb12c975542ee47a20026beacdb34\" returns successfully" May 17 00:44:11.909904 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) May 17 00:44:12.250610 kubelet[1882]: E0517 00:44:12.250460 1882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:44:12.273800 kubelet[1882]: I0517 00:44:12.273735 1882 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-fbtw2" podStartSLOduration=5.273704213 podStartE2EDuration="5.273704213s" podCreationTimestamp="2025-05-17 00:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:44:12.27197836 +0000 UTC m=+126.724457101" watchObservedRunningTime="2025-05-17 00:44:12.273704213 +0000 UTC m=+126.726182955" May 17 00:44:13.595639 kubelet[1882]: E0517 00:44:13.595585 1882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:44:14.198331 systemd[1]: run-containerd-runc-k8s.io-730a0da995799b7f3aacca8f60697bd91f8bb12c975542ee47a20026beacdb34-runc.FBUyie.mount: Deactivated successfully. 
May 17 00:44:15.226939 systemd-networkd[1003]: lxc_health: Link UP May 17 00:44:15.237210 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 17 00:44:15.241191 systemd-networkd[1003]: lxc_health: Gained carrier May 17 00:44:15.597458 kubelet[1882]: E0517 00:44:15.597407 1882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:44:16.258520 kubelet[1882]: E0517 00:44:16.258475 1882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:44:16.420456 systemd[1]: run-containerd-runc-k8s.io-730a0da995799b7f3aacca8f60697bd91f8bb12c975542ee47a20026beacdb34-runc.7UVcIR.mount: Deactivated successfully. May 17 00:44:16.468006 systemd-networkd[1003]: lxc_health: Gained IPv6LL May 17 00:44:17.260085 kubelet[1882]: E0517 00:44:17.260044 1882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:44:18.664008 systemd[1]: run-containerd-runc-k8s.io-730a0da995799b7f3aacca8f60697bd91f8bb12c975542ee47a20026beacdb34-runc.7idRnK.mount: Deactivated successfully. May 17 00:44:20.810500 systemd[1]: run-containerd-runc-k8s.io-730a0da995799b7f3aacca8f60697bd91f8bb12c975542ee47a20026beacdb34-runc.TxRUvy.mount: Deactivated successfully. May 17 00:44:20.959465 sshd[3666]: pam_unix(sshd:session): session closed for user core May 17 00:44:20.962624 systemd[1]: sshd@28-64.23.148.252:22-147.75.109.163:34620.service: Deactivated successfully. May 17 00:44:20.963592 systemd[1]: session-29.scope: Deactivated successfully. May 17 00:44:20.964524 systemd-logind[1181]: Session 29 logged out. Waiting for processes to exit. 
May 17 00:44:20.965431 systemd-logind[1181]: Removed session 29.