Mar 17 18:40:24.394161 kernel: Linux version 5.15.179-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Mar 17 17:12:34 -00 2025
Mar 17 18:40:24.394223 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=249ccd113f901380672c0d31e18f792e8e0344094c0e39eedc449f039418b31a
Mar 17 18:40:24.394241 kernel: BIOS-provided physical RAM map:
Mar 17 18:40:24.394250 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 17 18:40:24.394259 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 17 18:40:24.394268 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 17 18:40:24.394278 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Mar 17 18:40:24.394288 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Mar 17 18:40:24.394301 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 17 18:40:24.394310 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 17 18:40:24.394320 kernel: NX (Execute Disable) protection: active
Mar 17 18:40:24.394329 kernel: SMBIOS 2.8 present.
Mar 17 18:40:24.394338 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Mar 17 18:40:24.394348 kernel: Hypervisor detected: KVM
Mar 17 18:40:24.394359 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 17 18:40:24.394374 kernel: kvm-clock: cpu 0, msr 7019a001, primary cpu clock
Mar 17 18:40:24.394383 kernel: kvm-clock: using sched offset of 4391247662 cycles
Mar 17 18:40:24.394394 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 17 18:40:24.394405 kernel: tsc: Detected 2494.170 MHz processor
Mar 17 18:40:24.394415 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 17 18:40:24.394426 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 17 18:40:24.394436 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Mar 17 18:40:24.394446 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 17 18:40:24.394460 kernel: ACPI: Early table checksum verification disabled
Mar 17 18:40:24.394470 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Mar 17 18:40:24.394480 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:40:24.394490 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:40:24.394500 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:40:24.394511 kernel: ACPI: FACS 0x000000007FFE0000 000040
Mar 17 18:40:24.394521 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:40:24.394537 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:40:24.394548 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:40:24.394562 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:40:24.394572 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Mar 17 18:40:24.394582 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Mar 17 18:40:24.394592 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Mar 17 18:40:24.394602 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Mar 17 18:40:24.394613 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Mar 17 18:40:24.394623 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Mar 17 18:40:24.394634 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Mar 17 18:40:24.394655 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Mar 17 18:40:24.394666 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Mar 17 18:40:24.394677 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Mar 17 18:40:24.394688 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Mar 17 18:40:24.394700 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Mar 17 18:40:24.394711 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Mar 17 18:40:24.394726 kernel: Zone ranges:
Mar 17 18:40:24.394737 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 17 18:40:24.394748 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Mar 17 18:40:24.394759 kernel: Normal empty
Mar 17 18:40:24.394770 kernel: Movable zone start for each node
Mar 17 18:40:24.394781 kernel: Early memory node ranges
Mar 17 18:40:24.394792 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 17 18:40:24.394803 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Mar 17 18:40:24.394816 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Mar 17 18:40:24.394832 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 17 18:40:24.394847 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 17 18:40:24.394858 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Mar 17 18:40:24.394869 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 17 18:40:24.394880 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 17 18:40:24.394891 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 17 18:40:24.394903 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 17 18:40:24.394914 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 17 18:40:24.394925 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 17 18:40:24.394941 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 17 18:40:24.394952 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 17 18:40:24.394963 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 17 18:40:24.394975 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 17 18:40:24.394987 kernel: TSC deadline timer available
Mar 17 18:40:24.394998 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Mar 17 18:40:24.395010 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Mar 17 18:40:24.395021 kernel: Booting paravirtualized kernel on KVM
Mar 17 18:40:24.395032 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 17 18:40:24.395048 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Mar 17 18:40:24.395059 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Mar 17 18:40:24.395070 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Mar 17 18:40:24.395086 kernel: pcpu-alloc: [0] 0 1
Mar 17 18:40:24.395097 kernel: kvm-guest: stealtime: cpu 0, msr 7dc1c0c0
Mar 17 18:40:24.395108 kernel: kvm-guest: PV spinlocks disabled, no host support
Mar 17 18:40:24.395136 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Mar 17 18:40:24.399941 kernel: Policy zone: DMA32
Mar 17 18:40:24.399972 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=249ccd113f901380672c0d31e18f792e8e0344094c0e39eedc449f039418b31a
Mar 17 18:40:24.400000 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 17 18:40:24.400013 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 17 18:40:24.400027 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Mar 17 18:40:24.400040 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 17 18:40:24.400055 kernel: Memory: 1973276K/2096612K available (12294K kernel code, 2278K rwdata, 13724K rodata, 47472K init, 4108K bss, 123076K reserved, 0K cma-reserved)
Mar 17 18:40:24.400068 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 17 18:40:24.400078 kernel: Kernel/User page tables isolation: enabled
Mar 17 18:40:24.400087 kernel: ftrace: allocating 34580 entries in 136 pages
Mar 17 18:40:24.400099 kernel: ftrace: allocated 136 pages with 2 groups
Mar 17 18:40:24.400108 kernel: rcu: Hierarchical RCU implementation.
Mar 17 18:40:24.400118 kernel: rcu: RCU event tracing is enabled.
Mar 17 18:40:24.400127 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 17 18:40:24.400136 kernel: Rude variant of Tasks RCU enabled.
Mar 17 18:40:24.400157 kernel: Tracing variant of Tasks RCU enabled.
Mar 17 18:40:24.400166 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 17 18:40:24.400175 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 17 18:40:24.400184 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Mar 17 18:40:24.400206 kernel: random: crng init done
Mar 17 18:40:24.400218 kernel: Console: colour VGA+ 80x25
Mar 17 18:40:24.400230 kernel: printk: console [tty0] enabled
Mar 17 18:40:24.400244 kernel: printk: console [ttyS0] enabled
Mar 17 18:40:24.400257 kernel: ACPI: Core revision 20210730
Mar 17 18:40:24.400270 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 17 18:40:24.400282 kernel: APIC: Switch to symmetric I/O mode setup
Mar 17 18:40:24.400291 kernel: x2apic enabled
Mar 17 18:40:24.400299 kernel: Switched APIC routing to physical x2apic.
Mar 17 18:40:24.400308 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 17 18:40:24.400321 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f3b633397, max_idle_ns: 440795206106 ns
Mar 17 18:40:24.400330 kernel: Calibrating delay loop (skipped) preset value.. 4988.34 BogoMIPS (lpj=2494170)
Mar 17 18:40:24.400349 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Mar 17 18:40:24.400358 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Mar 17 18:40:24.400367 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 17 18:40:24.400376 kernel: Spectre V2 : Mitigation: Retpolines
Mar 17 18:40:24.400384 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Mar 17 18:40:24.400393 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Mar 17 18:40:24.400405 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Mar 17 18:40:24.400425 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar 17 18:40:24.400436 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Mar 17 18:40:24.400457 kernel: MDS: Mitigation: Clear CPU buffers
Mar 17 18:40:24.400469 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 17 18:40:24.400482 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 17 18:40:24.400499 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 17 18:40:24.400511 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 17 18:40:24.400524 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 17 18:40:24.400536 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Mar 17 18:40:24.400555 kernel: Freeing SMP alternatives memory: 32K
Mar 17 18:40:24.400569 kernel: pid_max: default: 32768 minimum: 301
Mar 17 18:40:24.400583 kernel: LSM: Security Framework initializing
Mar 17 18:40:24.400598 kernel: SELinux: Initializing.
Mar 17 18:40:24.400612 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 17 18:40:24.400623 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 17 18:40:24.400635 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Mar 17 18:40:24.400654 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Mar 17 18:40:24.400665 kernel: signal: max sigframe size: 1776
Mar 17 18:40:24.400676 kernel: rcu: Hierarchical SRCU implementation.
Mar 17 18:40:24.400689 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 17 18:40:24.400702 kernel: smp: Bringing up secondary CPUs ...
Mar 17 18:40:24.400715 kernel: x86: Booting SMP configuration:
Mar 17 18:40:24.400729 kernel: .... node #0, CPUs: #1
Mar 17 18:40:24.400741 kernel: kvm-clock: cpu 1, msr 7019a041, secondary cpu clock
Mar 17 18:40:24.400753 kernel: kvm-guest: stealtime: cpu 1, msr 7dd1c0c0
Mar 17 18:40:24.400773 kernel: smp: Brought up 1 node, 2 CPUs
Mar 17 18:40:24.400788 kernel: smpboot: Max logical packages: 1
Mar 17 18:40:24.400801 kernel: smpboot: Total of 2 processors activated (9976.68 BogoMIPS)
Mar 17 18:40:24.400813 kernel: devtmpfs: initialized
Mar 17 18:40:24.400826 kernel: x86/mm: Memory block size: 128MB
Mar 17 18:40:24.400838 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 17 18:40:24.400851 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 17 18:40:24.400863 kernel: pinctrl core: initialized pinctrl subsystem
Mar 17 18:40:24.400876 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 17 18:40:24.400895 kernel: audit: initializing netlink subsys (disabled)
Mar 17 18:40:24.400908 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 17 18:40:24.400923 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 17 18:40:24.400936 kernel: audit: type=2000 audit(1742236822.802:1): state=initialized audit_enabled=0 res=1
Mar 17 18:40:24.401051 kernel: cpuidle: using governor menu
Mar 17 18:40:24.401063 kernel: ACPI: bus type PCI registered
Mar 17 18:40:24.401075 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 17 18:40:24.401087 kernel: dca service started, version 1.12.1
Mar 17 18:40:24.401098 kernel: PCI: Using configuration type 1 for base access
Mar 17 18:40:24.401143 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 17 18:40:24.401155 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Mar 17 18:40:24.401168 kernel: ACPI: Added _OSI(Module Device)
Mar 17 18:40:24.401182 kernel: ACPI: Added _OSI(Processor Device)
Mar 17 18:40:24.401193 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 17 18:40:24.401205 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 17 18:40:24.401217 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Mar 17 18:40:24.401230 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Mar 17 18:40:24.401251 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Mar 17 18:40:24.401271 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 17 18:40:24.401285 kernel: ACPI: Interpreter enabled
Mar 17 18:40:24.401301 kernel: ACPI: PM: (supports S0 S5)
Mar 17 18:40:24.401313 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 17 18:40:24.401325 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 17 18:40:24.401338 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Mar 17 18:40:24.401350 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 17 18:40:24.401777 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Mar 17 18:40:24.401905 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Mar 17 18:40:24.401924 kernel: acpiphp: Slot [3] registered
Mar 17 18:40:24.401937 kernel: acpiphp: Slot [4] registered
Mar 17 18:40:24.402028 kernel: acpiphp: Slot [5] registered
Mar 17 18:40:24.402052 kernel: acpiphp: Slot [6] registered
Mar 17 18:40:24.402065 kernel: acpiphp: Slot [7] registered
Mar 17 18:40:24.402079 kernel: acpiphp: Slot [8] registered
Mar 17 18:40:24.402091 kernel: acpiphp: Slot [9] registered
Mar 17 18:40:24.402104 kernel: acpiphp: Slot [10] registered
Mar 17 18:40:24.404273 kernel: acpiphp: Slot [11] registered
Mar 17 18:40:24.404287 kernel: acpiphp: Slot [12] registered
Mar 17 18:40:24.404306 kernel: acpiphp: Slot [13] registered
Mar 17 18:40:24.404319 kernel: acpiphp: Slot [14] registered
Mar 17 18:40:24.404333 kernel: acpiphp: Slot [15] registered
Mar 17 18:40:24.404345 kernel: acpiphp: Slot [16] registered
Mar 17 18:40:24.404358 kernel: acpiphp: Slot [17] registered
Mar 17 18:40:24.404372 kernel: acpiphp: Slot [18] registered
Mar 17 18:40:24.404386 kernel: acpiphp: Slot [19] registered
Mar 17 18:40:24.404411 kernel: acpiphp: Slot [20] registered
Mar 17 18:40:24.404424 kernel: acpiphp: Slot [21] registered
Mar 17 18:40:24.404437 kernel: acpiphp: Slot [22] registered
Mar 17 18:40:24.404449 kernel: acpiphp: Slot [23] registered
Mar 17 18:40:24.404461 kernel: acpiphp: Slot [24] registered
Mar 17 18:40:24.404475 kernel: acpiphp: Slot [25] registered
Mar 17 18:40:24.404488 kernel: acpiphp: Slot [26] registered
Mar 17 18:40:24.404498 kernel: acpiphp: Slot [27] registered
Mar 17 18:40:24.404508 kernel: acpiphp: Slot [28] registered
Mar 17 18:40:24.404518 kernel: acpiphp: Slot [29] registered
Mar 17 18:40:24.404531 kernel: acpiphp: Slot [30] registered
Mar 17 18:40:24.404541 kernel: acpiphp: Slot [31] registered
Mar 17 18:40:24.404550 kernel: PCI host bridge to bus 0000:00
Mar 17 18:40:24.404820 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 17 18:40:24.404954 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 17 18:40:24.405090 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 17 18:40:24.405219 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Mar 17 18:40:24.405326 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Mar 17 18:40:24.405454 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 17 18:40:24.405656 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Mar 17 18:40:24.405809 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Mar 17 18:40:24.405998 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Mar 17 18:40:24.406153 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Mar 17 18:40:24.406323 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Mar 17 18:40:24.406464 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Mar 17 18:40:24.406612 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Mar 17 18:40:24.406761 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Mar 17 18:40:24.406892 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Mar 17 18:40:24.407004 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Mar 17 18:40:24.407204 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Mar 17 18:40:24.407369 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Mar 17 18:40:24.407522 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Mar 17 18:40:24.407769 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Mar 17 18:40:24.411455 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Mar 17 18:40:24.411697 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Mar 17 18:40:24.411866 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Mar 17 18:40:24.412032 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Mar 17 18:40:24.415441 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 17 18:40:24.415807 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Mar 17 18:40:24.415989 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Mar 17 18:40:24.416316 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Mar 17 18:40:24.416501 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Mar 17 18:40:24.416690 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 17 18:40:24.416879 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Mar 17 18:40:24.417228 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Mar 17 18:40:24.417393 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Mar 17 18:40:24.417561 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Mar 17 18:40:24.417708 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Mar 17 18:40:24.417852 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Mar 17 18:40:24.417994 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Mar 17 18:40:24.418211 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Mar 17 18:40:24.418357 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Mar 17 18:40:24.418502 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Mar 17 18:40:24.418616 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Mar 17 18:40:24.418731 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Mar 17 18:40:24.418829 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Mar 17 18:40:24.418925 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Mar 17 18:40:24.419031 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Mar 17 18:40:24.419159 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Mar 17 18:40:24.419254 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Mar 17 18:40:24.419391 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Mar 17 18:40:24.419408 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 17 18:40:24.419421 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 17 18:40:24.419436 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 17 18:40:24.419453 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 17 18:40:24.419462 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Mar 17 18:40:24.419472 kernel: iommu: Default domain type: Translated
Mar 17 18:40:24.419481 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 17 18:40:24.419587 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Mar 17 18:40:24.419684 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 17 18:40:24.419780 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Mar 17 18:40:24.419792 kernel: vgaarb: loaded
Mar 17 18:40:24.419801 kernel: pps_core: LinuxPPS API ver. 1 registered
Mar 17 18:40:24.419815 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Mar 17 18:40:24.419825 kernel: PTP clock support registered
Mar 17 18:40:24.419834 kernel: PCI: Using ACPI for IRQ routing
Mar 17 18:40:24.419843 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 17 18:40:24.419853 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 17 18:40:24.419862 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Mar 17 18:40:24.419871 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 17 18:40:24.419881 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 17 18:40:24.419895 kernel: clocksource: Switched to clocksource kvm-clock
Mar 17 18:40:24.419904 kernel: VFS: Disk quotas dquot_6.6.0
Mar 17 18:40:24.419914 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 17 18:40:24.419924 kernel: pnp: PnP ACPI init
Mar 17 18:40:24.419940 kernel: pnp: PnP ACPI: found 4 devices
Mar 17 18:40:24.419953 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 17 18:40:24.419964 kernel: NET: Registered PF_INET protocol family
Mar 17 18:40:24.419979 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 17 18:40:24.419993 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Mar 17 18:40:24.420013 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 17 18:40:24.420026 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 17 18:40:24.420038 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Mar 17 18:40:24.420051 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Mar 17 18:40:24.420065 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 17 18:40:24.420078 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 17 18:40:24.420092 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 17 18:40:24.420105 kernel: NET: Registered PF_XDP protocol family
Mar 17 18:40:24.424742 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 17 18:40:24.424965 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 17 18:40:24.425095 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 17 18:40:24.425273 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Mar 17 18:40:24.425400 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Mar 17 18:40:24.425802 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Mar 17 18:40:24.425978 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Mar 17 18:40:24.426172 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Mar 17 18:40:24.426196 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Mar 17 18:40:24.426361 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x740 took 54018 usecs
Mar 17 18:40:24.426381 kernel: PCI: CLS 0 bytes, default 64
Mar 17 18:40:24.426396 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Mar 17 18:40:24.426413 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f3b633397, max_idle_ns: 440795206106 ns
Mar 17 18:40:24.426429 kernel: Initialise system trusted keyrings
Mar 17 18:40:24.426446 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Mar 17 18:40:24.426463 kernel: Key type asymmetric registered
Mar 17 18:40:24.426476 kernel: Asymmetric key parser 'x509' registered
Mar 17 18:40:24.426490 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Mar 17 18:40:24.426509 kernel: io scheduler mq-deadline registered
Mar 17 18:40:24.426524 kernel: io scheduler kyber registered
Mar 17 18:40:24.426537 kernel: io scheduler bfq registered
Mar 17 18:40:24.426549 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 17 18:40:24.426562 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Mar 17 18:40:24.426575 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Mar 17 18:40:24.426589 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Mar 17 18:40:24.426602 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 17 18:40:24.426615 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 17 18:40:24.426636 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 17 18:40:24.426650 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 17 18:40:24.426663 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 17 18:40:24.426676 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 17 18:40:24.426945 kernel: rtc_cmos 00:03: RTC can wake from S4
Mar 17 18:40:24.429253 kernel: rtc_cmos 00:03: registered as rtc0
Mar 17 18:40:24.429625 kernel: rtc_cmos 00:03: setting system clock to 2025-03-17T18:40:23 UTC (1742236823)
Mar 17 18:40:24.429817 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Mar 17 18:40:24.429840 kernel: intel_pstate: CPU model not supported
Mar 17 18:40:24.429854 kernel: NET: Registered PF_INET6 protocol family
Mar 17 18:40:24.429868 kernel: Segment Routing with IPv6
Mar 17 18:40:24.429881 kernel: In-situ OAM (IOAM) with IPv6
Mar 17 18:40:24.429895 kernel: NET: Registered PF_PACKET protocol family
Mar 17 18:40:24.429910 kernel: Key type dns_resolver registered
Mar 17 18:40:24.429924 kernel: IPI shorthand broadcast: enabled
Mar 17 18:40:24.429937 kernel: sched_clock: Marking stable (896003199, 123803150)->(1370401495, -350595146)
Mar 17 18:40:24.429951 kernel: registered taskstats version 1
Mar 17 18:40:24.429974 kernel: Loading compiled-in X.509 certificates
Mar 17 18:40:24.429988 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.179-flatcar: d5b956bbabb2d386c0246a969032c0de9eaa8220'
Mar 17 18:40:24.430003 kernel: Key type .fscrypt registered
Mar 17 18:40:24.430019 kernel: Key type fscrypt-provisioning registered
Mar 17 18:40:24.430035 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 17 18:40:24.430049 kernel: ima: Allocated hash algorithm: sha1
Mar 17 18:40:24.430062 kernel: ima: No architecture policies found
Mar 17 18:40:24.430092 kernel: clk: Disabling unused clocks
Mar 17 18:40:24.430111 kernel: Freeing unused kernel image (initmem) memory: 47472K
Mar 17 18:40:24.430125 kernel: Write protecting the kernel read-only data: 28672k
Mar 17 18:40:24.430158 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Mar 17 18:40:24.430172 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K
Mar 17 18:40:24.430186 kernel: Run /init as init process
Mar 17 18:40:24.430199 kernel: with arguments:
Mar 17 18:40:24.430249 kernel: /init
Mar 17 18:40:24.430268 kernel: with environment:
Mar 17 18:40:24.430282 kernel: HOME=/
Mar 17 18:40:24.430303 kernel: TERM=linux
Mar 17 18:40:24.430320 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 17 18:40:24.430343 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Mar 17 18:40:24.430364 systemd[1]: Detected virtualization kvm.
Mar 17 18:40:24.430378 systemd[1]: Detected architecture x86-64.
Mar 17 18:40:24.430391 systemd[1]: Running in initrd.
Mar 17 18:40:24.430404 systemd[1]: No hostname configured, using default hostname.
Mar 17 18:40:24.430418 systemd[1]: Hostname set to .
Mar 17 18:40:24.430440 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 18:40:24.430457 systemd[1]: Queued start job for default target initrd.target.
Mar 17 18:40:24.430472 systemd[1]: Started systemd-ask-password-console.path.
Mar 17 18:40:24.430487 systemd[1]: Reached target cryptsetup.target.
Mar 17 18:40:24.430501 systemd[1]: Reached target paths.target.
Mar 17 18:40:24.430515 systemd[1]: Reached target slices.target.
Mar 17 18:40:24.430530 systemd[1]: Reached target swap.target.
Mar 17 18:40:24.430543 systemd[1]: Reached target timers.target.
Mar 17 18:40:24.430568 systemd[1]: Listening on iscsid.socket.
Mar 17 18:40:24.430583 systemd[1]: Listening on iscsiuio.socket.
Mar 17 18:40:24.430599 systemd[1]: Listening on systemd-journald-audit.socket.
Mar 17 18:40:24.430615 systemd[1]: Listening on systemd-journald-dev-log.socket.
Mar 17 18:40:24.430629 systemd[1]: Listening on systemd-journald.socket.
Mar 17 18:40:24.430644 systemd[1]: Listening on systemd-networkd.socket.
Mar 17 18:40:24.430662 systemd[1]: Listening on systemd-udevd-control.socket.
Mar 17 18:40:24.430677 systemd[1]: Listening on systemd-udevd-kernel.socket.
Mar 17 18:40:24.430699 systemd[1]: Reached target sockets.target.
Mar 17 18:40:24.430714 systemd[1]: Starting kmod-static-nodes.service...
Mar 17 18:40:24.430733 systemd[1]: Finished network-cleanup.service.
Mar 17 18:40:24.430748 systemd[1]: Starting systemd-fsck-usr.service...
Mar 17 18:40:24.430762 systemd[1]: Starting systemd-journald.service...
Mar 17 18:40:24.430781 systemd[1]: Starting systemd-modules-load.service...
Mar 17 18:40:24.430797 systemd[1]: Starting systemd-resolved.service...
Mar 17 18:40:24.430811 systemd[1]: Starting systemd-vconsole-setup.service...
Mar 17 18:40:24.430825 systemd[1]: Finished kmod-static-nodes.service.
Mar 17 18:40:24.430839 systemd[1]: Finished systemd-fsck-usr.service.
Mar 17 18:40:24.430855 kernel: audit: type=1130 audit(1742236824.378:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:24.430871 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Mar 17 18:40:24.430901 systemd-journald[183]: Journal started Mar 17 18:40:24.431042 systemd-journald[183]: Runtime Journal (/run/log/journal/53e68137e32240fca4eea17f189b5600) is 4.9M, max 39.5M, 34.5M free. Mar 17 18:40:24.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:24.403830 systemd-modules-load[184]: Inserted module 'overlay' Mar 17 18:40:24.461016 systemd[1]: Started systemd-journald.service. Mar 17 18:40:24.461068 kernel: audit: type=1130 audit(1742236824.457:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:24.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:24.444261 systemd-resolved[185]: Positive Trust Anchors: Mar 17 18:40:24.467454 kernel: audit: type=1130 audit(1742236824.461:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:24.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:24.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:24.444287 systemd-resolved[185]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 18:40:24.472818 kernel: audit: type=1130 audit(1742236824.467:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:24.474347 kernel: audit: type=1130 audit(1742236824.472:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:24.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:24.444337 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Mar 17 18:40:24.449075 systemd-resolved[185]: Defaulting to hostname 'linux'. Mar 17 18:40:24.460742 systemd[1]: Started systemd-resolved.service. Mar 17 18:40:24.462021 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Mar 17 18:40:24.471561 systemd[1]: Finished systemd-vconsole-setup.service. Mar 17 18:40:24.475309 systemd[1]: Reached target nss-lookup.target. Mar 17 18:40:24.481868 systemd[1]: Starting dracut-cmdline-ask.service... Mar 17 18:40:24.489259 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Mar 17 18:40:24.513439 kernel: Bridge firewalling registered Mar 17 18:40:24.513640 systemd-modules-load[184]: Inserted module 'br_netfilter' Mar 17 18:40:24.527082 systemd[1]: Finished dracut-cmdline-ask.service. Mar 17 18:40:24.541193 kernel: audit: type=1130 audit(1742236824.526:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:24.526000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:24.534626 systemd[1]: Starting dracut-cmdline.service... Mar 17 18:40:24.568154 kernel: SCSI subsystem initialized Mar 17 18:40:24.573344 dracut-cmdline[201]: dracut-dracut-053 Mar 17 18:40:24.581569 dracut-cmdline[201]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=249ccd113f901380672c0d31e18f792e8e0344094c0e39eedc449f039418b31a Mar 17 18:40:24.591098 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 17 18:40:24.591238 kernel: device-mapper: uevent: version 1.0.3 Mar 17 18:40:24.607829 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Mar 17 18:40:24.615884 systemd-modules-load[184]: Inserted module 'dm_multipath' Mar 17 18:40:24.617477 systemd[1]: Finished systemd-modules-load.service. 
Mar 17 18:40:24.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:24.620663 systemd[1]: Starting systemd-sysctl.service... Mar 17 18:40:24.624828 kernel: audit: type=1130 audit(1742236824.618:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:24.639475 systemd[1]: Finished systemd-sysctl.service. Mar 17 18:40:24.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:24.646663 kernel: audit: type=1130 audit(1742236824.639:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:24.856214 kernel: Loading iSCSI transport class v2.0-870. Mar 17 18:40:24.888319 kernel: iscsi: registered transport (tcp) Mar 17 18:40:24.925671 kernel: iscsi: registered transport (qla4xxx) Mar 17 18:40:24.925819 kernel: QLogic iSCSI HBA Driver Mar 17 18:40:25.057818 systemd[1]: Finished dracut-cmdline.service. Mar 17 18:40:25.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:25.065243 kernel: audit: type=1130 audit(1742236825.058:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:25.060493 systemd[1]: Starting dracut-pre-udev.service... 
Mar 17 18:40:25.155260 kernel: raid6: avx2x4 gen() 13085 MB/s Mar 17 18:40:25.201011 kernel: raid6: avx2x4 xor() 5164 MB/s Mar 17 18:40:25.201182 kernel: raid6: avx2x2 gen() 12902 MB/s Mar 17 18:40:25.219214 kernel: raid6: avx2x2 xor() 10798 MB/s Mar 17 18:40:25.265014 kernel: raid6: avx2x1 gen() 11608 MB/s Mar 17 18:40:25.265171 kernel: raid6: avx2x1 xor() 9037 MB/s Mar 17 18:40:25.294207 kernel: raid6: sse2x4 gen() 6770 MB/s Mar 17 18:40:25.321023 kernel: raid6: sse2x4 xor() 4119 MB/s Mar 17 18:40:25.364993 kernel: raid6: sse2x2 gen() 6837 MB/s Mar 17 18:40:25.365157 kernel: raid6: sse2x2 xor() 4963 MB/s Mar 17 18:40:25.389845 kernel: raid6: sse2x1 gen() 5783 MB/s Mar 17 18:40:25.407069 kernel: raid6: sse2x1 xor() 4614 MB/s Mar 17 18:40:25.407271 kernel: raid6: using algorithm avx2x4 gen() 13085 MB/s Mar 17 18:40:25.407296 kernel: raid6: .... xor() 5164 MB/s, rmw enabled Mar 17 18:40:25.408098 kernel: raid6: using avx2x2 recovery algorithm Mar 17 18:40:25.446851 kernel: xor: automatically using best checksumming function avx Mar 17 18:40:25.613512 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Mar 17 18:40:25.660649 systemd[1]: Finished dracut-pre-udev.service. Mar 17 18:40:25.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:25.665000 audit: BPF prog-id=7 op=LOAD Mar 17 18:40:25.665000 audit: BPF prog-id=8 op=LOAD Mar 17 18:40:25.666761 systemd[1]: Starting systemd-udevd.service... Mar 17 18:40:25.695814 systemd-udevd[384]: Using default interface naming scheme 'v252'. Mar 17 18:40:25.708538 systemd[1]: Started systemd-udevd.service. Mar 17 18:40:25.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:40:25.715390 systemd[1]: Starting dracut-pre-trigger.service... Mar 17 18:40:25.753527 dracut-pre-trigger[393]: rd.md=0: removing MD RAID activation Mar 17 18:40:25.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:25.875266 systemd[1]: Finished dracut-pre-trigger.service. Mar 17 18:40:25.878513 systemd[1]: Starting systemd-udev-trigger.service... Mar 17 18:40:25.998718 systemd[1]: Finished systemd-udev-trigger.service. Mar 17 18:40:25.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:26.161521 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Mar 17 18:40:26.327104 kernel: scsi host0: Virtio SCSI HBA Mar 17 18:40:26.327404 kernel: ACPI: bus type USB registered Mar 17 18:40:26.327427 kernel: usbcore: registered new interface driver usbfs Mar 17 18:40:26.327447 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 17 18:40:26.327468 kernel: GPT:9289727 != 125829119 Mar 17 18:40:26.327486 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 17 18:40:26.327504 kernel: GPT:9289727 != 125829119 Mar 17 18:40:26.327520 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 17 18:40:26.327544 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 18:40:26.327561 kernel: cryptd: max_cpu_qlen set to 1000 Mar 17 18:40:26.327577 kernel: libata version 3.00 loaded. Mar 17 18:40:26.327594 kernel: ata_piix 0000:00:01.1: version 2.13 Mar 17 18:40:26.327790 kernel: AVX2 version of gcm_enc/dec engaged. 
Mar 17 18:40:26.327811 kernel: AES CTR mode by8 optimization enabled Mar 17 18:40:26.327830 kernel: scsi host1: ata_piix Mar 17 18:40:26.328023 kernel: scsi host2: ata_piix Mar 17 18:40:26.328276 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Mar 17 18:40:26.328309 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Mar 17 18:40:26.328326 kernel: usbcore: registered new interface driver hub Mar 17 18:40:26.328345 kernel: usbcore: registered new device driver usb Mar 17 18:40:26.334537 kernel: virtio_blk virtio5: [vdb] 976 512-byte logical blocks (500 kB/488 KiB) Mar 17 18:40:26.543669 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Mar 17 18:40:26.544550 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (430) Mar 17 18:40:26.555300 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Mar 17 18:40:26.557171 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Mar 17 18:40:26.563167 kernel: ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver Mar 17 18:40:26.565343 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Mar 17 18:40:26.570173 kernel: ehci-pci: EHCI PCI platform driver Mar 17 18:40:26.578956 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Mar 17 18:40:26.580668 kernel: uhci_hcd: USB Universal Host Controller Interface driver Mar 17 18:40:26.587546 systemd[1]: Starting disk-uuid.service... Mar 17 18:40:26.606453 disk-uuid[511]: Primary Header is updated. Mar 17 18:40:26.606453 disk-uuid[511]: Secondary Entries is updated. Mar 17 18:40:26.606453 disk-uuid[511]: Secondary Header is updated. 
Mar 17 18:40:26.625257 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Mar 17 18:40:26.626392 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Mar 17 18:40:26.627776 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Mar 17 18:40:26.629055 kernel: uhci_hcd 0000:00:01.2: irq 11, io base 0x0000c180 Mar 17 18:40:26.630009 kernel: hub 1-0:1.0: USB hub found Mar 17 18:40:26.630921 kernel: hub 1-0:1.0: 2 ports detected Mar 17 18:40:26.644261 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 18:40:26.701322 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 18:40:27.681225 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 18:40:27.682038 disk-uuid[512]: The operation has completed successfully. Mar 17 18:40:27.764027 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 17 18:40:27.771983 systemd[1]: Finished disk-uuid.service. Mar 17 18:40:27.780428 kernel: kauditd_printk_skb: 6 callbacks suppressed Mar 17 18:40:27.780524 kernel: audit: type=1130 audit(1742236827.773:17): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:27.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:27.780879 systemd[1]: Starting verity-setup.service... Mar 17 18:40:27.773000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:27.789550 kernel: audit: type=1131 audit(1742236827.773:18): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:40:27.825275 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Mar 17 18:40:27.931725 systemd[1]: Found device dev-mapper-usr.device. Mar 17 18:40:27.935071 systemd[1]: Mounting sysusr-usr.mount... Mar 17 18:40:27.949371 systemd[1]: Finished verity-setup.service. Mar 17 18:40:27.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:27.954999 kernel: audit: type=1130 audit(1742236827.948:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:28.119190 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Mar 17 18:40:28.121014 systemd[1]: Mounted sysusr-usr.mount. Mar 17 18:40:28.133852 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Mar 17 18:40:28.136398 systemd[1]: Starting ignition-setup.service... Mar 17 18:40:28.146712 systemd[1]: Starting parse-ip-for-networkd.service... Mar 17 18:40:28.176722 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 17 18:40:28.179264 kernel: BTRFS info (device vda6): using free space tree Mar 17 18:40:28.179355 kernel: BTRFS info (device vda6): has skinny extents Mar 17 18:40:28.223746 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 17 18:40:28.273083 systemd[1]: Finished ignition-setup.service. Mar 17 18:40:28.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:40:28.287989 kernel: audit: type=1130 audit(1742236828.274:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:28.283316 systemd[1]: Starting ignition-fetch-offline.service... Mar 17 18:40:28.514968 systemd[1]: Finished parse-ip-for-networkd.service. Mar 17 18:40:28.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:28.521262 kernel: audit: type=1130 audit(1742236828.515:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:28.521361 kernel: audit: type=1334 audit(1742236828.519:22): prog-id=9 op=LOAD Mar 17 18:40:28.519000 audit: BPF prog-id=9 op=LOAD Mar 17 18:40:28.521088 systemd[1]: Starting systemd-networkd.service... Mar 17 18:40:28.583583 systemd-networkd[685]: lo: Link UP Mar 17 18:40:28.583597 systemd-networkd[685]: lo: Gained carrier Mar 17 18:40:28.597311 kernel: audit: type=1130 audit(1742236828.588:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:28.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:28.585408 systemd-networkd[685]: Enumeration completed Mar 17 18:40:28.586393 systemd[1]: Started systemd-networkd.service. Mar 17 18:40:28.586647 systemd-networkd[685]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 18:40:28.588817 systemd[1]: Reached target network.target. 
Mar 17 18:40:28.590339 systemd-networkd[685]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Mar 17 18:40:28.602196 systemd[1]: Starting iscsiuio.service... Mar 17 18:40:28.613276 systemd-networkd[685]: eth1: Link UP Mar 17 18:40:28.613291 systemd-networkd[685]: eth1: Gained carrier Mar 17 18:40:28.628819 systemd-networkd[685]: eth0: Link UP Mar 17 18:40:28.628841 systemd-networkd[685]: eth0: Gained carrier Mar 17 18:40:28.651841 kernel: audit: type=1130 audit(1742236828.645:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:28.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:28.645982 systemd[1]: Started iscsiuio.service. Mar 17 18:40:28.651237 systemd[1]: Starting iscsid.service... Mar 17 18:40:28.671733 iscsid[690]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Mar 17 18:40:28.671733 iscsid[690]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Mar 17 18:40:28.671733 iscsid[690]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Mar 17 18:40:28.671733 iscsid[690]: If using hardware iscsi like qla4xxx this message can be ignored. 
Mar 17 18:40:28.671733 iscsid[690]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Mar 17 18:40:28.671733 iscsid[690]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Mar 17 18:40:28.721523 kernel: audit: type=1130 audit(1742236828.682:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:28.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:28.710380 ignition[602]: Ignition 2.14.0 Mar 17 18:40:28.672922 systemd-networkd[685]: eth1: DHCPv4 address 10.124.0.23/20 acquired from 169.254.169.253 Mar 17 18:40:28.710405 ignition[602]: Stage: fetch-offline Mar 17 18:40:28.737499 kernel: audit: type=1130 audit(1742236828.728:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:28.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:28.680262 systemd[1]: Started iscsid.service. Mar 17 18:40:28.710994 ignition[602]: reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:40:28.682471 systemd-networkd[685]: eth0: DHCPv4 address 134.199.210.114/20, gateway 134.199.208.1 acquired from 169.254.169.253 Mar 17 18:40:28.711050 ignition[602]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Mar 17 18:40:28.690364 systemd[1]: Starting dracut-initqueue.service... 
Mar 17 18:40:28.720914 ignition[602]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Mar 17 18:40:28.727339 systemd[1]: Finished ignition-fetch-offline.service. Mar 17 18:40:28.722843 ignition[602]: parsed url from cmdline: "" Mar 17 18:40:28.730782 systemd[1]: Starting ignition-fetch.service... Mar 17 18:40:28.722853 ignition[602]: no config URL provided Mar 17 18:40:28.722869 ignition[602]: reading system config file "/usr/lib/ignition/user.ign" Mar 17 18:40:28.722892 ignition[602]: no config at "/usr/lib/ignition/user.ign" Mar 17 18:40:28.722905 ignition[602]: failed to fetch config: resource requires networking Mar 17 18:40:28.723571 ignition[602]: Ignition finished successfully Mar 17 18:40:28.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:28.750336 systemd[1]: Finished dracut-initqueue.service. Mar 17 18:40:28.751566 systemd[1]: Reached target remote-fs-pre.target. Mar 17 18:40:28.752604 systemd[1]: Reached target remote-cryptsetup.target. Mar 17 18:40:28.753833 systemd[1]: Reached target remote-fs.target. Mar 17 18:40:28.757010 systemd[1]: Starting dracut-pre-mount.service... Mar 17 18:40:28.779396 ignition[697]: Ignition 2.14.0 Mar 17 18:40:28.779418 ignition[697]: Stage: fetch Mar 17 18:40:28.779688 ignition[697]: reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:40:28.779723 ignition[697]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Mar 17 18:40:28.783695 systemd[1]: Finished dracut-pre-mount.service. Mar 17 18:40:28.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:40:28.785142 ignition[697]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Mar 17 18:40:28.785410 ignition[697]: parsed url from cmdline: "" Mar 17 18:40:28.785419 ignition[697]: no config URL provided Mar 17 18:40:28.785429 ignition[697]: reading system config file "/usr/lib/ignition/user.ign" Mar 17 18:40:28.785447 ignition[697]: no config at "/usr/lib/ignition/user.ign" Mar 17 18:40:28.785522 ignition[697]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Mar 17 18:40:28.834285 ignition[697]: GET result: OK Mar 17 18:40:28.834564 ignition[697]: parsing config with SHA512: 6c6528890dc2ce0b18648d78a91356c3f7036e709249e485097cb8d38428e8ede5ec0496bb0347ee09baf17bc2bd7924ab185cf62d75140b4daa7b55ff7e87b0 Mar 17 18:40:28.860491 unknown[697]: fetched base config from "system" Mar 17 18:40:28.861512 unknown[697]: fetched base config from "system" Mar 17 18:40:28.862483 unknown[697]: fetched user config from "digitalocean" Mar 17 18:40:28.864197 ignition[697]: fetch: fetch complete Mar 17 18:40:28.864776 ignition[697]: fetch: fetch passed Mar 17 18:40:28.864955 ignition[697]: Ignition finished successfully Mar 17 18:40:28.867284 systemd[1]: Finished ignition-fetch.service. Mar 17 18:40:28.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:28.876356 systemd[1]: Starting ignition-kargs.service... 
Mar 17 18:40:28.922023 ignition[710]: Ignition 2.14.0 Mar 17 18:40:28.923095 ignition[710]: Stage: kargs Mar 17 18:40:28.923832 ignition[710]: reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:40:28.924681 ignition[710]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Mar 17 18:40:28.929058 ignition[710]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Mar 17 18:40:28.935426 ignition[710]: kargs: kargs passed Mar 17 18:40:28.935658 ignition[710]: Ignition finished successfully Mar 17 18:40:28.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:28.942754 systemd[1]: Finished ignition-kargs.service. Mar 17 18:40:28.948708 systemd[1]: Starting ignition-disks.service... Mar 17 18:40:28.968718 ignition[716]: Ignition 2.14.0 Mar 17 18:40:28.968736 ignition[716]: Stage: disks Mar 17 18:40:28.969254 ignition[716]: reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:40:28.969295 ignition[716]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Mar 17 18:40:28.972543 ignition[716]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Mar 17 18:40:28.976469 ignition[716]: disks: disks passed Mar 17 18:40:28.976593 ignition[716]: Ignition finished successfully Mar 17 18:40:28.980033 systemd[1]: Finished ignition-disks.service. Mar 17 18:40:28.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:28.980904 systemd[1]: Reached target initrd-root-device.target. 
Mar 17 18:40:28.982687 systemd[1]: Reached target local-fs-pre.target. Mar 17 18:40:28.983086 systemd[1]: Reached target local-fs.target. Mar 17 18:40:28.983530 systemd[1]: Reached target sysinit.target. Mar 17 18:40:28.983910 systemd[1]: Reached target basic.target. Mar 17 18:40:28.989343 systemd[1]: Starting systemd-fsck-root.service... Mar 17 18:40:29.054506 systemd-fsck[724]: ROOT: clean, 623/553520 files, 56022/553472 blocks Mar 17 18:40:29.062057 systemd[1]: Finished systemd-fsck-root.service. Mar 17 18:40:29.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:29.068453 systemd[1]: Mounting sysroot.mount... Mar 17 18:40:29.104232 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Mar 17 18:40:29.106081 systemd[1]: Mounted sysroot.mount. Mar 17 18:40:29.108039 systemd[1]: Reached target initrd-root-fs.target. Mar 17 18:40:29.114058 systemd[1]: Mounting sysroot-usr.mount... Mar 17 18:40:29.118226 systemd[1]: Starting flatcar-digitalocean-network.service... Mar 17 18:40:29.122979 systemd[1]: Starting flatcar-metadata-hostname.service... Mar 17 18:40:29.124892 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 17 18:40:29.126320 systemd[1]: Reached target ignition-diskful.target. Mar 17 18:40:29.135538 systemd[1]: Mounted sysroot-usr.mount. Mar 17 18:40:29.164980 systemd[1]: Mounting sysroot-usr-share-oem.mount... Mar 17 18:40:29.171965 systemd[1]: Starting initrd-setup-root.service... 
Mar 17 18:40:29.211930 initrd-setup-root[737]: cut: /sysroot/etc/passwd: No such file or directory Mar 17 18:40:29.232224 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (732) Mar 17 18:40:29.248493 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 17 18:40:29.248620 kernel: BTRFS info (device vda6): using free space tree Mar 17 18:40:29.248644 kernel: BTRFS info (device vda6): has skinny extents Mar 17 18:40:29.250442 initrd-setup-root[745]: cut: /sysroot/etc/group: No such file or directory Mar 17 18:40:29.293431 initrd-setup-root[773]: cut: /sysroot/etc/shadow: No such file or directory Mar 17 18:40:29.326698 initrd-setup-root[783]: cut: /sysroot/etc/gshadow: No such file or directory Mar 17 18:40:29.329664 systemd[1]: Mounted sysroot-usr-share-oem.mount. Mar 17 18:40:29.421915 coreos-metadata[731]: Mar 17 18:40:29.421 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Mar 17 18:40:29.459594 coreos-metadata[731]: Mar 17 18:40:29.457 INFO Fetch successful Mar 17 18:40:29.461166 coreos-metadata[731]: Mar 17 18:40:29.461 INFO wrote hostname ci-3510.3.7-d-b51ee9817d to /sysroot/etc/hostname Mar 17 18:40:29.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:29.462804 systemd[1]: Finished flatcar-metadata-hostname.service. Mar 17 18:40:29.472339 coreos-metadata[730]: Mar 17 18:40:29.472 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Mar 17 18:40:29.498904 coreos-metadata[730]: Mar 17 18:40:29.491 INFO Fetch successful Mar 17 18:40:29.504348 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Mar 17 18:40:29.504513 systemd[1]: Finished flatcar-digitalocean-network.service. Mar 17 18:40:29.506315 systemd[1]: Finished initrd-setup-root.service. 
Mar 17 18:40:29.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-digitalocean-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:29.505000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-digitalocean-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:29.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:29.508692 systemd[1]: Starting ignition-mount.service... Mar 17 18:40:29.513237 systemd[1]: Starting sysroot-boot.service... Mar 17 18:40:29.536268 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Mar 17 18:40:29.536487 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Mar 17 18:40:29.558877 ignition[802]: INFO : Ignition 2.14.0 Mar 17 18:40:29.560355 ignition[802]: INFO : Stage: mount Mar 17 18:40:29.561490 ignition[802]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:40:29.562512 ignition[802]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Mar 17 18:40:29.568089 ignition[802]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Mar 17 18:40:29.571617 ignition[802]: INFO : mount: mount passed Mar 17 18:40:29.572597 ignition[802]: INFO : Ignition finished successfully Mar 17 18:40:29.581836 systemd[1]: Finished ignition-mount.service. Mar 17 18:40:29.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:40:29.587594 systemd[1]: Starting ignition-files.service... Mar 17 18:40:29.611242 systemd[1]: Mounting sysroot-usr-share-oem.mount... Mar 17 18:40:29.617945 systemd[1]: Finished sysroot-boot.service. Mar 17 18:40:29.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:29.658779 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (812) Mar 17 18:40:29.664889 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 17 18:40:29.665354 kernel: BTRFS info (device vda6): using free space tree Mar 17 18:40:29.665506 kernel: BTRFS info (device vda6): has skinny extents Mar 17 18:40:29.689964 systemd[1]: Mounted sysroot-usr-share-oem.mount. Mar 17 18:40:29.737911 ignition[831]: INFO : Ignition 2.14.0 Mar 17 18:40:29.740554 ignition[831]: INFO : Stage: files Mar 17 18:40:29.741814 ignition[831]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:40:29.741814 ignition[831]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Mar 17 18:40:29.746766 ignition[831]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Mar 17 18:40:29.759250 ignition[831]: DEBUG : files: compiled without relabeling support, skipping Mar 17 18:40:29.765177 ignition[831]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 17 18:40:29.765177 ignition[831]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 17 18:40:29.793015 ignition[831]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 17 18:40:29.799797 ignition[831]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 17 
18:40:29.804173 unknown[831]: wrote ssh authorized keys file for user: core Mar 17 18:40:29.806728 ignition[831]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 17 18:40:29.806728 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Mar 17 18:40:29.810007 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Mar 17 18:40:29.867357 systemd-networkd[685]: eth0: Gained IPv6LL Mar 17 18:40:29.880661 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 17 18:40:30.085743 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Mar 17 18:40:30.087361 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 17 18:40:30.087361 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Mar 17 18:40:30.135845 systemd-networkd[685]: eth1: Gained IPv6LL Mar 17 18:40:30.621336 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Mar 17 18:40:30.800679 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 17 18:40:30.802085 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Mar 17 18:40:30.809999 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Mar 17 18:40:30.814029 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file 
"/sysroot/home/core/nginx.yaml" Mar 17 18:40:30.814029 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 17 18:40:30.814029 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 17 18:40:30.814029 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 17 18:40:30.814029 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 17 18:40:30.814029 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 17 18:40:30.820934 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 18:40:30.820934 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 18:40:30.820934 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Mar 17 18:40:30.820934 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Mar 17 18:40:30.820934 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Mar 17 18:40:30.820934 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Mar 17 18:40:31.404459 ignition[831]: INFO : files: 
createFilesystemsFiles: createFiles: op(b): GET result: OK Mar 17 18:40:32.345726 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Mar 17 18:40:32.347644 ignition[831]: INFO : files: op(c): [started] processing unit "coreos-metadata-sshkeys@.service" Mar 17 18:40:32.349021 ignition[831]: INFO : files: op(c): [finished] processing unit "coreos-metadata-sshkeys@.service" Mar 17 18:40:32.349021 ignition[831]: INFO : files: op(d): [started] processing unit "prepare-helm.service" Mar 17 18:40:32.350796 ignition[831]: INFO : files: op(d): op(e): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 17 18:40:32.350796 ignition[831]: INFO : files: op(d): op(e): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 17 18:40:32.350796 ignition[831]: INFO : files: op(d): [finished] processing unit "prepare-helm.service" Mar 17 18:40:32.350796 ignition[831]: INFO : files: op(f): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Mar 17 18:40:32.350796 ignition[831]: INFO : files: op(f): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Mar 17 18:40:32.350796 ignition[831]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Mar 17 18:40:32.350796 ignition[831]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Mar 17 18:40:32.360316 ignition[831]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 17 18:40:32.361599 ignition[831]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 17 18:40:32.361599 ignition[831]: INFO : files: files passed Mar 17 18:40:32.361599 ignition[831]: INFO : Ignition finished successfully 
Mar 17 18:40:32.366775 systemd[1]: Finished ignition-files.service. Mar 17 18:40:32.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:32.370878 systemd[1]: Starting initrd-setup-root-after-ignition.service... Mar 17 18:40:32.371570 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Mar 17 18:40:32.373218 systemd[1]: Starting ignition-quench.service... Mar 17 18:40:32.382577 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 17 18:40:32.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:32.385000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:32.385238 systemd[1]: Finished ignition-quench.service. Mar 17 18:40:32.391098 initrd-setup-root-after-ignition[856]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 17 18:40:32.394137 systemd[1]: Finished initrd-setup-root-after-ignition.service. Mar 17 18:40:32.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:32.395080 systemd[1]: Reached target ignition-complete.target. Mar 17 18:40:32.399162 systemd[1]: Starting initrd-parse-etc.service... Mar 17 18:40:32.445982 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 17 18:40:32.446996 systemd[1]: Finished initrd-parse-etc.service. 
Mar 17 18:40:32.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:32.445000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:32.448487 systemd[1]: Reached target initrd-fs.target. Mar 17 18:40:32.449546 systemd[1]: Reached target initrd.target. Mar 17 18:40:32.450681 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Mar 17 18:40:32.453636 systemd[1]: Starting dracut-pre-pivot.service... Mar 17 18:40:32.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:32.510130 systemd[1]: Finished dracut-pre-pivot.service. Mar 17 18:40:32.514162 systemd[1]: Starting initrd-cleanup.service... Mar 17 18:40:32.545716 systemd[1]: Stopped target nss-lookup.target. Mar 17 18:40:32.547427 systemd[1]: Stopped target remote-cryptsetup.target. Mar 17 18:40:32.548937 systemd[1]: Stopped target timers.target. Mar 17 18:40:32.550350 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 17 18:40:32.551402 systemd[1]: Stopped dracut-pre-pivot.service. Mar 17 18:40:32.553024 systemd[1]: Stopped target initrd.target. Mar 17 18:40:32.554368 systemd[1]: Stopped target basic.target. Mar 17 18:40:32.555673 systemd[1]: Stopped target ignition-complete.target. Mar 17 18:40:32.556981 systemd[1]: Stopped target ignition-diskful.target. Mar 17 18:40:32.558410 systemd[1]: Stopped target initrd-root-device.target. Mar 17 18:40:32.559777 systemd[1]: Stopped target remote-fs.target. Mar 17 18:40:32.561161 systemd[1]: Stopped target remote-fs-pre.target. 
Mar 17 18:40:32.552000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:32.577183 systemd[1]: Stopped target sysinit.target. Mar 17 18:40:32.579248 systemd[1]: Stopped target local-fs.target. Mar 17 18:40:32.581043 systemd[1]: Stopped target local-fs-pre.target. Mar 17 18:40:32.582488 systemd[1]: Stopped target swap.target. Mar 17 18:40:32.583820 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 17 18:40:32.584864 systemd[1]: Stopped dracut-pre-mount.service. Mar 17 18:40:32.586436 systemd[1]: Stopped target cryptsetup.target. Mar 17 18:40:32.587757 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 17 18:40:32.588765 systemd[1]: Stopped dracut-initqueue.service. Mar 17 18:40:32.590360 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 17 18:40:32.591495 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Mar 17 18:40:32.586000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:32.590000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:32.606928 systemd[1]: ignition-files.service: Deactivated successfully. Mar 17 18:40:32.608181 systemd[1]: Stopped ignition-files.service. Mar 17 18:40:32.609727 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Mar 17 18:40:32.611100 systemd[1]: Stopped flatcar-metadata-hostname.service. Mar 17 18:40:32.614456 systemd[1]: Stopping ignition-mount.service... 
Mar 17 18:40:32.606000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:32.609000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:32.612000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:32.625108 systemd[1]: Stopping iscsiuio.service... Mar 17 18:40:32.633784 systemd[1]: Stopping sysroot-boot.service... Mar 17 18:40:32.635242 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 17 18:40:32.636428 systemd[1]: Stopped systemd-udev-trigger.service. Mar 17 18:40:32.638031 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 17 18:40:32.637000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:32.644528 systemd[1]: Stopped dracut-pre-trigger.service. Mar 17 18:40:32.645000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:32.649532 systemd[1]: iscsiuio.service: Deactivated successfully. Mar 17 18:40:32.649745 systemd[1]: Stopped iscsiuio.service. Mar 17 18:40:32.650000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:32.653334 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Mar 17 18:40:32.653495 systemd[1]: Finished initrd-cleanup.service. Mar 17 18:40:32.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:32.654000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:32.663771 ignition[869]: INFO : Ignition 2.14.0 Mar 17 18:40:32.665160 ignition[869]: INFO : Stage: umount Mar 17 18:40:32.666303 ignition[869]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:40:32.667327 ignition[869]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Mar 17 18:40:32.671923 ignition[869]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Mar 17 18:40:32.676975 ignition[869]: INFO : umount: umount passed Mar 17 18:40:32.678523 ignition[869]: INFO : Ignition finished successfully Mar 17 18:40:32.682000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:32.683000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:32.684000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:32.681834 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 17 18:40:32.682021 systemd[1]: Stopped ignition-mount.service. 
Mar 17 18:40:32.682973 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 17 18:40:32.690000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:32.683067 systemd[1]: Stopped ignition-disks.service. Mar 17 18:40:32.683829 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 17 18:40:32.692000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:32.683913 systemd[1]: Stopped ignition-kargs.service. Mar 17 18:40:32.684555 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 17 18:40:32.684628 systemd[1]: Stopped ignition-fetch.service. Mar 17 18:40:32.690693 systemd[1]: Stopped target network.target. Mar 17 18:40:32.691485 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 17 18:40:32.691623 systemd[1]: Stopped ignition-fetch-offline.service. Mar 17 18:40:32.692828 systemd[1]: Stopped target paths.target. Mar 17 18:40:32.693511 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 17 18:40:32.695281 systemd[1]: Stopped systemd-ask-password-console.path. Mar 17 18:40:32.697485 systemd[1]: Stopped target slices.target. Mar 17 18:40:32.712000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:32.698371 systemd[1]: Stopped target sockets.target. Mar 17 18:40:32.706610 systemd[1]: iscsid.socket: Deactivated successfully. Mar 17 18:40:32.707058 systemd[1]: Closed iscsid.socket. Mar 17 18:40:32.708711 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 17 18:40:32.708998 systemd[1]: Closed iscsiuio.socket. 
Mar 17 18:40:32.711134 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 17 18:40:32.711271 systemd[1]: Stopped ignition-setup.service. Mar 17 18:40:32.713250 systemd[1]: Stopping systemd-networkd.service... Mar 17 18:40:32.718602 systemd[1]: Stopping systemd-resolved.service... Mar 17 18:40:32.721254 systemd-networkd[685]: eth0: DHCPv6 lease lost Mar 17 18:40:32.724340 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 17 18:40:32.725903 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 17 18:40:32.726062 systemd[1]: Stopped sysroot-boot.service. Mar 17 18:40:32.726260 systemd-networkd[685]: eth1: DHCPv6 lease lost Mar 17 18:40:32.731000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:32.733901 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 17 18:40:32.734000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:32.734422 systemd[1]: Stopped systemd-resolved.service. Mar 17 18:40:32.743992 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 17 18:40:32.745000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:32.744713 systemd[1]: Stopped systemd-networkd.service. Mar 17 18:40:32.747000 audit: BPF prog-id=6 op=UNLOAD Mar 17 18:40:32.747000 audit: BPF prog-id=9 op=UNLOAD Mar 17 18:40:32.747567 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 17 18:40:32.747632 systemd[1]: Closed systemd-networkd.socket. Mar 17 18:40:32.754612 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
Mar 17 18:40:32.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:32.754740 systemd[1]: Stopped initrd-setup-root.service. Mar 17 18:40:32.757589 systemd[1]: Stopping network-cleanup.service... Mar 17 18:40:32.768000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:32.758161 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 17 18:40:32.770000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:32.771000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:32.758311 systemd[1]: Stopped parse-ip-for-networkd.service. Mar 17 18:40:32.769358 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 18:40:32.769489 systemd[1]: Stopped systemd-sysctl.service. Mar 17 18:40:32.770885 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 17 18:40:32.770994 systemd[1]: Stopped systemd-modules-load.service. Mar 17 18:40:32.795792 kernel: kauditd_printk_skb: 45 callbacks suppressed Mar 17 18:40:32.795843 kernel: audit: type=1131 audit(1742236832.791:72): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:32.791000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Mar 17 18:40:32.777671 systemd[1]: Stopping systemd-udevd.service... Mar 17 18:40:32.780683 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 17 18:40:32.802259 kernel: audit: type=1131 audit(1742236832.797:73): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:32.797000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:32.790375 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 17 18:40:32.790608 systemd[1]: Stopped network-cleanup.service. Mar 17 18:40:32.796328 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 17 18:40:32.812489 kernel: audit: type=1131 audit(1742236832.806:74): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:32.806000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:32.796623 systemd[1]: Stopped systemd-udevd.service. Mar 17 18:40:32.829248 kernel: audit: type=1131 audit(1742236832.824:75): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:32.824000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:32.797773 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Mar 17 18:40:32.847888 kernel: audit: type=1131 audit(1742236832.829:76): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:32.847944 kernel: audit: type=1131 audit(1742236832.842:77): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:32.847964 kernel: audit: type=1131 audit(1742236832.847:78): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:32.829000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:32.842000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:32.847000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:32.797839 systemd[1]: Closed systemd-udevd-control.socket. Mar 17 18:40:32.852000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:32.802845 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Mar 17 18:40:32.860325 kernel: audit: type=1131 audit(1742236832.852:79): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:32.802910 systemd[1]: Closed systemd-udevd-kernel.socket. Mar 17 18:40:32.872697 kernel: audit: type=1130 audit(1742236832.860:80): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:32.872766 kernel: audit: type=1131 audit(1742236832.860:81): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:32.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:32.860000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:40:32.804560 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 17 18:40:32.804698 systemd[1]: Stopped dracut-pre-udev.service. Mar 17 18:40:32.806650 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 17 18:40:32.806772 systemd[1]: Stopped dracut-cmdline.service. Mar 17 18:40:32.824752 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 17 18:40:32.824876 systemd[1]: Stopped dracut-cmdline-ask.service. Mar 17 18:40:32.833526 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Mar 17 18:40:32.841321 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Mar 17 18:40:32.841507 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Mar 17 18:40:32.846609 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 17 18:40:32.846757 systemd[1]: Stopped kmod-static-nodes.service. Mar 17 18:40:32.847583 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 18:40:32.847673 systemd[1]: Stopped systemd-vconsole-setup.service. Mar 17 18:40:32.858954 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Mar 17 18:40:32.859905 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 17 18:40:32.860119 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Mar 17 18:40:32.860893 systemd[1]: Reached target initrd-switch-root.target. Mar 17 18:40:32.874355 systemd[1]: Starting initrd-switch-root.service... Mar 17 18:40:32.898607 systemd[1]: Switching root. Mar 17 18:40:32.938201 iscsid[690]: iscsid shutting down. Mar 17 18:40:32.939079 systemd-journald[183]: Journal stopped Mar 17 18:40:40.343155 systemd-journald[183]: Received SIGTERM from PID 1 (n/a). Mar 17 18:40:40.343297 kernel: SELinux: Class mctp_socket not defined in policy. Mar 17 18:40:40.343363 kernel: SELinux: Class anon_inode not defined in policy. 
Mar 17 18:40:40.343398 kernel: SELinux: the above unknown classes and permissions will be allowed Mar 17 18:40:40.343418 kernel: SELinux: policy capability network_peer_controls=1 Mar 17 18:40:40.345357 kernel: SELinux: policy capability open_perms=1 Mar 17 18:40:40.345382 kernel: SELinux: policy capability extended_socket_class=1 Mar 17 18:40:40.345402 kernel: SELinux: policy capability always_check_network=0 Mar 17 18:40:40.345419 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 17 18:40:40.345435 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 17 18:40:40.345464 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 17 18:40:40.345492 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 17 18:40:40.345514 systemd[1]: Successfully loaded SELinux policy in 87.784ms. Mar 17 18:40:40.345554 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 17.862ms. Mar 17 18:40:40.345584 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Mar 17 18:40:40.345608 systemd[1]: Detected virtualization kvm. Mar 17 18:40:40.345631 systemd[1]: Detected architecture x86-64. Mar 17 18:40:40.345656 systemd[1]: Detected first boot. Mar 17 18:40:40.345690 systemd[1]: Hostname set to . Mar 17 18:40:40.345715 systemd[1]: Initializing machine ID from VM UUID. Mar 17 18:40:40.345738 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Mar 17 18:40:40.345762 systemd[1]: Populated /etc with preset unit settings. Mar 17 18:40:40.345787 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Mar 17 18:40:40.345813 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Mar 17 18:40:40.345841 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 18:40:40.345871 kernel: kauditd_printk_skb: 16 callbacks suppressed
Mar 17 18:40:40.345894 kernel: audit: type=1334 audit(1742236839.968:91): prog-id=12 op=LOAD
Mar 17 18:40:40.345917 kernel: audit: type=1334 audit(1742236839.971:92): prog-id=3 op=UNLOAD
Mar 17 18:40:40.345938 kernel: audit: type=1334 audit(1742236839.977:93): prog-id=13 op=LOAD
Mar 17 18:40:40.345958 kernel: audit: type=1334 audit(1742236839.978:94): prog-id=14 op=LOAD
Mar 17 18:40:40.345976 kernel: audit: type=1334 audit(1742236839.978:95): prog-id=4 op=UNLOAD
Mar 17 18:40:40.345994 kernel: audit: type=1334 audit(1742236839.978:96): prog-id=5 op=UNLOAD
Mar 17 18:40:40.346035 kernel: audit: type=1334 audit(1742236839.982:97): prog-id=15 op=LOAD
Mar 17 18:40:40.346056 kernel: audit: type=1334 audit(1742236839.982:98): prog-id=12 op=UNLOAD
Mar 17 18:40:40.346078 kernel: audit: type=1334 audit(1742236839.983:99): prog-id=16 op=LOAD
Mar 17 18:40:40.346147 kernel: audit: type=1334 audit(1742236839.984:100): prog-id=17 op=LOAD
Mar 17 18:40:40.346183 systemd[1]: iscsid.service: Deactivated successfully.
Mar 17 18:40:40.346209 systemd[1]: Stopped iscsid.service.
Mar 17 18:40:40.346239 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 17 18:40:40.346262 systemd[1]: Stopped initrd-switch-root.service.
Mar 17 18:40:40.346286 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 17 18:40:40.346315 systemd[1]: Created slice system-addon\x2dconfig.slice.
Mar 17 18:40:40.346370 systemd[1]: Created slice system-addon\x2drun.slice.
Mar 17 18:40:40.346391 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Mar 17 18:40:40.346412 systemd[1]: Created slice system-getty.slice.
Mar 17 18:40:40.346432 systemd[1]: Created slice system-modprobe.slice.
Mar 17 18:40:40.347061 systemd[1]: Created slice system-serial\x2dgetty.slice.
Mar 17 18:40:40.347885 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Mar 17 18:40:40.347971 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Mar 17 18:40:40.348013 systemd[1]: Created slice user.slice.
Mar 17 18:40:40.348038 systemd[1]: Started systemd-ask-password-console.path.
Mar 17 18:40:40.348059 systemd[1]: Started systemd-ask-password-wall.path.
Mar 17 18:40:40.348078 systemd[1]: Set up automount boot.automount.
Mar 17 18:40:40.348096 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Mar 17 18:40:40.350197 systemd[1]: Stopped target initrd-switch-root.target.
Mar 17 18:40:40.350267 systemd[1]: Stopped target initrd-fs.target.
Mar 17 18:40:40.350294 systemd[1]: Stopped target initrd-root-fs.target.
Mar 17 18:40:40.350319 systemd[1]: Reached target integritysetup.target.
Mar 17 18:40:40.350354 systemd[1]: Reached target remote-cryptsetup.target.
Mar 17 18:40:40.350380 systemd[1]: Reached target remote-fs.target.
Mar 17 18:40:40.350404 systemd[1]: Reached target slices.target.
Mar 17 18:40:40.350429 systemd[1]: Reached target swap.target.
Mar 17 18:40:40.350455 systemd[1]: Reached target torcx.target.
Mar 17 18:40:40.350479 systemd[1]: Reached target veritysetup.target.
Mar 17 18:40:40.350504 systemd[1]: Listening on systemd-coredump.socket.
Mar 17 18:40:40.350528 systemd[1]: Listening on systemd-initctl.socket.
Mar 17 18:40:40.350553 systemd[1]: Listening on systemd-networkd.socket.
Mar 17 18:40:40.350578 systemd[1]: Listening on systemd-udevd-control.socket.
Mar 17 18:40:40.350606 systemd[1]: Listening on systemd-udevd-kernel.socket.
Mar 17 18:40:40.350630 systemd[1]: Listening on systemd-userdbd.socket.
Mar 17 18:40:40.350655 systemd[1]: Mounting dev-hugepages.mount...
Mar 17 18:40:40.350680 systemd[1]: Mounting dev-mqueue.mount...
Mar 17 18:40:40.350705 systemd[1]: Mounting media.mount...
Mar 17 18:40:40.350730 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:40:40.350754 systemd[1]: Mounting sys-kernel-debug.mount...
Mar 17 18:40:40.350779 systemd[1]: Mounting sys-kernel-tracing.mount...
Mar 17 18:40:40.350804 systemd[1]: Mounting tmp.mount...
Mar 17 18:40:40.350832 systemd[1]: Starting flatcar-tmpfiles.service...
Mar 17 18:40:40.350857 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Mar 17 18:40:40.350881 systemd[1]: Starting kmod-static-nodes.service...
Mar 17 18:40:40.350905 systemd[1]: Starting modprobe@configfs.service...
Mar 17 18:40:40.350929 systemd[1]: Starting modprobe@dm_mod.service...
Mar 17 18:40:40.350954 systemd[1]: Starting modprobe@drm.service...
Mar 17 18:40:40.350978 systemd[1]: Starting modprobe@efi_pstore.service...
Mar 17 18:40:40.351000 systemd[1]: Starting modprobe@fuse.service...
Mar 17 18:40:40.351023 systemd[1]: Starting modprobe@loop.service...
Mar 17 18:40:40.351052 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 17 18:40:40.351070 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 17 18:40:40.351089 systemd[1]: Stopped systemd-fsck-root.service.
Mar 17 18:40:40.351188 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 17 18:40:40.356398 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 17 18:40:40.356503 systemd[1]: Stopped systemd-journald.service.
Mar 17 18:40:40.356540 systemd[1]: Starting systemd-journald.service...
Mar 17 18:40:40.356566 systemd[1]: Starting systemd-modules-load.service...
Mar 17 18:40:40.356592 systemd[1]: Starting systemd-network-generator.service...
Mar 17 18:40:40.356630 systemd[1]: Starting systemd-remount-fs.service...
Mar 17 18:40:40.356654 systemd[1]: Starting systemd-udev-trigger.service...
Mar 17 18:40:40.356679 kernel: loop: module loaded
Mar 17 18:40:40.356707 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 17 18:40:40.356733 systemd[1]: Stopped verity-setup.service.
Mar 17 18:40:40.356769 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:40:40.356801 systemd[1]: Mounted dev-hugepages.mount.
Mar 17 18:40:40.356828 systemd[1]: Mounted dev-mqueue.mount.
Mar 17 18:40:40.356853 systemd[1]: Mounted media.mount.
Mar 17 18:40:40.356878 systemd[1]: Mounted sys-kernel-debug.mount.
Mar 17 18:40:40.356903 systemd[1]: Mounted sys-kernel-tracing.mount.
Mar 17 18:40:40.356927 systemd[1]: Mounted tmp.mount.
Mar 17 18:40:40.356951 systemd[1]: Finished kmod-static-nodes.service.
Mar 17 18:40:40.356975 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 17 18:40:40.357000 systemd[1]: Finished modprobe@configfs.service.
Mar 17 18:40:40.357030 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 18:40:40.357054 systemd[1]: Finished modprobe@dm_mod.service.
Mar 17 18:40:40.357078 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 17 18:40:40.357104 systemd[1]: Finished modprobe@drm.service.
Mar 17 18:40:40.361347 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 18:40:40.361409 systemd[1]: Finished modprobe@efi_pstore.service.
Mar 17 18:40:40.361436 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 18:40:40.361461 systemd[1]: Finished modprobe@loop.service.
Mar 17 18:40:40.361487 kernel: fuse: init (API version 7.34)
Mar 17 18:40:40.361520 systemd[1]: Finished systemd-modules-load.service.
Mar 17 18:40:40.361546 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 17 18:40:40.361573 systemd[1]: Finished modprobe@fuse.service.
Mar 17 18:40:40.361599 systemd[1]: Finished systemd-network-generator.service.
Mar 17 18:40:40.361624 systemd[1]: Finished systemd-remount-fs.service.
Mar 17 18:40:40.361666 systemd-journald[975]: Journal started
Mar 17 18:40:40.361781 systemd-journald[975]: Runtime Journal (/run/log/journal/53e68137e32240fca4eea17f189b5600) is 4.9M, max 39.5M, 34.5M free.
Mar 17 18:40:33.284000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 17 18:40:33.402000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Mar 17 18:40:33.402000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Mar 17 18:40:33.403000 audit: BPF prog-id=10 op=LOAD
Mar 17 18:40:40.365178 systemd[1]: Started systemd-journald.service.
Mar 17 18:40:33.403000 audit: BPF prog-id=10 op=UNLOAD
Mar 17 18:40:33.403000 audit: BPF prog-id=11 op=LOAD
Mar 17 18:40:33.403000 audit: BPF prog-id=11 op=UNLOAD
Mar 17 18:40:33.637000 audit[902]: AVC avc: denied { associate } for pid=902 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Mar 17 18:40:33.637000 audit[902]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d89c a1=c0000cedf8 a2=c0000d70c0 a3=32 items=0 ppid=885 pid=902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:40:33.637000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Mar 17 18:40:33.638000 audit[902]: AVC avc: denied { associate } for pid=902 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Mar 17 18:40:33.638000 audit[902]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d975 a2=1ed a3=0 items=2 ppid=885 pid=902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:40:33.638000 audit: CWD cwd="/"
Mar 17 18:40:33.638000 audit: PATH item=0 name=(null) inode=2 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:33.638000 audit: PATH item=1 name=(null) inode=3 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:33.638000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Mar 17 18:40:39.968000 audit: BPF prog-id=12 op=LOAD
Mar 17 18:40:39.971000 audit: BPF prog-id=3 op=UNLOAD
Mar 17 18:40:39.977000 audit: BPF prog-id=13 op=LOAD
Mar 17 18:40:39.978000 audit: BPF prog-id=14 op=LOAD
Mar 17 18:40:39.978000 audit: BPF prog-id=4 op=UNLOAD
Mar 17 18:40:39.978000 audit: BPF prog-id=5 op=UNLOAD
Mar 17 18:40:39.982000 audit: BPF prog-id=15 op=LOAD
Mar 17 18:40:39.982000 audit: BPF prog-id=12 op=UNLOAD
Mar 17 18:40:39.983000 audit: BPF prog-id=16 op=LOAD
Mar 17 18:40:39.984000 audit: BPF prog-id=17 op=LOAD
Mar 17 18:40:39.984000 audit: BPF prog-id=13 op=UNLOAD
Mar 17 18:40:39.984000 audit: BPF prog-id=14 op=UNLOAD
Mar 17 18:40:39.986000 audit: BPF prog-id=18 op=LOAD
Mar 17 18:40:39.986000 audit: BPF prog-id=15 op=UNLOAD
Mar 17 18:40:39.987000 audit: BPF prog-id=19 op=LOAD
Mar 17 18:40:39.987000 audit: BPF prog-id=20 op=LOAD
Mar 17 18:40:39.987000 audit: BPF prog-id=16 op=UNLOAD
Mar 17 18:40:39.987000 audit: BPF prog-id=17 op=UNLOAD
Mar 17 18:40:39.988000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:39.991000 audit: BPF prog-id=18 op=UNLOAD
Mar 17 18:40:39.991000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:39.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:40.000000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:40.208000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:40.214000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:40.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:40.218000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:40.220000 audit: BPF prog-id=21 op=LOAD
Mar 17 18:40:40.221000 audit: BPF prog-id=22 op=LOAD
Mar 17 18:40:40.221000 audit: BPF prog-id=23 op=LOAD
Mar 17 18:40:40.221000 audit: BPF prog-id=19 op=UNLOAD
Mar 17 18:40:40.221000 audit: BPF prog-id=20 op=UNLOAD
Mar 17 18:40:40.269000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:40.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:40.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:40.315000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:40.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:40.321000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:40.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:40.326000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:40.337000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Mar 17 18:40:40.337000 audit[975]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffea8e89b50 a2=4000 a3=7ffea8e89bec items=0 ppid=1 pid=975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:40:40.337000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Mar 17 18:40:40.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:40.338000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:40.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:40.344000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:40.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:40.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:40.354000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:40.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:40.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:40.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:39.965578 systemd[1]: Queued start job for default target multi-user.target.
Mar 17 18:40:33.626613 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-03-17T18:40:33Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
Mar 17 18:40:39.965604 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Mar 17 18:40:33.632672 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-03-17T18:40:33Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Mar 17 18:40:39.987914 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 17 18:40:33.632918 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-03-17T18:40:33Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Mar 17 18:40:40.367662 systemd[1]: Reached target network-pre.target.
Mar 17 18:40:33.632996 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-03-17T18:40:33Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Mar 17 18:40:40.373016 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Mar 17 18:40:33.633015 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-03-17T18:40:33Z" level=debug msg="skipped missing lower profile" missing profile=oem
Mar 17 18:40:40.379068 systemd[1]: Mounting sys-kernel-config.mount...
Mar 17 18:40:33.633152 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-03-17T18:40:33Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Mar 17 18:40:40.386066 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 17 18:40:33.633181 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-03-17T18:40:33Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Mar 17 18:40:33.633616 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-03-17T18:40:33Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Mar 17 18:40:33.633721 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-03-17T18:40:33Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Mar 17 18:40:33.633748 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-03-17T18:40:33Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Mar 17 18:40:33.636182 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-03-17T18:40:33Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Mar 17 18:40:33.636262 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-03-17T18:40:33Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Mar 17 18:40:33.636291 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-03-17T18:40:33Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7
Mar 17 18:40:33.636309 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-03-17T18:40:33Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Mar 17 18:40:33.636333 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-03-17T18:40:33Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7
Mar 17 18:40:33.636357 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-03-17T18:40:33Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Mar 17 18:40:39.100852 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-03-17T18:40:39Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Mar 17 18:40:40.397707 systemd[1]: Starting systemd-hwdb-update.service...
Mar 17 18:40:39.101419 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-03-17T18:40:39Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Mar 17 18:40:40.401281 systemd[1]: Starting systemd-journal-flush.service...
Mar 17 18:40:39.101652 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-03-17T18:40:39Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Mar 17 18:40:40.402411 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 18:40:39.102088 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-03-17T18:40:39Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Mar 17 18:40:39.102224 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-03-17T18:40:39Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Mar 17 18:40:40.407480 systemd[1]: Starting systemd-random-seed.service...
Mar 17 18:40:39.102393 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-03-17T18:40:39Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Mar 17 18:40:40.408314 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Mar 17 18:40:40.411422 systemd[1]: Starting systemd-sysctl.service...
Mar 17 18:40:40.428232 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Mar 17 18:40:40.429254 systemd[1]: Mounted sys-kernel-config.mount.
Mar 17 18:40:40.447712 systemd-journald[975]: Time spent on flushing to /var/log/journal/53e68137e32240fca4eea17f189b5600 is 114.155ms for 1187 entries.
Mar 17 18:40:40.447712 systemd-journald[975]: System Journal (/var/log/journal/53e68137e32240fca4eea17f189b5600) is 8.0M, max 195.6M, 187.6M free.
Mar 17 18:40:40.570950 systemd-journald[975]: Received client request to flush runtime journal.
Mar 17 18:40:40.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:40.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:40.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:40.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:40.567000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:40.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:40.461032 systemd[1]: Finished flatcar-tmpfiles.service.
Mar 17 18:40:40.464379 systemd[1]: Starting systemd-sysusers.service...
Mar 17 18:40:40.474419 systemd[1]: Finished systemd-random-seed.service.
Mar 17 18:40:40.476629 systemd[1]: Reached target first-boot-complete.target.
Mar 17 18:40:40.499003 systemd[1]: Finished systemd-sysctl.service.
Mar 17 18:40:40.555961 systemd[1]: Finished systemd-udev-trigger.service.
Mar 17 18:40:40.559509 systemd[1]: Starting systemd-udev-settle.service...
Mar 17 18:40:40.566931 systemd[1]: Finished systemd-sysusers.service.
Mar 17 18:40:40.570048 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Mar 17 18:40:40.572803 systemd[1]: Finished systemd-journal-flush.service.
Mar 17 18:40:40.608492 udevadm[1010]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Mar 17 18:40:40.688664 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Mar 17 18:40:40.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:41.811470 systemd[1]: Finished systemd-hwdb-update.service.
Mar 17 18:40:41.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:41.813000 audit: BPF prog-id=24 op=LOAD
Mar 17 18:40:41.813000 audit: BPF prog-id=25 op=LOAD
Mar 17 18:40:41.813000 audit: BPF prog-id=7 op=UNLOAD
Mar 17 18:40:41.813000 audit: BPF prog-id=8 op=UNLOAD
Mar 17 18:40:41.815396 systemd[1]: Starting systemd-udevd.service...
Mar 17 18:40:41.852555 systemd-udevd[1014]: Using default interface naming scheme 'v252'.
Mar 17 18:40:41.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:41.908023 systemd[1]: Started systemd-udevd.service.
Mar 17 18:40:41.921000 audit: BPF prog-id=26 op=LOAD
Mar 17 18:40:41.926025 systemd[1]: Starting systemd-networkd.service...
Mar 17 18:40:41.943000 audit: BPF prog-id=27 op=LOAD
Mar 17 18:40:41.944000 audit: BPF prog-id=28 op=LOAD
Mar 17 18:40:41.944000 audit: BPF prog-id=29 op=LOAD
Mar 17 18:40:41.945930 systemd[1]: Starting systemd-userdbd.service...
Mar 17 18:40:42.026029 systemd[1]: Started systemd-userdbd.service.
Mar 17 18:40:42.026000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:42.072709 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:40:42.073103 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Mar 17 18:40:42.077133 systemd[1]: Starting modprobe@dm_mod.service...
Mar 17 18:40:42.084693 systemd[1]: Starting modprobe@efi_pstore.service...
Mar 17 18:40:42.091483 systemd[1]: Starting modprobe@loop.service...
Mar 17 18:40:42.092138 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 17 18:40:42.092243 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 17 18:40:42.092413 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:40:42.093294 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 18:40:42.093548 systemd[1]: Finished modprobe@dm_mod.service.
Mar 17 18:40:42.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:42.094000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:42.094617 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 18:40:42.094833 systemd[1]: Finished modprobe@efi_pstore.service.
Mar 17 18:40:42.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:42.095000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:42.097427 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 18:40:42.103751 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 18:40:42.103996 systemd[1]: Finished modprobe@loop.service.
Mar 17 18:40:42.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:42.105000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:42.105676 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Mar 17 18:40:42.209876 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Mar 17 18:40:42.214715 systemd-networkd[1015]: lo: Link UP
Mar 17 18:40:42.218249 systemd-networkd[1015]: lo: Gained carrier
Mar 17 18:40:42.219551 systemd-networkd[1015]: Enumeration completed
Mar 17 18:40:42.219952 systemd-networkd[1015]: eth1: Configuring with /run/systemd/network/10-ca:4d:64:b9:58:e7.network.
Mar 17 18:40:42.219964 systemd[1]: Started systemd-networkd.service.
Mar 17 18:40:42.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:42.222409 systemd-networkd[1015]: eth0: Configuring with /run/systemd/network/10-6a:94:98:f8:88:71.network.
Mar 17 18:40:42.226805 systemd-networkd[1015]: eth1: Link UP
Mar 17 18:40:42.227380 systemd-networkd[1015]: eth1: Gained carrier
Mar 17 18:40:42.231359 systemd-networkd[1015]: eth0: Link UP
Mar 17 18:40:42.231608 systemd-networkd[1015]: eth0: Gained carrier
Mar 17 18:40:42.276155 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Mar 17 18:40:42.297157 kernel: ACPI: button: Power Button [PWRF]
Mar 17 18:40:42.302000 audit[1016]: AVC avc: denied { confidentiality } for pid=1016 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Mar 17 18:40:42.302000 audit[1016]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55b80a7a55e0 a1=338ac a2=7f75a2ffabc5 a3=5 items=110 ppid=1014 pid=1016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:40:42.302000 audit: CWD cwd="/"
Mar 17 18:40:42.302000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=1 name=(null) inode=14072 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=2 name=(null) inode=14072 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=3 name=(null) inode=14073 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=4 name=(null) inode=14072 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=5 name=(null) inode=14074 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=6 name=(null) inode=14072 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=7 name=(null) inode=14075 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=8 name=(null) inode=14075 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=9 name=(null) inode=14076 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=10 name=(null) inode=14075 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=11 name=(null) inode=14077 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=12 name=(null) inode=14075 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=13 name=(null) inode=14078 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=14 name=(null) inode=14075 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=15 name=(null) inode=14079 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=16 name=(null) inode=14075 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=17 name=(null) inode=14080 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=18 name=(null) inode=14072 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=19 name=(null) inode=14081 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=20 name=(null) inode=14081 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=21 name=(null) inode=14082 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=22 name=(null) inode=14081 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=23 name=(null) inode=14083 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=24 name=(null) inode=14081 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=25 name=(null) inode=14084 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=26 name=(null) inode=14081 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=27 name=(null) inode=14085 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=28 name=(null) inode=14081 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=29 name=(null) inode=14086 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=30 name=(null) inode=14072 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=31 name=(null) inode=14087 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=32 name=(null) inode=14087 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=33 name=(null) inode=14088 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=34 name=(null) inode=14087 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=35 name=(null) inode=14089 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=36 name=(null) inode=14087 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=37 name=(null) inode=14090 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=38 name=(null) inode=14087 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=39 name=(null) inode=14091 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=40 name=(null) inode=14087 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=41 name=(null) inode=14092 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=42 name=(null) inode=14072 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=43 name=(null) inode=14093 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=44 name=(null) inode=14093 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=45 name=(null) inode=14094 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=46 name=(null) inode=14093 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=47 name=(null) inode=14095 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=48 name=(null) inode=14093 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=49 name=(null) inode=14096 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=50 name=(null) inode=14093 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=51 name=(null) inode=14097 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=52 name=(null) inode=14093 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=53 name=(null) inode=14098 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=55 name=(null) inode=14099 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=56 name=(null) inode=14099 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=57 name=(null) inode=14100 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=58 name=(null) inode=14099 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=59 name=(null) inode=14101 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=60 name=(null) inode=14099 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=61 name=(null) inode=14102 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=62 name=(null) inode=14102 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=63 name=(null) inode=14103 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=64 name=(null) inode=14102 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=65 name=(null) inode=14104 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=66 name=(null) inode=14102 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=67 name=(null) inode=14105 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=68 name=(null) inode=14102 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=69 name=(null) inode=14106 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=70 name=(null) inode=14102 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=71 name=(null) inode=14107 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=72 name=(null) inode=14099 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=73 name=(null) inode=14108 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=74 name=(null) inode=14108 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=75 name=(null) inode=14109 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=76 name=(null) inode=14108 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=77 name=(null) inode=14110 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=78 name=(null) inode=14108 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=79 name=(null) inode=14111 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=80 name=(null) inode=14108 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=81 name=(null) inode=14112 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=82 name=(null) inode=14108 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=83 name=(null) inode=14113 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=84 name=(null) inode=14099 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=85 name=(null) inode=14114 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=86 name=(null) inode=14114 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=87 name=(null) inode=14115 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=88 name=(null) inode=14114 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=89 name=(null) inode=14116 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=90 name=(null) inode=14114 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=91 name=(null) inode=14117 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=92 name=(null) inode=14114 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=93 name=(null) inode=14118 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=94 name=(null) inode=14114 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=95 name=(null) inode=14119 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=96 name=(null) inode=14099 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=97 name=(null) inode=14120 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=98 name=(null) inode=14120 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=99 name=(null) inode=14121 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=100 name=(null) inode=14120 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=101 name=(null) inode=14122 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=102 name=(null) inode=14120 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=103 name=(null) inode=14123 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=104 name=(null) inode=14120 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=105 name=(null) inode=14124 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=106 name=(null) inode=14120 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=107 name=(null) inode=14125 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PATH item=109 name=(null) inode=14126 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:40:42.302000 audit: PROCTITLE proctitle="(udev-worker)"
Mar 17 18:40:42.379172 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Mar 17 18:40:42.404239 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Mar 17 18:40:42.407541 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Mar 17 18:40:42.435157 kernel: mousedev: PS/2 mouse device common for all mice
Mar 17 18:40:42.571189 kernel: EDAC MC: Ver: 3.0.0
Mar 17 18:40:42.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:42.606435 systemd[1]: Finished systemd-udev-settle.service.
Mar 17 18:40:42.610556 systemd[1]: Starting lvm2-activation-early.service...
Mar 17 18:40:42.658636 lvm[1052]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 17 18:40:42.706756 systemd[1]: Finished lvm2-activation-early.service.
Mar 17 18:40:42.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:42.708018 systemd[1]: Reached target cryptsetup.target.
Mar 17 18:40:42.710870 systemd[1]: Starting lvm2-activation.service...
Mar 17 18:40:42.729888 lvm[1053]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 17 18:40:42.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:42.797282 systemd[1]: Finished lvm2-activation.service.
Mar 17 18:40:42.798183 systemd[1]: Reached target local-fs-pre.target.
Mar 17 18:40:42.801174 systemd[1]: Mounting media-configdrive.mount...
Mar 17 18:40:42.801796 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 17 18:40:42.801913 systemd[1]: Reached target machines.target.
Mar 17 18:40:42.804634 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Mar 17 18:40:42.838407 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Mar 17 18:40:42.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:42.845234 kernel: ISO 9660 Extensions: RRIP_1991A
Mar 17 18:40:42.848714 systemd[1]: Mounted media-configdrive.mount.
Mar 17 18:40:42.849677 systemd[1]: Reached target local-fs.target.
Mar 17 18:40:42.854060 systemd[1]: Starting ldconfig.service...
Mar 17 18:40:42.857797 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Mar 17 18:40:42.857904 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:40:42.866454 systemd[1]: Starting systemd-boot-update.service...
Mar 17 18:40:42.879290 systemd[1]: Starting systemd-machine-id-commit.service...
Mar 17 18:40:42.884992 systemd[1]: Starting systemd-sysext.service...
Mar 17 18:40:42.892068 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1059 (bootctl)
Mar 17 18:40:42.894650 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Mar 17 18:40:42.920741 systemd[1]: Unmounting usr-share-oem.mount...
Mar 17 18:40:42.953458 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Mar 17 18:40:42.957890 systemd[1]: Unmounted usr-share-oem.mount.
Mar 17 18:40:42.966071 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 17 18:40:42.967739 systemd[1]: Finished systemd-machine-id-commit.service.
Mar 17 18:40:42.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:42.988646 kernel: loop0: detected capacity change from 0 to 205544
Mar 17 18:40:43.031613 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 17 18:40:43.095221 kernel: loop1: detected capacity change from 0 to 205544
Mar 17 18:40:43.141172 (sd-sysext)[1070]: Using extensions 'kubernetes'.
Mar 17 18:40:43.143387 (sd-sysext)[1070]: Merged extensions into '/usr'.
Mar 17 18:40:43.179834 systemd-fsck[1066]: fsck.fat 4.2 (2021-01-31)
Mar 17 18:40:43.179834 systemd-fsck[1066]: /dev/vda1: 789 files, 119299/258078 clusters
Mar 17 18:40:43.186925 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:40:43.195382 systemd[1]: Mounting usr-share-oem.mount...
Mar 17 18:40:43.196642 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Mar 17 18:40:43.204956 systemd[1]: Starting modprobe@dm_mod.service...
Mar 17 18:40:43.209733 systemd[1]: Starting modprobe@efi_pstore.service...
Mar 17 18:40:43.215754 systemd[1]: Starting modprobe@loop.service...
Mar 17 18:40:43.219585 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Mar 17 18:40:43.219861 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:40:43.220057 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:40:43.222038 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Mar 17 18:40:43.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:43.228761 systemd[1]: Mounted usr-share-oem.mount.
Mar 17 18:40:43.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:43.231224 systemd[1]: Finished systemd-sysext.service.
Mar 17 18:40:43.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:43.232000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:43.232161 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 18:40:43.232359 systemd[1]: Finished modprobe@dm_mod.service.
Mar 17 18:40:43.233323 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 18:40:43.233502 systemd[1]: Finished modprobe@efi_pstore.service.
Mar 17 18:40:43.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:43.248000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:43.249484 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 18:40:43.249769 systemd[1]: Finished modprobe@loop.service.
Mar 17 18:40:43.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:43.250000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:43.256764 systemd[1]: Mounting boot.mount...
Mar 17 18:40:43.265541 systemd[1]: Starting ensure-sysext.service...
Mar 17 18:40:43.266242 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 18:40:43.266454 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Mar 17 18:40:43.273262 systemd[1]: Starting systemd-tmpfiles-setup.service...
Mar 17 18:40:43.281214 systemd[1]: Reloading.
Mar 17 18:40:43.307375 systemd-networkd[1015]: eth0: Gained IPv6LL
Mar 17 18:40:43.322906 systemd-tmpfiles[1078]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Mar 17 18:40:43.329858 systemd-tmpfiles[1078]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 17 18:40:43.335976 systemd-tmpfiles[1078]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 17 18:40:43.446018 /usr/lib/systemd/system-generators/torcx-generator[1098]: time="2025-03-17T18:40:43Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
Mar 17 18:40:43.446058 /usr/lib/systemd/system-generators/torcx-generator[1098]: time="2025-03-17T18:40:43Z" level=info msg="torcx already run"
Mar 17 18:40:43.608569 ldconfig[1058]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 17 18:40:43.680599 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Mar 17 18:40:43.680632 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Mar 17 18:40:43.717064 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 18:40:43.829000 audit: BPF prog-id=30 op=LOAD
Mar 17 18:40:43.829000 audit: BPF prog-id=21 op=UNLOAD
Mar 17 18:40:43.829000 audit: BPF prog-id=31 op=LOAD
Mar 17 18:40:43.829000 audit: BPF prog-id=32 op=LOAD
Mar 17 18:40:43.829000 audit: BPF prog-id=22 op=UNLOAD
Mar 17 18:40:43.829000 audit: BPF prog-id=23 op=UNLOAD
Mar 17 18:40:43.831000 audit: BPF prog-id=33 op=LOAD
Mar 17 18:40:43.831000 audit: BPF prog-id=26 op=UNLOAD
Mar 17 18:40:43.837000 audit: BPF prog-id=34 op=LOAD
Mar 17 18:40:43.837000 audit: BPF prog-id=35 op=LOAD
Mar 17 18:40:43.837000 audit: BPF prog-id=24 op=UNLOAD
Mar 17 18:40:43.837000 audit: BPF prog-id=25 op=UNLOAD
Mar 17 18:40:43.839000 audit: BPF prog-id=36 op=LOAD
Mar 17 18:40:43.839000 audit: BPF prog-id=27 op=UNLOAD
Mar 17 18:40:43.839000 audit: BPF prog-id=37 op=LOAD
Mar 17 18:40:43.839000 audit: BPF prog-id=38 op=LOAD
Mar 17 18:40:43.839000 audit: BPF prog-id=28 op=UNLOAD
Mar 17 18:40:43.839000 audit: BPF prog-id=29 op=UNLOAD
Mar 17 18:40:43.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:43.849409 systemd[1]: Finished ldconfig.service.
Mar 17 18:40:43.850300 systemd[1]: Mounted boot.mount.
Mar 17 18:40:43.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:43.886198 systemd[1]: Finished systemd-boot-update.service.
Mar 17 18:40:43.887381 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:40:43.887840 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Mar 17 18:40:43.893048 systemd[1]: Starting modprobe@dm_mod.service...
Mar 17 18:40:43.897700 systemd[1]: Starting modprobe@efi_pstore.service...
Mar 17 18:40:43.901899 systemd[1]: Starting modprobe@loop.service...
Mar 17 18:40:43.902600 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Mar 17 18:40:43.902852 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:40:43.903050 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:40:43.904449 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 18:40:43.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:43.904000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:43.905091 systemd[1]: Finished modprobe@efi_pstore.service.
Mar 17 18:40:43.906202 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 18:40:43.907393 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 18:40:43.907660 systemd[1]: Finished modprobe@dm_mod.service.
Mar 17 18:40:43.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:43.908000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:43.911285 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:40:43.911779 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Mar 17 18:40:43.914271 systemd[1]: Starting modprobe@dm_mod.service...
Mar 17 18:40:43.919582 systemd[1]: Starting modprobe@efi_pstore.service...
Mar 17 18:40:43.920272 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Mar 17 18:40:43.920631 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:40:43.920893 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:40:43.922418 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 18:40:43.922711 systemd[1]: Finished modprobe@loop.service.
Mar 17 18:40:43.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:43.923000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:43.927620 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 18:40:43.928387 systemd[1]: Finished modprobe@efi_pstore.service.
Mar 17 18:40:43.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:43.928000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:43.929700 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 18:40:43.929923 systemd[1]: Finished modprobe@dm_mod.service.
Mar 17 18:40:43.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:43.930000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:43.931287 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:40:43.931717 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Mar 17 18:40:43.934336 systemd[1]: Starting modprobe@drm.service...
Mar 17 18:40:43.939264 systemd[1]: Starting modprobe@loop.service...
Mar 17 18:40:43.939964 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Mar 17 18:40:43.942327 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:40:43.944962 systemd[1]: Starting systemd-networkd-wait-online.service...
Mar 17 18:40:43.945820 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 18:40:43.946102 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:40:43.947432 systemd-networkd[1015]: eth1: Gained IPv6LL
Mar 17 18:40:43.949024 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 17 18:40:43.949371 systemd[1]: Finished modprobe@drm.service.
Mar 17 18:40:43.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:43.949000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:43.950713 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 18:40:43.950870 systemd[1]: Finished modprobe@loop.service.
Mar 17 18:40:43.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:43.951000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:43.952030 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Mar 17 18:40:43.953808 systemd[1]: Finished ensure-sysext.service.
Mar 17 18:40:43.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:43.967829 systemd[1]: Finished systemd-networkd-wait-online.service.
Mar 17 18:40:43.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:44.035371 systemd[1]: Finished systemd-tmpfiles-setup.service.
Mar 17 18:40:44.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:44.038599 systemd[1]: Starting audit-rules.service...
Mar 17 18:40:44.041745 systemd[1]: Starting clean-ca-certificates.service...
Mar 17 18:40:44.055000 audit: BPF prog-id=39 op=LOAD
Mar 17 18:40:44.053587 systemd[1]: Starting systemd-journal-catalog-update.service...
Mar 17 18:40:44.057190 systemd[1]: Starting systemd-resolved.service...
Mar 17 18:40:44.059000 audit: BPF prog-id=40 op=LOAD
Mar 17 18:40:44.063471 systemd[1]: Starting systemd-timesyncd.service...
Mar 17 18:40:44.066574 systemd[1]: Starting systemd-update-utmp.service...
Mar 17 18:40:44.088245 systemd[1]: Finished clean-ca-certificates.service.
Mar 17 18:40:44.089050 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 17 18:40:44.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:44.090000 audit[1162]: SYSTEM_BOOT pid=1162 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:44.095215 systemd[1]: Finished systemd-update-utmp.service.
Mar 17 18:40:44.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:44.135265 systemd[1]: Finished systemd-journal-catalog-update.service.
Mar 17 18:40:44.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:44.138822 systemd[1]: Starting systemd-update-done.service...
Mar 17 18:40:44.160710 systemd[1]: Finished systemd-update-done.service.
Mar 17 18:40:44.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:40:44.189000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Mar 17 18:40:44.189000 audit[1175]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc49bdd920 a2=420 a3=0 items=0 ppid=1154 pid=1175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:40:44.189000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Mar 17 18:40:44.190353 augenrules[1175]: No rules
Mar 17 18:40:44.190636 systemd[1]: Finished audit-rules.service.
Mar 17 18:40:44.210193 systemd-resolved[1158]: Positive Trust Anchors:
Mar 17 18:40:44.210214 systemd-resolved[1158]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 18:40:44.210264 systemd-resolved[1158]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Mar 17 18:40:44.217670 systemd[1]: Started systemd-timesyncd.service.
Mar 17 18:40:44.218409 systemd[1]: Reached target time-set.target.
Mar 17 18:40:44.222701 systemd-resolved[1158]: Using system hostname 'ci-3510.3.7-d-b51ee9817d'.
Mar 17 18:40:44.226361 systemd[1]: Started systemd-resolved.service.
Mar 17 18:40:44.227078 systemd[1]: Reached target network.target.
Mar 17 18:40:44.227556 systemd[1]: Reached target network-online.target.
Mar 17 18:40:44.228012 systemd[1]: Reached target nss-lookup.target.
Mar 17 18:40:44.228574 systemd[1]: Reached target sysinit.target.
Mar 17 18:40:44.229222 systemd[1]: Started motdgen.path.
Mar 17 18:40:44.229729 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Mar 17 18:40:44.230529 systemd[1]: Started logrotate.timer.
Mar 17 18:40:44.231239 systemd[1]: Started mdadm.timer.
Mar 17 18:40:44.231705 systemd[1]: Started systemd-tmpfiles-clean.timer.
Mar 17 18:40:44.232378 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 17 18:40:44.232616 systemd[1]: Reached target paths.target.
Mar 17 18:40:44.233201 systemd[1]: Reached target timers.target.
Mar 17 18:40:44.234269 systemd[1]: Listening on dbus.socket.
Mar 17 18:40:44.238988 systemd[1]: Starting docker.socket...
Mar 17 18:40:44.246284 systemd[1]: Listening on sshd.socket.
Mar 17 18:40:44.247004 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:40:44.247991 systemd[1]: Listening on docker.socket.
Mar 17 18:40:44.248847 systemd[1]: Reached target sockets.target.
Mar 17 18:40:44.249305 systemd[1]: Reached target basic.target.
Mar 17 18:40:44.249808 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Mar 17 18:40:44.249862 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Mar 17 18:40:44.252161 systemd[1]: Starting containerd.service...
Mar 17 18:40:44.254897 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Mar 17 18:40:44.263087 systemd[1]: Starting dbus.service...
Mar 17 18:40:44.267464 systemd[1]: Starting enable-oem-cloudinit.service...
Mar 17 18:40:44.275698 systemd[1]: Starting extend-filesystems.service...
Mar 17 18:40:44.278098 jq[1188]: false
Mar 17 18:40:44.277522 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Mar 17 18:40:44.283420 systemd[1]: Starting kubelet.service...
Mar 17 18:40:44.286583 systemd[1]: Starting motdgen.service...
Mar 17 18:40:44.289587 systemd[1]: Starting prepare-helm.service...
Mar 17 18:40:44.294785 systemd[1]: Starting ssh-key-proc-cmdline.service...
Mar 17 18:40:44.300397 systemd[1]: Starting sshd-keygen.service...
Mar 17 18:40:44.307560 systemd[1]: Starting systemd-logind.service...
Mar 17 18:40:44.308626 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:40:44.308767 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 17 18:40:44.309844 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 17 18:40:44.314514 systemd[1]: Starting update-engine.service...
Mar 17 18:40:44.319367 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Mar 17 18:40:44.330358 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 17 18:40:44.330725 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Mar 17 18:40:44.895573 systemd-timesyncd[1159]: Contacted time server 45.55.58.103:123 (0.flatcar.pool.ntp.org).
Mar 17 18:40:44.895667 systemd-timesyncd[1159]: Initial clock synchronization to Mon 2025-03-17 18:40:44.895378 UTC.
Mar 17 18:40:44.914725 jq[1199]: true
Mar 17 18:40:44.919076 systemd-resolved[1158]: Clock change detected. Flushing caches.
Mar 17 18:40:44.921997 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 17 18:40:44.922246 systemd[1]: Finished ssh-key-proc-cmdline.service.
Mar 17 18:40:44.947061 extend-filesystems[1189]: Found loop1
Mar 17 18:40:44.951442 tar[1203]: linux-amd64/helm
Mar 17 18:40:44.952665 extend-filesystems[1189]: Found vda
Mar 17 18:40:44.956337 extend-filesystems[1189]: Found vda1
Mar 17 18:40:44.960567 extend-filesystems[1189]: Found vda2
Mar 17 18:40:44.967547 extend-filesystems[1189]: Found vda3
Mar 17 18:40:44.968593 jq[1210]: true
Mar 17 18:40:44.970736 extend-filesystems[1189]: Found usr
Mar 17 18:40:44.973544 extend-filesystems[1189]: Found vda4
Mar 17 18:40:44.973544 extend-filesystems[1189]: Found vda6
Mar 17 18:40:44.973544 extend-filesystems[1189]: Found vda7
Mar 17 18:40:44.973544 extend-filesystems[1189]: Found vda9
Mar 17 18:40:44.973544 extend-filesystems[1189]: Checking size of /dev/vda9
Mar 17 18:40:45.001798 dbus-daemon[1187]: [system] SELinux support is enabled
Mar 17 18:40:45.002222 systemd[1]: Started dbus.service.
Mar 17 18:40:45.007311 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 17 18:40:45.007383 systemd[1]: Reached target system-config.target.
Mar 17 18:40:45.008015 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 17 18:40:45.008037 systemd[1]: Reached target user-config.target.
Mar 17 18:40:45.075607 extend-filesystems[1189]: Resized partition /dev/vda9
Mar 17 18:40:45.080545 update_engine[1198]: I0317 18:40:45.079332  1198 main.cc:92] Flatcar Update Engine starting
Mar 17 18:40:45.087182 update_engine[1198]: I0317 18:40:45.087114  1198 update_check_scheduler.cc:74] Next update check in 4m15s
Mar 17 18:40:45.087139 systemd[1]: Started update-engine.service.
Mar 17 18:40:45.091581 systemd[1]: Started locksmithd.service.
Mar 17 18:40:45.102050 systemd[1]: motdgen.service: Deactivated successfully.
Mar 17 18:40:45.102368 systemd[1]: Finished motdgen.service.
Mar 17 18:40:45.132754 extend-filesystems[1229]: resize2fs 1.46.5 (30-Dec-2021)
Mar 17 18:40:45.164201 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Mar 17 18:40:45.237780 bash[1241]: Updated "/home/core/.ssh/authorized_keys"
Mar 17 18:40:45.238493 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Mar 17 18:40:45.318899 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Mar 17 18:40:45.358027 systemd-logind[1197]: Watching system buttons on /dev/input/event1 (Power Button)
Mar 17 18:40:45.358060 systemd-logind[1197]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 17 18:40:45.361794 extend-filesystems[1229]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 17 18:40:45.361794 extend-filesystems[1229]: old_desc_blocks = 1, new_desc_blocks = 8
Mar 17 18:40:45.361794 extend-filesystems[1229]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Mar 17 18:40:45.358408 systemd-logind[1197]: New seat seat0.
Mar 17 18:40:45.379920 env[1205]: time="2025-03-17T18:40:45.363405556Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Mar 17 18:40:45.381840 extend-filesystems[1189]: Resized filesystem in /dev/vda9
Mar 17 18:40:45.381840 extend-filesystems[1189]: Found vdb
Mar 17 18:40:45.370052 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 17 18:40:45.370379 systemd[1]: Finished extend-filesystems.service.
Mar 17 18:40:45.373870 systemd[1]: Started systemd-logind.service.
Mar 17 18:40:45.507304 coreos-metadata[1184]: Mar 17 18:40:45.502 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Mar 17 18:40:45.530264 coreos-metadata[1184]: Mar 17 18:40:45.530 INFO Fetch successful
Mar 17 18:40:45.549265 unknown[1184]: wrote ssh authorized keys file for user: core
Mar 17 18:40:45.587580 update-ssh-keys[1246]: Updated "/home/core/.ssh/authorized_keys"
Mar 17 18:40:45.588559 systemd[1]: Finished coreos-metadata-sshkeys@core.service.
Mar 17 18:40:45.598863 env[1205]: time="2025-03-17T18:40:45.598779773Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 17 18:40:45.605552 env[1205]: time="2025-03-17T18:40:45.605471767Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 17 18:40:45.608536 env[1205]: time="2025-03-17T18:40:45.608449335Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.179-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 17 18:40:45.610291 env[1205]: time="2025-03-17T18:40:45.610224945Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 17 18:40:45.610959 env[1205]: time="2025-03-17T18:40:45.610908893Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 18:40:45.617190 env[1205]: time="2025-03-17T18:40:45.617021033Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 17 18:40:45.618068 env[1205]: time="2025-03-17T18:40:45.618013040Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Mar 17 18:40:45.618291 env[1205]: time="2025-03-17T18:40:45.618254684Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 17 18:40:45.618593 env[1205]: time="2025-03-17T18:40:45.618564218Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 17 18:40:45.620489 env[1205]: time="2025-03-17T18:40:45.620443958Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 17 18:40:45.622627 env[1205]: time="2025-03-17T18:40:45.622567557Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 18:40:45.628437 env[1205]: time="2025-03-17T18:40:45.628298486Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 17 18:40:45.628893 env[1205]: time="2025-03-17T18:40:45.628844014Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Mar 17 18:40:45.629435 env[1205]: time="2025-03-17T18:40:45.629389486Z" level=info msg="metadata content store policy set" policy=shared
Mar 17 18:40:45.637166 env[1205]: time="2025-03-17T18:40:45.637065883Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 17 18:40:45.637480 env[1205]: time="2025-03-17T18:40:45.637440406Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 17 18:40:45.637618 env[1205]: time="2025-03-17T18:40:45.637593925Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 17 18:40:45.637835 env[1205]: time="2025-03-17T18:40:45.637791548Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 17 18:40:45.637993 env[1205]: time="2025-03-17T18:40:45.637969359Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 17 18:40:45.638122 env[1205]: time="2025-03-17T18:40:45.638099575Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 17 18:40:45.638261 env[1205]: time="2025-03-17T18:40:45.638239684Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 17 18:40:45.638402 env[1205]: time="2025-03-17T18:40:45.638379672Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 17 18:40:45.638533 env[1205]: time="2025-03-17T18:40:45.638512000Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Mar 17 18:40:45.638683 env[1205]: time="2025-03-17T18:40:45.638660540Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 17 18:40:45.638823 env[1205]: time="2025-03-17T18:40:45.638800844Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 17 18:40:45.638950 env[1205]: time="2025-03-17T18:40:45.638928001Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 17 18:40:45.639433 env[1205]: time="2025-03-17T18:40:45.639361795Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 17 18:40:45.639790 env[1205]: time="2025-03-17T18:40:45.639756059Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 17 18:40:45.640458 env[1205]: time="2025-03-17T18:40:45.640408110Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 17 18:40:45.642311 env[1205]: time="2025-03-17T18:40:45.642264352Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 17 18:40:45.642511 env[1205]: time="2025-03-17T18:40:45.642483731Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 17 18:40:45.642738 env[1205]: time="2025-03-17T18:40:45.642712500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 17 18:40:45.642856 env[1205]: time="2025-03-17T18:40:45.642833458Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 17 18:40:45.642962 env[1205]: time="2025-03-17T18:40:45.642940318Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 17 18:40:45.643061 env[1205]: time="2025-03-17T18:40:45.643040091Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 17 18:40:45.643213 env[1205]: time="2025-03-17T18:40:45.643188688Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 17 18:40:45.643360 env[1205]: time="2025-03-17T18:40:45.643336645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 17 18:40:45.643779 env[1205]: time="2025-03-17T18:40:45.643746345Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 17 18:40:45.643926 env[1205]: time="2025-03-17T18:40:45.643900628Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 17 18:40:45.644055 env[1205]: time="2025-03-17T18:40:45.644033096Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 17 18:40:45.644473 env[1205]: time="2025-03-17T18:40:45.644435693Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 17 18:40:45.646016 env[1205]: time="2025-03-17T18:40:45.645975236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 17 18:40:45.646362 env[1205]: time="2025-03-17T18:40:45.646331097Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 17 18:40:45.646490 env[1205]: time="2025-03-17T18:40:45.646463362Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 17 18:40:45.646621 env[1205]: time="2025-03-17T18:40:45.646591194Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Mar 17 18:40:45.646747 env[1205]: time="2025-03-17T18:40:45.646712589Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 17 18:40:45.648222 env[1205]: time="2025-03-17T18:40:45.648181442Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Mar 17 18:40:45.649117 env[1205]: time="2025-03-17T18:40:45.649078881Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..."
type=io.containerd.grpc.v1 Mar 17 18:40:45.649756 env[1205]: time="2025-03-17T18:40:45.649660152Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 17 18:40:45.652489 env[1205]: time="2025-03-17T18:40:45.651841433Z" level=info msg="Connect containerd service" Mar 17 18:40:45.652698 env[1205]: time="2025-03-17T18:40:45.652661429Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 17 18:40:45.654030 env[1205]: time="2025-03-17T18:40:45.653980276Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 18:40:45.656741 env[1205]: time="2025-03-17T18:40:45.656680485Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 17 18:40:45.657069 env[1205]: time="2025-03-17T18:40:45.657028197Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 17 18:40:45.658588 env[1205]: time="2025-03-17T18:40:45.658551852Z" level=info msg="containerd successfully booted in 0.339160s" Mar 17 18:40:45.658718 systemd[1]: Started containerd.service. 
Mar 17 18:40:45.663650 env[1205]: time="2025-03-17T18:40:45.663357840Z" level=info msg="Start subscribing containerd event" Mar 17 18:40:45.666276 env[1205]: time="2025-03-17T18:40:45.666214610Z" level=info msg="Start recovering state" Mar 17 18:40:45.666708 env[1205]: time="2025-03-17T18:40:45.666680773Z" level=info msg="Start event monitor" Mar 17 18:40:45.668310 env[1205]: time="2025-03-17T18:40:45.668239053Z" level=info msg="Start snapshots syncer" Mar 17 18:40:45.668903 env[1205]: time="2025-03-17T18:40:45.668857175Z" level=info msg="Start cni network conf syncer for default" Mar 17 18:40:45.669092 env[1205]: time="2025-03-17T18:40:45.669043102Z" level=info msg="Start streaming server" Mar 17 18:40:46.073465 locksmithd[1230]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 17 18:40:46.648538 tar[1203]: linux-amd64/LICENSE Mar 17 18:40:46.648538 tar[1203]: linux-amd64/README.md Mar 17 18:40:46.665399 systemd[1]: Finished prepare-helm.service. Mar 17 18:40:47.134815 systemd[1]: Started kubelet.service. Mar 17 18:40:47.435002 sshd_keygen[1206]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 17 18:40:47.479249 systemd[1]: Finished sshd-keygen.service. Mar 17 18:40:47.482603 systemd[1]: Starting issuegen.service... Mar 17 18:40:47.497019 systemd[1]: issuegen.service: Deactivated successfully. Mar 17 18:40:47.497391 systemd[1]: Finished issuegen.service. Mar 17 18:40:47.500908 systemd[1]: Starting systemd-user-sessions.service... Mar 17 18:40:47.516507 systemd[1]: Finished systemd-user-sessions.service. Mar 17 18:40:47.520096 systemd[1]: Started getty@tty1.service. Mar 17 18:40:47.524294 systemd[1]: Started serial-getty@ttyS0.service. Mar 17 18:40:47.525516 systemd[1]: Reached target getty.target. Mar 17 18:40:47.526354 systemd[1]: Reached target multi-user.target. Mar 17 18:40:47.531639 systemd[1]: Starting systemd-update-utmp-runlevel.service... 
Mar 17 18:40:47.550968 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Mar 17 18:40:47.551189 systemd[1]: Finished systemd-update-utmp-runlevel.service. Mar 17 18:40:47.551887 systemd[1]: Startup finished in 1.359s (kernel) + 9.200s (initrd) + 13.838s (userspace) = 24.398s. Mar 17 18:40:48.362070 kubelet[1257]: E0317 18:40:48.361986 1257 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:40:48.364733 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:40:48.364946 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:40:48.365362 systemd[1]: kubelet.service: Consumed 1.536s CPU time. Mar 17 18:40:53.778824 systemd[1]: Created slice system-sshd.slice. Mar 17 18:40:53.785658 systemd[1]: Started sshd@0-134.199.210.114:22-139.178.68.195:55530.service. Mar 17 18:40:53.967515 sshd[1278]: Accepted publickey for core from 139.178.68.195 port 55530 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg Mar 17 18:40:53.973451 sshd[1278]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:40:54.013656 systemd-logind[1197]: New session 1 of user core. Mar 17 18:40:54.020529 systemd[1]: Created slice user-500.slice. Mar 17 18:40:54.025796 systemd[1]: Starting user-runtime-dir@500.service... Mar 17 18:40:54.091313 systemd[1]: Finished user-runtime-dir@500.service. Mar 17 18:40:54.103691 systemd[1]: Starting user@500.service... Mar 17 18:40:54.110278 (systemd)[1281]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:40:54.308626 systemd[1281]: Queued start job for default target default.target. 
Mar 17 18:40:54.309625 systemd[1281]: Reached target paths.target. Mar 17 18:40:54.309656 systemd[1281]: Reached target sockets.target. Mar 17 18:40:54.309676 systemd[1281]: Reached target timers.target. Mar 17 18:40:54.309696 systemd[1281]: Reached target basic.target. Mar 17 18:40:54.309879 systemd[1]: Started user@500.service. Mar 17 18:40:54.311606 systemd[1]: Started session-1.scope. Mar 17 18:40:54.312400 systemd[1281]: Reached target default.target. Mar 17 18:40:54.312488 systemd[1281]: Startup finished in 182ms. Mar 17 18:40:54.420086 systemd[1]: Started sshd@1-134.199.210.114:22-139.178.68.195:55532.service. Mar 17 18:40:54.558381 sshd[1290]: Accepted publickey for core from 139.178.68.195 port 55532 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg Mar 17 18:40:54.568056 sshd[1290]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:40:54.592974 systemd[1]: Started session-2.scope. Mar 17 18:40:54.595735 systemd-logind[1197]: New session 2 of user core. Mar 17 18:40:54.707393 sshd[1290]: pam_unix(sshd:session): session closed for user core Mar 17 18:40:54.720730 systemd[1]: Started sshd@2-134.199.210.114:22-139.178.68.195:55540.service. Mar 17 18:40:54.752723 systemd[1]: sshd@1-134.199.210.114:22-139.178.68.195:55532.service: Deactivated successfully. Mar 17 18:40:54.754447 systemd[1]: session-2.scope: Deactivated successfully. Mar 17 18:40:54.757610 systemd-logind[1197]: Session 2 logged out. Waiting for processes to exit. Mar 17 18:40:54.759231 systemd-logind[1197]: Removed session 2. Mar 17 18:40:54.815589 sshd[1295]: Accepted publickey for core from 139.178.68.195 port 55540 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg Mar 17 18:40:54.818311 sshd[1295]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:40:54.830250 systemd-logind[1197]: New session 3 of user core. Mar 17 18:40:54.830992 systemd[1]: Started session-3.scope. 
Mar 17 18:40:54.952753 sshd[1295]: pam_unix(sshd:session): session closed for user core Mar 17 18:40:54.964023 systemd[1]: Started sshd@3-134.199.210.114:22-139.178.68.195:55546.service. Mar 17 18:40:54.969725 systemd[1]: sshd@2-134.199.210.114:22-139.178.68.195:55540.service: Deactivated successfully. Mar 17 18:40:54.971575 systemd[1]: session-3.scope: Deactivated successfully. Mar 17 18:40:54.975751 systemd-logind[1197]: Session 3 logged out. Waiting for processes to exit. Mar 17 18:40:54.977658 systemd-logind[1197]: Removed session 3. Mar 17 18:40:55.063707 sshd[1301]: Accepted publickey for core from 139.178.68.195 port 55546 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg Mar 17 18:40:55.071802 sshd[1301]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:40:55.094528 systemd-logind[1197]: New session 4 of user core. Mar 17 18:40:55.096665 systemd[1]: Started session-4.scope. Mar 17 18:40:55.184582 sshd[1301]: pam_unix(sshd:session): session closed for user core Mar 17 18:40:55.201730 systemd[1]: sshd@3-134.199.210.114:22-139.178.68.195:55546.service: Deactivated successfully. Mar 17 18:40:55.203020 systemd[1]: session-4.scope: Deactivated successfully. Mar 17 18:40:55.208797 systemd-logind[1197]: Session 4 logged out. Waiting for processes to exit. Mar 17 18:40:55.209000 systemd[1]: Started sshd@4-134.199.210.114:22-139.178.68.195:55554.service. Mar 17 18:40:55.213073 systemd-logind[1197]: Removed session 4. Mar 17 18:40:55.308975 sshd[1308]: Accepted publickey for core from 139.178.68.195 port 55554 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg Mar 17 18:40:55.313820 sshd[1308]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:40:55.330311 systemd[1]: Started session-5.scope. Mar 17 18:40:55.332439 systemd-logind[1197]: New session 5 of user core. 
Mar 17 18:40:55.452957 sudo[1311]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 17 18:40:55.454127 sudo[1311]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Mar 17 18:40:55.565450 systemd[1]: Starting docker.service... Mar 17 18:40:55.725198 env[1321]: time="2025-03-17T18:40:55.722439864Z" level=info msg="Starting up" Mar 17 18:40:55.725198 env[1321]: time="2025-03-17T18:40:55.724530753Z" level=info msg="parsed scheme: \"unix\"" module=grpc Mar 17 18:40:55.725198 env[1321]: time="2025-03-17T18:40:55.724569772Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Mar 17 18:40:55.725198 env[1321]: time="2025-03-17T18:40:55.724626740Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Mar 17 18:40:55.725198 env[1321]: time="2025-03-17T18:40:55.724649526Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Mar 17 18:40:55.730627 env[1321]: time="2025-03-17T18:40:55.729464229Z" level=info msg="parsed scheme: \"unix\"" module=grpc Mar 17 18:40:55.730627 env[1321]: time="2025-03-17T18:40:55.729507994Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Mar 17 18:40:55.730627 env[1321]: time="2025-03-17T18:40:55.729539378Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Mar 17 18:40:55.730627 env[1321]: time="2025-03-17T18:40:55.729557087Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Mar 17 18:40:55.820982 env[1321]: time="2025-03-17T18:40:55.820043761Z" level=info msg="Loading containers: start." 
Mar 17 18:40:56.144342 kernel: Initializing XFRM netlink socket Mar 17 18:40:56.239433 env[1321]: time="2025-03-17T18:40:56.239362626Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Mar 17 18:40:56.462106 systemd-networkd[1015]: docker0: Link UP Mar 17 18:40:56.500355 env[1321]: time="2025-03-17T18:40:56.495920475Z" level=info msg="Loading containers: done." Mar 17 18:40:56.539296 env[1321]: time="2025-03-17T18:40:56.538008982Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 17 18:40:56.539296 env[1321]: time="2025-03-17T18:40:56.538584723Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Mar 17 18:40:56.539296 env[1321]: time="2025-03-17T18:40:56.538790186Z" level=info msg="Daemon has completed initialization" Mar 17 18:40:56.580596 systemd[1]: Started docker.service. Mar 17 18:40:56.595102 env[1321]: time="2025-03-17T18:40:56.594972245Z" level=info msg="API listen on /run/docker.sock" Mar 17 18:40:56.629441 systemd[1]: Starting coreos-metadata.service... Mar 17 18:40:56.720649 coreos-metadata[1438]: Mar 17 18:40:56.720 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Mar 17 18:40:56.758093 coreos-metadata[1438]: Mar 17 18:40:56.757 INFO Fetch successful Mar 17 18:40:56.781325 systemd[1]: Finished coreos-metadata.service. Mar 17 18:40:58.373825 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 17 18:40:58.374242 systemd[1]: Stopped kubelet.service. Mar 17 18:40:58.374322 systemd[1]: kubelet.service: Consumed 1.536s CPU time. Mar 17 18:40:58.382699 systemd[1]: Starting kubelet.service... 
Mar 17 18:40:58.456946 env[1205]: time="2025-03-17T18:40:58.456771315Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.7\"" Mar 17 18:40:58.654408 systemd[1]: Started kubelet.service. Mar 17 18:40:58.811020 kubelet[1460]: E0317 18:40:58.810950 1460 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:40:58.821352 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:40:58.821612 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:40:59.333966 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount632342219.mount: Deactivated successfully. Mar 17 18:41:01.792127 env[1205]: time="2025-03-17T18:41:01.787389733Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:01.792127 env[1205]: time="2025-03-17T18:41:01.791241674Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f084bc047a8cf7c8484d47c51e70e646dde3977d916f282feb99207b7b9241af,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:01.795341 env[1205]: time="2025-03-17T18:41:01.795264803Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:01.804136 env[1205]: time="2025-03-17T18:41:01.804050349Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:22c19cc70fe5806d0a2cb28a6b6b33fd34e6f9e50616bdf6d53649bcfafbc277,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 
18:41:01.806253 env[1205]: time="2025-03-17T18:41:01.806164336Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.7\" returns image reference \"sha256:f084bc047a8cf7c8484d47c51e70e646dde3977d916f282feb99207b7b9241af\"" Mar 17 18:41:01.809167 env[1205]: time="2025-03-17T18:41:01.809023800Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.7\"" Mar 17 18:41:04.376914 env[1205]: time="2025-03-17T18:41:04.376820288Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:04.379602 env[1205]: time="2025-03-17T18:41:04.379458447Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:652dcad615a9a0c252c253860d5b5b7bfebd3efe159dc033a8555bc15a6d1985,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:04.384916 env[1205]: time="2025-03-17T18:41:04.384844063Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:04.388685 env[1205]: time="2025-03-17T18:41:04.388604597Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:6abe7a0accecf29db6ebab18a10f844678ffed693d79e2e51a18a6f2b4530cbb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:04.391561 env[1205]: time="2025-03-17T18:41:04.391481812Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.7\" returns image reference \"sha256:652dcad615a9a0c252c253860d5b5b7bfebd3efe159dc033a8555bc15a6d1985\"" Mar 17 18:41:04.393484 env[1205]: time="2025-03-17T18:41:04.393424287Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.7\"" Mar 17 18:41:06.477206 env[1205]: time="2025-03-17T18:41:06.477118480Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:06.484058 env[1205]: time="2025-03-17T18:41:06.480497361Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7f1f6a63d8aa14cf61d0045e912ad312b4ade24637cecccc933b163582eae68c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:06.489925 env[1205]: time="2025-03-17T18:41:06.489843229Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:06.495614 env[1205]: time="2025-03-17T18:41:06.495535502Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:fb80249bcb77ee72b1c9fa5b70bc28a83ed107c9ca71957841ad91db379963bf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:06.496341 env[1205]: time="2025-03-17T18:41:06.496296879Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.7\" returns image reference \"sha256:7f1f6a63d8aa14cf61d0045e912ad312b4ade24637cecccc933b163582eae68c\"" Mar 17 18:41:06.498367 env[1205]: time="2025-03-17T18:41:06.498317023Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\"" Mar 17 18:41:08.118259 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2051031258.mount: Deactivated successfully. Mar 17 18:41:08.873605 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 17 18:41:08.873959 systemd[1]: Stopped kubelet.service. Mar 17 18:41:08.877120 systemd[1]: Starting kubelet.service... Mar 17 18:41:09.061058 systemd[1]: Started kubelet.service. 
Mar 17 18:41:09.166322 kubelet[1472]: E0317 18:41:09.166167 1472 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:41:09.169023 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:41:09.169269 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:41:09.282995 env[1205]: time="2025-03-17T18:41:09.282901736Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:09.286593 env[1205]: time="2025-03-17T18:41:09.286494774Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:dcfc039c372ea285997a302d60e58a75b80905b4c4dba969993b9b22e8ac66d1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:09.293137 env[1205]: time="2025-03-17T18:41:09.293067236Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:09.296983 env[1205]: time="2025-03-17T18:41:09.296901879Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:09.297868 env[1205]: time="2025-03-17T18:41:09.297809546Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\" returns image reference \"sha256:dcfc039c372ea285997a302d60e58a75b80905b4c4dba969993b9b22e8ac66d1\"" Mar 17 18:41:09.298805 env[1205]: time="2025-03-17T18:41:09.298755220Z" level=info msg="PullImage 
\"registry.k8s.io/coredns/coredns:v1.11.1\"" Mar 17 18:41:09.913095 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2840813070.mount: Deactivated successfully. Mar 17 18:41:11.829123 env[1205]: time="2025-03-17T18:41:11.828944464Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:11.834552 env[1205]: time="2025-03-17T18:41:11.834474734Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:11.852540 env[1205]: time="2025-03-17T18:41:11.852452428Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:11.862169 env[1205]: time="2025-03-17T18:41:11.862022807Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Mar 17 18:41:11.863376 env[1205]: time="2025-03-17T18:41:11.863232635Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:11.864393 env[1205]: time="2025-03-17T18:41:11.863457961Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 17 18:41:12.442632 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4278854657.mount: Deactivated successfully. 
Mar 17 18:41:12.451357 env[1205]: time="2025-03-17T18:41:12.450976081Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:12.453977 env[1205]: time="2025-03-17T18:41:12.453904039Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:12.466131 env[1205]: time="2025-03-17T18:41:12.466037411Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:12.471132 env[1205]: time="2025-03-17T18:41:12.471042984Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:12.473649 env[1205]: time="2025-03-17T18:41:12.473582515Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Mar 17 18:41:12.476233 env[1205]: time="2025-03-17T18:41:12.476136111Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Mar 17 18:41:13.051472 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount174638993.mount: Deactivated successfully. 
Mar 17 18:41:16.650298 env[1205]: time="2025-03-17T18:41:16.643371381Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:16.670422 env[1205]: time="2025-03-17T18:41:16.670238427Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:16.691116 env[1205]: time="2025-03-17T18:41:16.689589151Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Mar 17 18:41:16.691116 env[1205]: time="2025-03-17T18:41:16.683975437Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:16.693899 env[1205]: time="2025-03-17T18:41:16.691991519Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:19.379618 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 17 18:41:19.380362 systemd[1]: Stopped kubelet.service. Mar 17 18:41:19.386721 systemd[1]: Starting kubelet.service... Mar 17 18:41:19.620269 systemd[1]: Started kubelet.service. 
Mar 17 18:41:19.707852 kubelet[1500]: E0317 18:41:19.707681 1500 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:41:19.710841 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:41:19.711049 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:41:21.519514 systemd[1]: Stopped kubelet.service. Mar 17 18:41:21.532306 systemd[1]: Starting kubelet.service... Mar 17 18:41:21.615271 systemd[1]: Reloading. Mar 17 18:41:21.865180 /usr/lib/systemd/system-generators/torcx-generator[1532]: time="2025-03-17T18:41:21Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 18:41:21.865241 /usr/lib/systemd/system-generators/torcx-generator[1532]: time="2025-03-17T18:41:21Z" level=info msg="torcx already run" Mar 17 18:41:22.182653 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 18:41:22.182686 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 18:41:22.229987 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 18:41:22.451813 systemd[1]: Started kubelet.service. Mar 17 18:41:22.470741 systemd[1]: Stopping kubelet.service... 
Mar 17 18:41:22.477798 systemd[1]: kubelet.service: Deactivated successfully.
Mar 17 18:41:22.478200 systemd[1]: Stopped kubelet.service.
Mar 17 18:41:22.483569 systemd[1]: Starting kubelet.service...
Mar 17 18:41:22.717642 systemd[1]: Started kubelet.service.
Mar 17 18:41:22.835287 kubelet[1591]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 18:41:22.835287 kubelet[1591]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 17 18:41:22.835287 kubelet[1591]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 18:41:22.838818 kubelet[1591]: I0317 18:41:22.838669 1591 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 17 18:41:23.192802 kubelet[1591]: I0317 18:41:23.192472 1591 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Mar 17 18:41:23.192802 kubelet[1591]: I0317 18:41:23.192551 1591 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 17 18:41:23.193195 kubelet[1591]: I0317 18:41:23.193052 1591 server.go:929] "Client rotation is on, will bootstrap in background"
Mar 17 18:41:23.259441 kubelet[1591]: E0317 18:41:23.259382 1591 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://134.199.210.114:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 134.199.210.114:6443: connect: connection refused" logger="UnhandledError"
Mar 17 18:41:23.260160 kubelet[1591]: I0317 18:41:23.260078 1591 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 17 18:41:23.296101 kubelet[1591]: E0317 18:41:23.296051 1591 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 17 18:41:23.296403 kubelet[1591]: I0317 18:41:23.296382 1591 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 17 18:41:23.308286 kubelet[1591]: I0317 18:41:23.308246 1591 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 17 18:41:23.311020 kubelet[1591]: I0317 18:41:23.310951 1591 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Mar 17 18:41:23.311837 kubelet[1591]: I0317 18:41:23.311761 1591 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 17 18:41:23.312360 kubelet[1591]: I0317 18:41:23.312059 1591 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.7-d-b51ee9817d","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 17 18:41:23.312675 kubelet[1591]: I0317 18:41:23.312649 1591 topology_manager.go:138] "Creating topology manager with none policy"
Mar 17 18:41:23.312800 kubelet[1591]: I0317 18:41:23.312783 1591 container_manager_linux.go:300] "Creating device plugin manager"
Mar 17 18:41:23.313055 kubelet[1591]: I0317 18:41:23.313035 1591 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 18:41:23.320010 kubelet[1591]: I0317 18:41:23.319948 1591 kubelet.go:408] "Attempting to sync node with API server"
Mar 17 18:41:23.320366 kubelet[1591]: I0317 18:41:23.320325 1591 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 17 18:41:23.320562 kubelet[1591]: I0317 18:41:23.320540 1591 kubelet.go:314] "Adding apiserver pod source"
Mar 17 18:41:23.320732 kubelet[1591]: I0317 18:41:23.320712 1591 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 17 18:41:23.328855 kubelet[1591]: W0317 18:41:23.325411 1591 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://134.199.210.114:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-d-b51ee9817d&limit=500&resourceVersion=0": dial tcp 134.199.210.114:6443: connect: connection refused
Mar 17 18:41:23.328855 kubelet[1591]: E0317 18:41:23.325553 1591 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://134.199.210.114:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-d-b51ee9817d&limit=500&resourceVersion=0\": dial tcp 134.199.210.114:6443: connect: connection refused" logger="UnhandledError"
Mar 17 18:41:23.345259 kubelet[1591]: W0317 18:41:23.345010 1591 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://134.199.210.114:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 134.199.210.114:6443: connect: connection refused
Mar 17 18:41:23.345259 kubelet[1591]: E0317 18:41:23.345123 1591 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://134.199.210.114:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 134.199.210.114:6443: connect: connection refused" logger="UnhandledError"
Mar 17 18:41:23.351370 kubelet[1591]: I0317 18:41:23.350948 1591 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Mar 17 18:41:23.357450 kubelet[1591]: I0317 18:41:23.357018 1591 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 17 18:41:23.368603 kubelet[1591]: W0317 18:41:23.358479 1591 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 17 18:41:23.368603 kubelet[1591]: I0317 18:41:23.368269 1591 server.go:1269] "Started kubelet"
Mar 17 18:41:23.381157 kubelet[1591]: I0317 18:41:23.380873 1591 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 17 18:41:23.382927 kubelet[1591]: I0317 18:41:23.382492 1591 server.go:460] "Adding debug handlers to kubelet server"
Mar 17 18:41:23.385047 kubelet[1591]: I0317 18:41:23.383994 1591 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 17 18:41:23.385047 kubelet[1591]: I0317 18:41:23.384689 1591 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 17 18:41:23.390542 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Mar 17 18:41:23.390968 kubelet[1591]: I0317 18:41:23.390928 1591 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 17 18:41:23.394723 kubelet[1591]: E0317 18:41:23.387948 1591 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://134.199.210.114:6443/api/v1/namespaces/default/events\": dial tcp 134.199.210.114:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.7-d-b51ee9817d.182dab36f3bed6ca default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.7-d-b51ee9817d,UID:ci-3510.3.7-d-b51ee9817d,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.7-d-b51ee9817d,},FirstTimestamp:2025-03-17 18:41:23.368203978 +0000 UTC m=+0.630451311,LastTimestamp:2025-03-17 18:41:23.368203978 +0000 UTC m=+0.630451311,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.7-d-b51ee9817d,}"
Mar 17 18:41:23.395916 kubelet[1591]: I0317 18:41:23.395436 1591 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 17 18:41:23.403197 kubelet[1591]: I0317 18:41:23.403130 1591 volume_manager.go:289] "Starting Kubelet Volume Manager"
Mar 17 18:41:23.404720 kubelet[1591]: E0317 18:41:23.404420 1591 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.7-d-b51ee9817d\" not found"
Mar 17 18:41:23.405660 kubelet[1591]: I0317 18:41:23.405598 1591 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 17 18:41:23.405839 kubelet[1591]: I0317 18:41:23.405715 1591 reconciler.go:26] "Reconciler: start to sync state"
Mar 17 18:41:23.407898 kubelet[1591]: W0317 18:41:23.407070 1591 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://134.199.210.114:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 134.199.210.114:6443: connect: connection refused
Mar 17 18:41:23.407898 kubelet[1591]: E0317 18:41:23.407795 1591 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://134.199.210.114:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 134.199.210.114:6443: connect: connection refused" logger="UnhandledError"
Mar 17 18:41:23.408453 kubelet[1591]: E0317 18:41:23.407927 1591 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://134.199.210.114:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-d-b51ee9817d?timeout=10s\": dial tcp 134.199.210.114:6443: connect: connection refused" interval="200ms"
Mar 17 18:41:23.409441 kubelet[1591]: E0317 18:41:23.409111 1591 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 17 18:41:23.409885 kubelet[1591]: I0317 18:41:23.409714 1591 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 17 18:41:23.413650 kubelet[1591]: I0317 18:41:23.413602 1591 factory.go:221] Registration of the containerd container factory successfully
Mar 17 18:41:23.413926 kubelet[1591]: I0317 18:41:23.413903 1591 factory.go:221] Registration of the systemd container factory successfully
Mar 17 18:41:23.470480 kubelet[1591]: I0317 18:41:23.470231 1591 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 17 18:41:23.471008 kubelet[1591]: I0317 18:41:23.470982 1591 cpu_manager.go:214] "Starting CPU manager" policy="none"
Mar 17 18:41:23.471207 kubelet[1591]: I0317 18:41:23.471188 1591 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Mar 17 18:41:23.471325 kubelet[1591]: I0317 18:41:23.471309 1591 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 18:41:23.474405 kubelet[1591]: I0317 18:41:23.474357 1591 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 17 18:41:23.474647 kubelet[1591]: I0317 18:41:23.474628 1591 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 17 18:41:23.474779 kubelet[1591]: I0317 18:41:23.474763 1591 kubelet.go:2321] "Starting kubelet main sync loop"
Mar 17 18:41:23.474981 kubelet[1591]: E0317 18:41:23.474932 1591 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 17 18:41:23.476927 kubelet[1591]: I0317 18:41:23.476885 1591 policy_none.go:49] "None policy: Start"
Mar 17 18:41:23.482784 kubelet[1591]: W0317 18:41:23.482719 1591 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://134.199.210.114:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 134.199.210.114:6443: connect: connection refused
Mar 17 18:41:23.483099 kubelet[1591]: E0317 18:41:23.483071 1591 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://134.199.210.114:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 134.199.210.114:6443: connect: connection refused" logger="UnhandledError"
Mar 17 18:41:23.483423 kubelet[1591]: I0317 18:41:23.483396 1591 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 17 18:41:23.483564 kubelet[1591]: I0317 18:41:23.483548 1591 state_mem.go:35] "Initializing new in-memory state store"
Mar 17 18:41:23.492437 systemd[1]: Created slice kubepods.slice.
Mar 17 18:41:23.503205 systemd[1]: Created slice kubepods-burstable.slice.
Mar 17 18:41:23.505068 kubelet[1591]: E0317 18:41:23.504990 1591 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.7-d-b51ee9817d\" not found"
Mar 17 18:41:23.511994 systemd[1]: Created slice kubepods-besteffort.slice.
Mar 17 18:41:23.521363 kubelet[1591]: I0317 18:41:23.521315 1591 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 17 18:41:23.522444 kubelet[1591]: I0317 18:41:23.522412 1591 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 17 18:41:23.522701 kubelet[1591]: I0317 18:41:23.522639 1591 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 17 18:41:23.524663 kubelet[1591]: I0317 18:41:23.524629 1591 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 17 18:41:23.533977 kubelet[1591]: E0317 18:41:23.528102 1591 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.7-d-b51ee9817d\" not found"
Mar 17 18:41:23.588536 systemd[1]: Created slice kubepods-burstable-podbdea666cb311bb0352f4d2535325b073.slice.
Mar 17 18:41:23.606495 systemd[1]: Created slice kubepods-burstable-pod73b4833e811dcdb4f6eef6709ce4627f.slice.
Mar 17 18:41:23.608864 kubelet[1591]: E0317 18:41:23.608800 1591 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://134.199.210.114:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-d-b51ee9817d?timeout=10s\": dial tcp 134.199.210.114:6443: connect: connection refused" interval="400ms"
Mar 17 18:41:23.617576 systemd[1]: Created slice kubepods-burstable-pod3b2d7c875010ca150422d5407e5476ec.slice.
Mar 17 18:41:23.626585 kubelet[1591]: I0317 18:41:23.626513 1591 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.7-d-b51ee9817d"
Mar 17 18:41:23.627859 kubelet[1591]: E0317 18:41:23.627805 1591 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://134.199.210.114:6443/api/v1/nodes\": dial tcp 134.199.210.114:6443: connect: connection refused" node="ci-3510.3.7-d-b51ee9817d"
Mar 17 18:41:23.707341 kubelet[1591]: I0317 18:41:23.707283 1591 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3b2d7c875010ca150422d5407e5476ec-k8s-certs\") pod \"kube-apiserver-ci-3510.3.7-d-b51ee9817d\" (UID: \"3b2d7c875010ca150422d5407e5476ec\") " pod="kube-system/kube-apiserver-ci-3510.3.7-d-b51ee9817d"
Mar 17 18:41:23.707877 kubelet[1591]: I0317 18:41:23.707847 1591 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73b4833e811dcdb4f6eef6709ce4627f-ca-certs\") pod \"kube-controller-manager-ci-3510.3.7-d-b51ee9817d\" (UID: \"73b4833e811dcdb4f6eef6709ce4627f\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-d-b51ee9817d"
Mar 17 18:41:23.708111 kubelet[1591]: I0317 18:41:23.708067 1591 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73b4833e811dcdb4f6eef6709ce4627f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.7-d-b51ee9817d\" (UID: \"73b4833e811dcdb4f6eef6709ce4627f\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-d-b51ee9817d"
Mar 17 18:41:23.708275 kubelet[1591]: I0317 18:41:23.708258 1591 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73b4833e811dcdb4f6eef6709ce4627f-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.7-d-b51ee9817d\" (UID: \"73b4833e811dcdb4f6eef6709ce4627f\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-d-b51ee9817d"
Mar 17 18:41:23.708458 kubelet[1591]: I0317 18:41:23.708441 1591 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73b4833e811dcdb4f6eef6709ce4627f-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.7-d-b51ee9817d\" (UID: \"73b4833e811dcdb4f6eef6709ce4627f\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-d-b51ee9817d"
Mar 17 18:41:23.708741 kubelet[1591]: I0317 18:41:23.708719 1591 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bdea666cb311bb0352f4d2535325b073-kubeconfig\") pod \"kube-scheduler-ci-3510.3.7-d-b51ee9817d\" (UID: \"bdea666cb311bb0352f4d2535325b073\") " pod="kube-system/kube-scheduler-ci-3510.3.7-d-b51ee9817d"
Mar 17 18:41:23.708945 kubelet[1591]: I0317 18:41:23.708892 1591 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3b2d7c875010ca150422d5407e5476ec-ca-certs\") pod \"kube-apiserver-ci-3510.3.7-d-b51ee9817d\" (UID: \"3b2d7c875010ca150422d5407e5476ec\") " pod="kube-system/kube-apiserver-ci-3510.3.7-d-b51ee9817d"
Mar 17 18:41:23.709123 kubelet[1591]: I0317 18:41:23.709094 1591 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3b2d7c875010ca150422d5407e5476ec-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.7-d-b51ee9817d\" (UID: \"3b2d7c875010ca150422d5407e5476ec\") " pod="kube-system/kube-apiserver-ci-3510.3.7-d-b51ee9817d"
Mar 17 18:41:23.709279 kubelet[1591]: I0317 18:41:23.709264 1591 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73b4833e811dcdb4f6eef6709ce4627f-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.7-d-b51ee9817d\" (UID: \"73b4833e811dcdb4f6eef6709ce4627f\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-d-b51ee9817d"
Mar 17 18:41:23.830589 kubelet[1591]: I0317 18:41:23.830445 1591 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.7-d-b51ee9817d"
Mar 17 18:41:23.833979 kubelet[1591]: E0317 18:41:23.833895 1591 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://134.199.210.114:6443/api/v1/nodes\": dial tcp 134.199.210.114:6443: connect: connection refused" node="ci-3510.3.7-d-b51ee9817d"
Mar 17 18:41:23.903171 kubelet[1591]: E0317 18:41:23.903076 1591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 18:41:23.905503 env[1205]: time="2025-03-17T18:41:23.905000890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.7-d-b51ee9817d,Uid:bdea666cb311bb0352f4d2535325b073,Namespace:kube-system,Attempt:0,}"
Mar 17 18:41:23.917644 kubelet[1591]: E0317 18:41:23.916471 1591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 18:41:23.917907 env[1205]: time="2025-03-17T18:41:23.917403506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.7-d-b51ee9817d,Uid:73b4833e811dcdb4f6eef6709ce4627f,Namespace:kube-system,Attempt:0,}"
Mar 17 18:41:23.927328 kubelet[1591]: E0317 18:41:23.927278 1591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 18:41:23.942479 env[1205]: time="2025-03-17T18:41:23.931872456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.7-d-b51ee9817d,Uid:3b2d7c875010ca150422d5407e5476ec,Namespace:kube-system,Attempt:0,}"
Mar 17 18:41:24.009950 kubelet[1591]: E0317 18:41:24.009811 1591 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://134.199.210.114:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-d-b51ee9817d?timeout=10s\": dial tcp 134.199.210.114:6443: connect: connection refused" interval="800ms"
Mar 17 18:41:24.209313 kubelet[1591]: W0317 18:41:24.209094 1591 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://134.199.210.114:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 134.199.210.114:6443: connect: connection refused
Mar 17 18:41:24.209313 kubelet[1591]: E0317 18:41:24.209252 1591 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://134.199.210.114:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 134.199.210.114:6443: connect: connection refused" logger="UnhandledError"
Mar 17 18:41:24.244832 kubelet[1591]: I0317 18:41:24.243730 1591 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.7-d-b51ee9817d"
Mar 17 18:41:24.244832 kubelet[1591]: E0317 18:41:24.244740 1591 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://134.199.210.114:6443/api/v1/nodes\": dial tcp 134.199.210.114:6443: connect: connection refused" node="ci-3510.3.7-d-b51ee9817d"
Mar 17 18:41:24.437461 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1291101402.mount: Deactivated successfully.
Mar 17 18:41:24.450034 env[1205]: time="2025-03-17T18:41:24.449950352Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:41:24.451950 env[1205]: time="2025-03-17T18:41:24.451879474Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:41:24.461103 env[1205]: time="2025-03-17T18:41:24.460936647Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:41:24.462295 env[1205]: time="2025-03-17T18:41:24.462243221Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:41:24.463482 env[1205]: time="2025-03-17T18:41:24.463427427Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:41:24.472038 env[1205]: time="2025-03-17T18:41:24.471968485Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:41:24.473710 env[1205]: time="2025-03-17T18:41:24.473655632Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:41:24.475656 env[1205]: time="2025-03-17T18:41:24.475574932Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:41:24.479895 env[1205]: time="2025-03-17T18:41:24.479384743Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:41:24.488196 env[1205]: time="2025-03-17T18:41:24.488051660Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:41:24.490102 env[1205]: time="2025-03-17T18:41:24.490051586Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:41:24.498985 env[1205]: time="2025-03-17T18:41:24.498918817Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:41:24.536506 env[1205]: time="2025-03-17T18:41:24.535051195Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:41:24.536506 env[1205]: time="2025-03-17T18:41:24.535139325Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:41:24.536506 env[1205]: time="2025-03-17T18:41:24.535172321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:41:24.536506 env[1205]: time="2025-03-17T18:41:24.535440664Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2e2fd53edacd87058443206d4b486e7764caefa6401dc9410b277e0c8e4cf36d pid=1628 runtime=io.containerd.runc.v2
Mar 17 18:41:24.604900 systemd[1]: Started cri-containerd-2e2fd53edacd87058443206d4b486e7764caefa6401dc9410b277e0c8e4cf36d.scope.
Mar 17 18:41:24.617665 env[1205]: time="2025-03-17T18:41:24.617542118Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:41:24.617989 env[1205]: time="2025-03-17T18:41:24.617949861Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:41:24.618121 env[1205]: time="2025-03-17T18:41:24.618092023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:41:24.618743 env[1205]: time="2025-03-17T18:41:24.618637414Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:41:24.618948 env[1205]: time="2025-03-17T18:41:24.618913616Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:41:24.619124 env[1205]: time="2025-03-17T18:41:24.619086344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:41:24.625717 env[1205]: time="2025-03-17T18:41:24.623459682Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c61b10257c4fdf9c84426b51e320eae50ce5cc7f3300f74966a449525fd41590 pid=1651 runtime=io.containerd.runc.v2
Mar 17 18:41:24.626073 env[1205]: time="2025-03-17T18:41:24.620071462Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3412e388d4ad6e40d71ff1cf1c5321107e5e171ccd7b46464a6b14f819d59050 pid=1660 runtime=io.containerd.runc.v2
Mar 17 18:41:24.642500 kubelet[1591]: W0317 18:41:24.642432 1591 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://134.199.210.114:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 134.199.210.114:6443: connect: connection refused
Mar 17 18:41:24.642500 kubelet[1591]: E0317 18:41:24.642501 1591 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://134.199.210.114:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 134.199.210.114:6443: connect: connection refused" logger="UnhandledError"
Mar 17 18:41:24.665369 systemd[1]: Started cri-containerd-3412e388d4ad6e40d71ff1cf1c5321107e5e171ccd7b46464a6b14f819d59050.scope.
Mar 17 18:41:24.701842 kubelet[1591]: W0317 18:41:24.701378 1591 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://134.199.210.114:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 134.199.210.114:6443: connect: connection refused
Mar 17 18:41:24.701842 kubelet[1591]: E0317 18:41:24.701506 1591 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://134.199.210.114:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 134.199.210.114:6443: connect: connection refused" logger="UnhandledError"
Mar 17 18:41:24.702552 systemd[1]: Started cri-containerd-c61b10257c4fdf9c84426b51e320eae50ce5cc7f3300f74966a449525fd41590.scope.
Mar 17 18:41:24.742116 kubelet[1591]: W0317 18:41:24.741909 1591 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://134.199.210.114:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-d-b51ee9817d&limit=500&resourceVersion=0": dial tcp 134.199.210.114:6443: connect: connection refused
Mar 17 18:41:24.742116 kubelet[1591]: E0317 18:41:24.742022 1591 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://134.199.210.114:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-d-b51ee9817d&limit=500&resourceVersion=0\": dial tcp 134.199.210.114:6443: connect: connection refused" logger="UnhandledError"
Mar 17 18:41:24.799293 env[1205]: time="2025-03-17T18:41:24.799221256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.7-d-b51ee9817d,Uid:3b2d7c875010ca150422d5407e5476ec,Namespace:kube-system,Attempt:0,} returns sandbox id \"3412e388d4ad6e40d71ff1cf1c5321107e5e171ccd7b46464a6b14f819d59050\""
Mar 17 18:41:24.804920 kubelet[1591]: E0317 18:41:24.803793 1591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 18:41:24.810845 kubelet[1591]: E0317 18:41:24.810769 1591 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://134.199.210.114:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-d-b51ee9817d?timeout=10s\": dial tcp 134.199.210.114:6443: connect: connection refused" interval="1.6s"
Mar 17 18:41:24.817445 env[1205]: time="2025-03-17T18:41:24.817369096Z" level=info msg="CreateContainer within sandbox \"3412e388d4ad6e40d71ff1cf1c5321107e5e171ccd7b46464a6b14f819d59050\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 17 18:41:24.832985 env[1205]: time="2025-03-17T18:41:24.832921022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.7-d-b51ee9817d,Uid:bdea666cb311bb0352f4d2535325b073,Namespace:kube-system,Attempt:0,} returns sandbox id \"c61b10257c4fdf9c84426b51e320eae50ce5cc7f3300f74966a449525fd41590\""
Mar 17 18:41:24.835303 kubelet[1591]: E0317 18:41:24.835228 1591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 18:41:24.838407 env[1205]: time="2025-03-17T18:41:24.838352973Z" level=info msg="CreateContainer within sandbox \"c61b10257c4fdf9c84426b51e320eae50ce5cc7f3300f74966a449525fd41590\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 17 18:41:24.849944 env[1205]: time="2025-03-17T18:41:24.849853974Z" level=info msg="CreateContainer within sandbox \"3412e388d4ad6e40d71ff1cf1c5321107e5e171ccd7b46464a6b14f819d59050\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"489dcbfbebb6b1bc8f701d924d1700c8181c91bb5ce9932bc9950254aad46cf4\""
Mar 17 18:41:24.851052 env[1205]: time="2025-03-17T18:41:24.851014606Z" level=info msg="StartContainer for \"489dcbfbebb6b1bc8f701d924d1700c8181c91bb5ce9932bc9950254aad46cf4\""
Mar 17 18:41:24.854254 env[1205]: time="2025-03-17T18:41:24.854139322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.7-d-b51ee9817d,Uid:73b4833e811dcdb4f6eef6709ce4627f,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e2fd53edacd87058443206d4b486e7764caefa6401dc9410b277e0c8e4cf36d\""
Mar 17 18:41:24.855620 kubelet[1591]: E0317 18:41:24.855552 1591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 18:41:24.858442 env[1205]: time="2025-03-17T18:41:24.858378799Z" level=info msg="CreateContainer within sandbox \"2e2fd53edacd87058443206d4b486e7764caefa6401dc9410b277e0c8e4cf36d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 17 18:41:24.875026 env[1205]: time="2025-03-17T18:41:24.874940009Z" level=info msg="CreateContainer within sandbox \"c61b10257c4fdf9c84426b51e320eae50ce5cc7f3300f74966a449525fd41590\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"07af6d818000f967119f928d812846358ec8297e36b701104c87dad6f1030fec\""
Mar 17 18:41:24.875667 env[1205]: time="2025-03-17T18:41:24.875494211Z" level=info msg="CreateContainer within sandbox \"2e2fd53edacd87058443206d4b486e7764caefa6401dc9410b277e0c8e4cf36d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"21cc708501ebc12e5360e06d07cd58f29fc3d28f6a09c683b2b876b17fbb56ad\""
Mar 17 18:41:24.876583 env[1205]: time="2025-03-17T18:41:24.876542742Z" level=info msg="StartContainer for \"21cc708501ebc12e5360e06d07cd58f29fc3d28f6a09c683b2b876b17fbb56ad\""
Mar 17 18:41:24.878420 env[1205]: time="2025-03-17T18:41:24.878367984Z" level=info msg="StartContainer for \"07af6d818000f967119f928d812846358ec8297e36b701104c87dad6f1030fec\""
Mar 17 18:41:24.888944 systemd[1]: Started cri-containerd-489dcbfbebb6b1bc8f701d924d1700c8181c91bb5ce9932bc9950254aad46cf4.scope.
Mar 17 18:41:24.969216 systemd[1]: Started cri-containerd-21cc708501ebc12e5360e06d07cd58f29fc3d28f6a09c683b2b876b17fbb56ad.scope.
Mar 17 18:41:24.988115 systemd[1]: Started cri-containerd-07af6d818000f967119f928d812846358ec8297e36b701104c87dad6f1030fec.scope.
Mar 17 18:41:25.046958 kubelet[1591]: I0317 18:41:25.046813 1591 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.7-d-b51ee9817d"
Mar 17 18:41:25.047602 kubelet[1591]: E0317 18:41:25.047288 1591 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://134.199.210.114:6443/api/v1/nodes\": dial tcp 134.199.210.114:6443: connect: connection refused" node="ci-3510.3.7-d-b51ee9817d"
Mar 17 18:41:25.067991 env[1205]: time="2025-03-17T18:41:25.067795065Z" level=info msg="StartContainer for \"489dcbfbebb6b1bc8f701d924d1700c8181c91bb5ce9932bc9950254aad46cf4\" returns successfully"
Mar 17 18:41:25.105595 env[1205]: time="2025-03-17T18:41:25.105524272Z" level=info msg="StartContainer for \"21cc708501ebc12e5360e06d07cd58f29fc3d28f6a09c683b2b876b17fbb56ad\" returns successfully"
Mar 17 18:41:25.123236 kubelet[1591]: E0317 18:41:25.122918 1591 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://134.199.210.114:6443/api/v1/namespaces/default/events\": dial tcp 134.199.210.114:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.7-d-b51ee9817d.182dab36f3bed6ca default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.7-d-b51ee9817d,UID:ci-3510.3.7-d-b51ee9817d,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.7-d-b51ee9817d,},FirstTimestamp:2025-03-17 18:41:23.368203978 +0000 UTC m=+0.630451311,LastTimestamp:2025-03-17 18:41:23.368203978 +0000 UTC m=+0.630451311,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.7-d-b51ee9817d,}"
Mar 17 18:41:25.148270 env[1205]: time="2025-03-17T18:41:25.148187498Z" level=info msg="StartContainer for \"07af6d818000f967119f928d812846358ec8297e36b701104c87dad6f1030fec\" returns successfully"
Mar 17 18:41:25.494687 kubelet[1591]: E0317 18:41:25.494621 1591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 18:41:25.498887 kubelet[1591]: E0317 18:41:25.498817 1591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 18:41:25.503226 kubelet[1591]: E0317 18:41:25.503180 1591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 18:41:26.505228 kubelet[1591]: E0317 18:41:26.505163 1591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 18:41:26.651865 kubelet[1591]: I0317 18:41:26.649488 1591 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.7-d-b51ee9817d"
Mar 17 18:41:28.205917 kubelet[1591]: E0317 18:41:28.205863 1591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 18:41:28.755992 kubelet[1591]: E0317 18:41:28.755862 1591 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.7-d-b51ee9817d\" not found" node="ci-3510.3.7-d-b51ee9817d"
Mar 17 18:41:28.857502 kubelet[1591]: I0317 18:41:28.857359 1591 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510.3.7-d-b51ee9817d"
Mar 17 18:41:29.329594 kubelet[1591]: I0317 18:41:29.329540 1591 apiserver.go:52] "Watching apiserver"
Mar 17 18:41:29.406807 kubelet[1591]: I0317 18:41:29.406741 1591 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 17 18:41:30.164985 update_engine[1198]: I0317 18:41:30.157542 1198 update_attempter.cc:509] Updating boot flags...
Mar 17 18:41:31.781477 systemd[1]: Reloading.
Mar 17 18:41:31.906506 /usr/lib/systemd/system-generators/torcx-generator[1899]: time="2025-03-17T18:41:31Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
Mar 17 18:41:31.907273 /usr/lib/systemd/system-generators/torcx-generator[1899]: time="2025-03-17T18:41:31Z" level=info msg="torcx already run"
Mar 17 18:41:32.128335 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Mar 17 18:41:32.128731 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Mar 17 18:41:32.170622 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 18:41:32.368267 systemd[1]: Stopping kubelet.service...
Mar 17 18:41:32.395105 systemd[1]: kubelet.service: Deactivated successfully.
Mar 17 18:41:32.395948 systemd[1]: Stopped kubelet.service.
Mar 17 18:41:32.396305 systemd[1]: kubelet.service: Consumed 1.062s CPU time.
Mar 17 18:41:32.406049 systemd[1]: Starting kubelet.service...
Mar 17 18:41:34.069275 systemd[1]: Started kubelet.service.
Mar 17 18:41:34.220967 kubelet[1951]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 18:41:34.221576 kubelet[1951]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 17 18:41:34.221741 kubelet[1951]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 18:41:34.222079 kubelet[1951]: I0317 18:41:34.222020 1951 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 17 18:41:34.248459 kubelet[1951]: I0317 18:41:34.248397 1951 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Mar 17 18:41:34.248725 kubelet[1951]: I0317 18:41:34.248694 1951 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 17 18:41:34.249972 kubelet[1951]: I0317 18:41:34.249921 1951 server.go:929] "Client rotation is on, will bootstrap in background"
Mar 17 18:41:34.253275 kubelet[1951]: I0317 18:41:34.253225 1951 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Mar 17 18:41:34.259130 kubelet[1951]: I0317 18:41:34.259080 1951 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 17 18:41:34.280692 kubelet[1951]: E0317 18:41:34.280610 1951 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 17 18:41:34.281089 kubelet[1951]: I0317 18:41:34.280959 1951 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 17 18:41:34.281675 sudo[1964]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Mar 17 18:41:34.282413 sudo[1964]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Mar 17 18:41:34.303188 kubelet[1951]: I0317 18:41:34.302801 1951 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 17 18:41:34.303188 kubelet[1951]: I0317 18:41:34.302997 1951 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Mar 17 18:41:34.305759 kubelet[1951]: I0317 18:41:34.304947 1951 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 17 18:41:34.305759 kubelet[1951]: I0317 18:41:34.305048 1951 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.7-d-b51ee9817d","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 17 18:41:34.305759 kubelet[1951]: I0317 18:41:34.305418 1951 topology_manager.go:138] "Creating topology manager with none policy"
Mar 17 18:41:34.305759 kubelet[1951]: I0317 18:41:34.305437 1951 container_manager_linux.go:300] "Creating device plugin manager"
Mar 17 18:41:34.307893 kubelet[1951]: I0317 18:41:34.305646 1951 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 18:41:34.307893 kubelet[1951]: I0317 18:41:34.305845 1951 kubelet.go:408] "Attempting to sync node with API server"
Mar 17 18:41:34.307893 kubelet[1951]: I0317 18:41:34.305872 1951 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 17 18:41:34.307893 kubelet[1951]: I0317 18:41:34.305917 1951 kubelet.go:314] "Adding apiserver pod source"
Mar 17 18:41:34.307893 kubelet[1951]: I0317 18:41:34.305942 1951 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 17 18:41:34.311888 kubelet[1951]: I0317 18:41:34.311815 1951 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Mar 17 18:41:34.313698 kubelet[1951]: I0317 18:41:34.313654 1951 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 17 18:41:34.317017 kubelet[1951]: I0317 18:41:34.316981 1951 server.go:1269] "Started kubelet"
Mar 17 18:41:34.322296 kubelet[1951]: I0317 18:41:34.322059 1951 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 17 18:41:34.345469 kubelet[1951]: I0317 18:41:34.337542 1951 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 17 18:41:34.347629 kubelet[1951]: I0317 18:41:34.347578 1951 server.go:460] "Adding debug handlers to kubelet server"
Mar 17 18:41:34.355898 kubelet[1951]: I0317 18:41:34.355775 1951 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 17 18:41:34.356538 kubelet[1951]: I0317 18:41:34.356512 1951 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 17 18:41:34.361004 kubelet[1951]: I0317 18:41:34.360938 1951 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 17 18:41:34.383488 kubelet[1951]: I0317 18:41:34.383380 1951 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 17 18:41:34.386667 kubelet[1951]: I0317 18:41:34.366026 1951 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 17 18:41:34.386667 kubelet[1951]: E0317 18:41:34.366968 1951 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.7-d-b51ee9817d\" not found"
Mar 17 18:41:34.396671 kubelet[1951]: I0317 18:41:34.365994 1951 volume_manager.go:289] "Starting Kubelet Volume Manager"
Mar 17 18:41:34.403225 kubelet[1951]: I0317 18:41:34.403189 1951 reconciler.go:26] "Reconciler: start to sync state"
Mar 17 18:41:34.405330 kubelet[1951]: I0317 18:41:34.405289 1951 factory.go:221] Registration of the containerd container factory successfully
Mar 17 18:41:34.405672 kubelet[1951]: I0317 18:41:34.405653 1951 factory.go:221] Registration of the systemd container factory successfully
Mar 17 18:41:34.420648 kubelet[1951]: E0317 18:41:34.420573 1951 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 17 18:41:34.461389 kubelet[1951]: I0317 18:41:34.461331 1951 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 17 18:41:34.469230 kubelet[1951]: I0317 18:41:34.469125 1951 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 17 18:41:34.469486 kubelet[1951]: I0317 18:41:34.469468 1951 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 17 18:41:34.469633 kubelet[1951]: I0317 18:41:34.469611 1951 kubelet.go:2321] "Starting kubelet main sync loop"
Mar 17 18:41:34.469798 kubelet[1951]: E0317 18:41:34.469773 1951 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 17 18:41:34.564985 kubelet[1951]: I0317 18:41:34.564926 1951 cpu_manager.go:214] "Starting CPU manager" policy="none"
Mar 17 18:41:34.564985 kubelet[1951]: I0317 18:41:34.564956 1951 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Mar 17 18:41:34.564985 kubelet[1951]: I0317 18:41:34.565008 1951 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 18:41:34.565427 kubelet[1951]: I0317 18:41:34.565397 1951 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 17 18:41:34.565526 kubelet[1951]: I0317 18:41:34.565420 1951 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 17 18:41:34.565526 kubelet[1951]: I0317 18:41:34.565449 1951 policy_none.go:49] "None policy: Start"
Mar 17 18:41:34.568611 kubelet[1951]: I0317 18:41:34.568568 1951 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 17 18:41:34.568611 kubelet[1951]: I0317 18:41:34.568616 1951 state_mem.go:35] "Initializing new in-memory state store"
Mar 17 18:41:34.569081 kubelet[1951]: I0317 18:41:34.569053 1951 state_mem.go:75] "Updated machine memory state"
Mar 17 18:41:34.570051 kubelet[1951]: E0317 18:41:34.569951 1951 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 17 18:41:34.579336 kubelet[1951]: I0317 18:41:34.579292 1951 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 17 18:41:34.582110 kubelet[1951]: I0317 18:41:34.581731 1951 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 17 18:41:34.582110 kubelet[1951]: I0317 18:41:34.581755 1951 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 17 18:41:34.584528 kubelet[1951]: I0317 18:41:34.583005 1951 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 17 18:41:34.701010 kubelet[1951]: I0317 18:41:34.700956 1951 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.7-d-b51ee9817d"
Mar 17 18:41:34.727570 kubelet[1951]: I0317 18:41:34.727495 1951 kubelet_node_status.go:111] "Node was previously registered" node="ci-3510.3.7-d-b51ee9817d"
Mar 17 18:41:34.727994 kubelet[1951]: I0317 18:41:34.727951 1951 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510.3.7-d-b51ee9817d"
Mar 17 18:41:34.805549 kubelet[1951]: I0317 18:41:34.805472 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3b2d7c875010ca150422d5407e5476ec-k8s-certs\") pod \"kube-apiserver-ci-3510.3.7-d-b51ee9817d\" (UID: \"3b2d7c875010ca150422d5407e5476ec\") " pod="kube-system/kube-apiserver-ci-3510.3.7-d-b51ee9817d"
Mar 17 18:41:34.805780 kubelet[1951]: I0317 18:41:34.805615 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73b4833e811dcdb4f6eef6709ce4627f-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.7-d-b51ee9817d\" (UID: \"73b4833e811dcdb4f6eef6709ce4627f\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-d-b51ee9817d"
Mar 17 18:41:34.805990 kubelet[1951]: I0317 18:41:34.805721 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73b4833e811dcdb4f6eef6709ce4627f-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.7-d-b51ee9817d\" (UID: \"73b4833e811dcdb4f6eef6709ce4627f\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-d-b51ee9817d"
Mar 17 18:41:34.806084 kubelet[1951]: I0317 18:41:34.806056 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73b4833e811dcdb4f6eef6709ce4627f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.7-d-b51ee9817d\" (UID: \"73b4833e811dcdb4f6eef6709ce4627f\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-d-b51ee9817d"
Mar 17 18:41:34.806166 kubelet[1951]: I0317 18:41:34.806097 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bdea666cb311bb0352f4d2535325b073-kubeconfig\") pod \"kube-scheduler-ci-3510.3.7-d-b51ee9817d\" (UID: \"bdea666cb311bb0352f4d2535325b073\") " pod="kube-system/kube-scheduler-ci-3510.3.7-d-b51ee9817d"
Mar 17 18:41:34.806245 kubelet[1951]: I0317 18:41:34.806218 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3b2d7c875010ca150422d5407e5476ec-ca-certs\") pod \"kube-apiserver-ci-3510.3.7-d-b51ee9817d\" (UID: \"3b2d7c875010ca150422d5407e5476ec\") " pod="kube-system/kube-apiserver-ci-3510.3.7-d-b51ee9817d"
Mar 17 18:41:34.806309 kubelet[1951]: I0317 18:41:34.806260 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3b2d7c875010ca150422d5407e5476ec-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.7-d-b51ee9817d\" (UID: \"3b2d7c875010ca150422d5407e5476ec\") " pod="kube-system/kube-apiserver-ci-3510.3.7-d-b51ee9817d"
Mar 17 18:41:34.808545 kubelet[1951]: I0317 18:41:34.808478 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73b4833e811dcdb4f6eef6709ce4627f-ca-certs\") pod \"kube-controller-manager-ci-3510.3.7-d-b51ee9817d\" (UID: \"73b4833e811dcdb4f6eef6709ce4627f\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-d-b51ee9817d"
Mar 17 18:41:34.808802 kubelet[1951]: I0317 18:41:34.808572 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73b4833e811dcdb4f6eef6709ce4627f-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.7-d-b51ee9817d\" (UID: \"73b4833e811dcdb4f6eef6709ce4627f\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-d-b51ee9817d"
Mar 17 18:41:34.817391 kubelet[1951]: W0317 18:41:34.817290 1951 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Mar 17 18:41:34.826543 kubelet[1951]: W0317 18:41:34.826475 1951 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Mar 17 18:41:34.827898 kubelet[1951]: W0317 18:41:34.827848 1951 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Mar 17 18:41:35.119107 kubelet[1951]: E0317 18:41:35.118900 1951 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 18:41:35.127941 kubelet[1951]: E0317 18:41:35.127892 1951 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 18:41:35.129447 kubelet[1951]: E0317 18:41:35.129216 1951 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 18:41:35.315633 kubelet[1951]: I0317 18:41:35.310676 1951 apiserver.go:52] "Watching apiserver"
Mar 17 18:41:35.386551 kubelet[1951]: I0317 18:41:35.386485 1951 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 17 18:41:35.522795 kubelet[1951]: E0317 18:41:35.522741 1951 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 18:41:35.523419 kubelet[1951]: E0317 18:41:35.523378 1951 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 18:41:35.525238 sudo[1964]: pam_unix(sudo:session): session closed for user root
Mar 17 18:41:35.552712 kubelet[1951]: W0317 18:41:35.552652 1951 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Mar 17 18:41:35.553113 kubelet[1951]: E0317 18:41:35.553083 1951 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.7-d-b51ee9817d\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.7-d-b51ee9817d"
Mar 17 18:41:35.553432 kubelet[1951]: E0317 18:41:35.553414 1951 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 18:41:35.676606 kubelet[1951]: I0317 18:41:35.676372 1951 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.7-d-b51ee9817d" podStartSLOduration=1.676339217 podStartE2EDuration="1.676339217s" podCreationTimestamp="2025-03-17 18:41:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:41:35.624854918 +0000 UTC m=+1.528430980" watchObservedRunningTime="2025-03-17 18:41:35.676339217 +0000 UTC m=+1.579915279"
Mar 17 18:41:35.724651 kubelet[1951]: I0317 18:41:35.724585 1951 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.7-d-b51ee9817d" podStartSLOduration=1.724541618 podStartE2EDuration="1.724541618s" podCreationTimestamp="2025-03-17 18:41:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:41:35.678447981 +0000 UTC m=+1.582024043" watchObservedRunningTime="2025-03-17 18:41:35.724541618 +0000 UTC m=+1.628117683"
Mar 17 18:41:36.266015 kubelet[1951]: I0317 18:41:36.265942 1951 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.7-d-b51ee9817d" podStartSLOduration=2.265918173 podStartE2EDuration="2.265918173s" podCreationTimestamp="2025-03-17 18:41:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:41:35.726180707 +0000 UTC m=+1.629756769" watchObservedRunningTime="2025-03-17 18:41:36.265918173 +0000 UTC m=+2.169494235"
Mar 17 18:41:36.525855 kubelet[1951]: E0317 18:41:36.525696 1951 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 18:41:36.526658 kubelet[1951]: E0317 18:41:36.526621 1951 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 18:41:37.107267 kubelet[1951]: E0317 18:41:37.105932 1951 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 18:41:37.536329 kubelet[1951]: E0317 18:41:37.536279 1951 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 18:41:37.537889 kubelet[1951]: E0317 18:41:37.537827 1951 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 18:41:37.610175 kubelet[1951]: I0317 18:41:37.610108 1951 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 17 18:41:37.611120 env[1205]: time="2025-03-17T18:41:37.611043065Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 17 18:41:37.612164 kubelet[1951]: I0317 18:41:37.612119 1951 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 17 18:41:38.344761 systemd[1]: Created slice kubepods-besteffort-pod2933feb7_6787_48ee_a2f0_b8465a54af1a.slice.
Mar 17 18:41:38.357404 systemd[1]: Created slice kubepods-burstable-pod6d95c88c_1ae9_4400_ad3e_55e2633737f0.slice.
Mar 17 18:41:38.367900 kubelet[1951]: W0317 18:41:38.367809 1951 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3510.3.7-d-b51ee9817d" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.7-d-b51ee9817d' and this object Mar 17 18:41:38.368166 kubelet[1951]: E0317 18:41:38.367933 1951 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-3510.3.7-d-b51ee9817d\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510.3.7-d-b51ee9817d' and this object" logger="UnhandledError" Mar 17 18:41:38.368166 kubelet[1951]: W0317 18:41:38.367807 1951 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-3510.3.7-d-b51ee9817d" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.7-d-b51ee9817d' and this object Mar 17 18:41:38.368166 kubelet[1951]: E0317 18:41:38.367980 1951 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:ci-3510.3.7-d-b51ee9817d\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510.3.7-d-b51ee9817d' and this object" logger="UnhandledError" Mar 17 18:41:38.368716 kubelet[1951]: W0317 18:41:38.368675 1951 reflector.go:561] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User 
"system:node:ci-3510.3.7-d-b51ee9817d" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.7-d-b51ee9817d' and this object Mar 17 18:41:38.369018 kubelet[1951]: E0317 18:41:38.368960 1951 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-3510.3.7-d-b51ee9817d\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510.3.7-d-b51ee9817d' and this object" logger="UnhandledError" Mar 17 18:41:38.370570 kubelet[1951]: W0317 18:41:38.370540 1951 reflector.go:561] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510.3.7-d-b51ee9817d" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.7-d-b51ee9817d' and this object Mar 17 18:41:38.370890 kubelet[1951]: E0317 18:41:38.370836 1951 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ci-3510.3.7-d-b51ee9817d\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510.3.7-d-b51ee9817d' and this object" logger="UnhandledError" Mar 17 18:41:38.412719 systemd[1]: Created slice kubepods-besteffort-pod9d248573_527e_479a_8562_24efc3702407.slice. 
Mar 17 18:41:38.445661 kubelet[1951]: I0317 18:41:38.445566 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6d95c88c-1ae9-4400-ad3e-55e2633737f0-etc-cni-netd\") pod \"cilium-db9z2\" (UID: \"6d95c88c-1ae9-4400-ad3e-55e2633737f0\") " pod="kube-system/cilium-db9z2" Mar 17 18:41:38.445661 kubelet[1951]: I0317 18:41:38.445635 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6d95c88c-1ae9-4400-ad3e-55e2633737f0-host-proc-sys-net\") pod \"cilium-db9z2\" (UID: \"6d95c88c-1ae9-4400-ad3e-55e2633737f0\") " pod="kube-system/cilium-db9z2" Mar 17 18:41:38.445661 kubelet[1951]: I0317 18:41:38.445669 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2933feb7-6787-48ee-a2f0-b8465a54af1a-lib-modules\") pod \"kube-proxy-pjqxb\" (UID: \"2933feb7-6787-48ee-a2f0-b8465a54af1a\") " pod="kube-system/kube-proxy-pjqxb" Mar 17 18:41:38.446037 kubelet[1951]: I0317 18:41:38.445691 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6d95c88c-1ae9-4400-ad3e-55e2633737f0-cilium-cgroup\") pod \"cilium-db9z2\" (UID: \"6d95c88c-1ae9-4400-ad3e-55e2633737f0\") " pod="kube-system/cilium-db9z2" Mar 17 18:41:38.446037 kubelet[1951]: I0317 18:41:38.445714 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6d95c88c-1ae9-4400-ad3e-55e2633737f0-cni-path\") pod \"cilium-db9z2\" (UID: \"6d95c88c-1ae9-4400-ad3e-55e2633737f0\") " pod="kube-system/cilium-db9z2" Mar 17 18:41:38.446037 kubelet[1951]: I0317 18:41:38.445737 1951 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6d95c88c-1ae9-4400-ad3e-55e2633737f0-cilium-config-path\") pod \"cilium-db9z2\" (UID: \"6d95c88c-1ae9-4400-ad3e-55e2633737f0\") " pod="kube-system/cilium-db9z2" Mar 17 18:41:38.446037 kubelet[1951]: I0317 18:41:38.445758 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2933feb7-6787-48ee-a2f0-b8465a54af1a-kube-proxy\") pod \"kube-proxy-pjqxb\" (UID: \"2933feb7-6787-48ee-a2f0-b8465a54af1a\") " pod="kube-system/kube-proxy-pjqxb" Mar 17 18:41:38.446037 kubelet[1951]: I0317 18:41:38.445779 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6d95c88c-1ae9-4400-ad3e-55e2633737f0-lib-modules\") pod \"cilium-db9z2\" (UID: \"6d95c88c-1ae9-4400-ad3e-55e2633737f0\") " pod="kube-system/cilium-db9z2" Mar 17 18:41:38.446037 kubelet[1951]: I0317 18:41:38.445799 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6d95c88c-1ae9-4400-ad3e-55e2633737f0-xtables-lock\") pod \"cilium-db9z2\" (UID: \"6d95c88c-1ae9-4400-ad3e-55e2633737f0\") " pod="kube-system/cilium-db9z2" Mar 17 18:41:38.446573 kubelet[1951]: I0317 18:41:38.445822 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6d95c88c-1ae9-4400-ad3e-55e2633737f0-host-proc-sys-kernel\") pod \"cilium-db9z2\" (UID: \"6d95c88c-1ae9-4400-ad3e-55e2633737f0\") " pod="kube-system/cilium-db9z2" Mar 17 18:41:38.446573 kubelet[1951]: I0317 18:41:38.445844 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/6d95c88c-1ae9-4400-ad3e-55e2633737f0-hubble-tls\") pod \"cilium-db9z2\" (UID: \"6d95c88c-1ae9-4400-ad3e-55e2633737f0\") " pod="kube-system/cilium-db9z2" Mar 17 18:41:38.446573 kubelet[1951]: I0317 18:41:38.445865 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5qr5\" (UniqueName: \"kubernetes.io/projected/6d95c88c-1ae9-4400-ad3e-55e2633737f0-kube-api-access-q5qr5\") pod \"cilium-db9z2\" (UID: \"6d95c88c-1ae9-4400-ad3e-55e2633737f0\") " pod="kube-system/cilium-db9z2" Mar 17 18:41:38.446573 kubelet[1951]: I0317 18:41:38.445923 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2933feb7-6787-48ee-a2f0-b8465a54af1a-xtables-lock\") pod \"kube-proxy-pjqxb\" (UID: \"2933feb7-6787-48ee-a2f0-b8465a54af1a\") " pod="kube-system/kube-proxy-pjqxb" Mar 17 18:41:38.446573 kubelet[1951]: I0317 18:41:38.445951 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6d95c88c-1ae9-4400-ad3e-55e2633737f0-bpf-maps\") pod \"cilium-db9z2\" (UID: \"6d95c88c-1ae9-4400-ad3e-55e2633737f0\") " pod="kube-system/cilium-db9z2" Mar 17 18:41:38.446573 kubelet[1951]: I0317 18:41:38.445978 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6d95c88c-1ae9-4400-ad3e-55e2633737f0-hostproc\") pod \"cilium-db9z2\" (UID: \"6d95c88c-1ae9-4400-ad3e-55e2633737f0\") " pod="kube-system/cilium-db9z2" Mar 17 18:41:38.446942 kubelet[1951]: I0317 18:41:38.446002 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6d95c88c-1ae9-4400-ad3e-55e2633737f0-clustermesh-secrets\") pod \"cilium-db9z2\" (UID: 
\"6d95c88c-1ae9-4400-ad3e-55e2633737f0\") " pod="kube-system/cilium-db9z2" Mar 17 18:41:38.446942 kubelet[1951]: I0317 18:41:38.446029 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9d248573-527e-479a-8562-24efc3702407-cilium-config-path\") pod \"cilium-operator-5d85765b45-lwsj5\" (UID: \"9d248573-527e-479a-8562-24efc3702407\") " pod="kube-system/cilium-operator-5d85765b45-lwsj5" Mar 17 18:41:38.446942 kubelet[1951]: I0317 18:41:38.446061 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ts5v2\" (UniqueName: \"kubernetes.io/projected/9d248573-527e-479a-8562-24efc3702407-kube-api-access-ts5v2\") pod \"cilium-operator-5d85765b45-lwsj5\" (UID: \"9d248573-527e-479a-8562-24efc3702407\") " pod="kube-system/cilium-operator-5d85765b45-lwsj5" Mar 17 18:41:38.446942 kubelet[1951]: I0317 18:41:38.446090 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b8fw\" (UniqueName: \"kubernetes.io/projected/2933feb7-6787-48ee-a2f0-b8465a54af1a-kube-api-access-9b8fw\") pod \"kube-proxy-pjqxb\" (UID: \"2933feb7-6787-48ee-a2f0-b8465a54af1a\") " pod="kube-system/kube-proxy-pjqxb" Mar 17 18:41:38.446942 kubelet[1951]: I0317 18:41:38.446114 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6d95c88c-1ae9-4400-ad3e-55e2633737f0-cilium-run\") pod \"cilium-db9z2\" (UID: \"6d95c88c-1ae9-4400-ad3e-55e2633737f0\") " pod="kube-system/cilium-db9z2" Mar 17 18:41:38.579959 kubelet[1951]: E0317 18:41:38.579870 1951 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[clustermesh-secrets hubble-tls kube-api-access-q5qr5], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-db9z2" 
podUID="6d95c88c-1ae9-4400-ad3e-55e2633737f0" Mar 17 18:41:39.251436 kubelet[1951]: I0317 18:41:39.251326 1951 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Mar 17 18:41:39.553092 kubelet[1951]: E0317 18:41:39.552947 1951 secret.go:188] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Mar 17 18:41:39.553577 kubelet[1951]: E0317 18:41:39.553549 1951 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Mar 17 18:41:39.554750 kubelet[1951]: E0317 18:41:39.554691 1951 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d95c88c-1ae9-4400-ad3e-55e2633737f0-clustermesh-secrets podName:6d95c88c-1ae9-4400-ad3e-55e2633737f0 nodeName:}" failed. No retries permitted until 2025-03-17 18:41:40.053383522 +0000 UTC m=+5.956959579 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/6d95c88c-1ae9-4400-ad3e-55e2633737f0-clustermesh-secrets") pod "cilium-db9z2" (UID: "6d95c88c-1ae9-4400-ad3e-55e2633737f0") : failed to sync secret cache: timed out waiting for the condition Mar 17 18:41:39.555050 kubelet[1951]: E0317 18:41:39.555027 1951 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2933feb7-6787-48ee-a2f0-b8465a54af1a-kube-proxy podName:2933feb7-6787-48ee-a2f0-b8465a54af1a nodeName:}" failed. No retries permitted until 2025-03-17 18:41:40.054986465 +0000 UTC m=+5.958562506 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/2933feb7-6787-48ee-a2f0-b8465a54af1a-kube-proxy") pod "kube-proxy-pjqxb" (UID: "2933feb7-6787-48ee-a2f0-b8465a54af1a") : failed to sync configmap cache: timed out waiting for the condition Mar 17 18:41:39.668078 kubelet[1951]: I0317 18:41:39.668006 1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6d95c88c-1ae9-4400-ad3e-55e2633737f0-host-proc-sys-kernel\") pod \"6d95c88c-1ae9-4400-ad3e-55e2633737f0\" (UID: \"6d95c88c-1ae9-4400-ad3e-55e2633737f0\") " Mar 17 18:41:39.669031 kubelet[1951]: I0317 18:41:39.668966 1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6d95c88c-1ae9-4400-ad3e-55e2633737f0-hubble-tls\") pod \"6d95c88c-1ae9-4400-ad3e-55e2633737f0\" (UID: \"6d95c88c-1ae9-4400-ad3e-55e2633737f0\") " Mar 17 18:41:39.669771 kubelet[1951]: I0317 18:41:39.669738 1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6d95c88c-1ae9-4400-ad3e-55e2633737f0-etc-cni-netd\") pod \"6d95c88c-1ae9-4400-ad3e-55e2633737f0\" (UID: \"6d95c88c-1ae9-4400-ad3e-55e2633737f0\") " Mar 17 18:41:39.670060 kubelet[1951]: I0317 18:41:39.669948 1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6d95c88c-1ae9-4400-ad3e-55e2633737f0-xtables-lock\") pod \"6d95c88c-1ae9-4400-ad3e-55e2633737f0\" (UID: \"6d95c88c-1ae9-4400-ad3e-55e2633737f0\") " Mar 17 18:41:39.674733 kubelet[1951]: I0317 18:41:39.674576 1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6d95c88c-1ae9-4400-ad3e-55e2633737f0-cni-path\") pod \"6d95c88c-1ae9-4400-ad3e-55e2633737f0\" (UID: 
\"6d95c88c-1ae9-4400-ad3e-55e2633737f0\") " Mar 17 18:41:39.674733 kubelet[1951]: I0317 18:41:39.674646 1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6d95c88c-1ae9-4400-ad3e-55e2633737f0-bpf-maps\") pod \"6d95c88c-1ae9-4400-ad3e-55e2633737f0\" (UID: \"6d95c88c-1ae9-4400-ad3e-55e2633737f0\") " Mar 17 18:41:39.675302 kubelet[1951]: I0317 18:41:39.674805 1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6d95c88c-1ae9-4400-ad3e-55e2633737f0-cilium-cgroup\") pod \"6d95c88c-1ae9-4400-ad3e-55e2633737f0\" (UID: \"6d95c88c-1ae9-4400-ad3e-55e2633737f0\") " Mar 17 18:41:39.675302 kubelet[1951]: I0317 18:41:39.674854 1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6d95c88c-1ae9-4400-ad3e-55e2633737f0-cilium-config-path\") pod \"6d95c88c-1ae9-4400-ad3e-55e2633737f0\" (UID: \"6d95c88c-1ae9-4400-ad3e-55e2633737f0\") " Mar 17 18:41:39.675302 kubelet[1951]: I0317 18:41:39.674877 1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6d95c88c-1ae9-4400-ad3e-55e2633737f0-lib-modules\") pod \"6d95c88c-1ae9-4400-ad3e-55e2633737f0\" (UID: \"6d95c88c-1ae9-4400-ad3e-55e2633737f0\") " Mar 17 18:41:39.675302 kubelet[1951]: I0317 18:41:39.674908 1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6d95c88c-1ae9-4400-ad3e-55e2633737f0-cilium-run\") pod \"6d95c88c-1ae9-4400-ad3e-55e2633737f0\" (UID: \"6d95c88c-1ae9-4400-ad3e-55e2633737f0\") " Mar 17 18:41:39.675302 kubelet[1951]: I0317 18:41:39.675010 1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/6d95c88c-1ae9-4400-ad3e-55e2633737f0-host-proc-sys-net\") pod \"6d95c88c-1ae9-4400-ad3e-55e2633737f0\" (UID: \"6d95c88c-1ae9-4400-ad3e-55e2633737f0\") " Mar 17 18:41:39.675302 kubelet[1951]: I0317 18:41:39.675044 1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6d95c88c-1ae9-4400-ad3e-55e2633737f0-hostproc\") pod \"6d95c88c-1ae9-4400-ad3e-55e2633737f0\" (UID: \"6d95c88c-1ae9-4400-ad3e-55e2633737f0\") " Mar 17 18:41:39.675816 kubelet[1951]: I0317 18:41:39.668220 1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d95c88c-1ae9-4400-ad3e-55e2633737f0-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6d95c88c-1ae9-4400-ad3e-55e2633737f0" (UID: "6d95c88c-1ae9-4400-ad3e-55e2633737f0"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:41:39.675816 kubelet[1951]: I0317 18:41:39.669857 1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d95c88c-1ae9-4400-ad3e-55e2633737f0-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6d95c88c-1ae9-4400-ad3e-55e2633737f0" (UID: "6d95c88c-1ae9-4400-ad3e-55e2633737f0"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:41:39.675816 kubelet[1951]: I0317 18:41:39.670126 1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d95c88c-1ae9-4400-ad3e-55e2633737f0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6d95c88c-1ae9-4400-ad3e-55e2633737f0" (UID: "6d95c88c-1ae9-4400-ad3e-55e2633737f0"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:41:39.675816 kubelet[1951]: I0317 18:41:39.675416 1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d95c88c-1ae9-4400-ad3e-55e2633737f0-hostproc" (OuterVolumeSpecName: "hostproc") pod "6d95c88c-1ae9-4400-ad3e-55e2633737f0" (UID: "6d95c88c-1ae9-4400-ad3e-55e2633737f0"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:41:39.675816 kubelet[1951]: I0317 18:41:39.675592 1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d95c88c-1ae9-4400-ad3e-55e2633737f0-cni-path" (OuterVolumeSpecName: "cni-path") pod "6d95c88c-1ae9-4400-ad3e-55e2633737f0" (UID: "6d95c88c-1ae9-4400-ad3e-55e2633737f0"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:41:39.676471 kubelet[1951]: I0317 18:41:39.675639 1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d95c88c-1ae9-4400-ad3e-55e2633737f0-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6d95c88c-1ae9-4400-ad3e-55e2633737f0" (UID: "6d95c88c-1ae9-4400-ad3e-55e2633737f0"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:41:39.676471 kubelet[1951]: I0317 18:41:39.675664 1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d95c88c-1ae9-4400-ad3e-55e2633737f0-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6d95c88c-1ae9-4400-ad3e-55e2633737f0" (UID: "6d95c88c-1ae9-4400-ad3e-55e2633737f0"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:41:39.686030 kubelet[1951]: I0317 18:41:39.685967 1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d95c88c-1ae9-4400-ad3e-55e2633737f0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6d95c88c-1ae9-4400-ad3e-55e2633737f0" (UID: "6d95c88c-1ae9-4400-ad3e-55e2633737f0"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:41:39.686648 kubelet[1951]: I0317 18:41:39.686595 1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d95c88c-1ae9-4400-ad3e-55e2633737f0-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6d95c88c-1ae9-4400-ad3e-55e2633737f0" (UID: "6d95c88c-1ae9-4400-ad3e-55e2633737f0"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:41:39.686815 kubelet[1951]: I0317 18:41:39.686797 1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d95c88c-1ae9-4400-ad3e-55e2633737f0-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6d95c88c-1ae9-4400-ad3e-55e2633737f0" (UID: "6d95c88c-1ae9-4400-ad3e-55e2633737f0"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:41:39.688628 kubelet[1951]: I0317 18:41:39.687889 1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d95c88c-1ae9-4400-ad3e-55e2633737f0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6d95c88c-1ae9-4400-ad3e-55e2633737f0" (UID: "6d95c88c-1ae9-4400-ad3e-55e2633737f0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 18:41:39.705205 systemd[1]: var-lib-kubelet-pods-6d95c88c\x2d1ae9\x2d4400\x2dad3e\x2d55e2633737f0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Mar 17 18:41:39.708780 kubelet[1951]: I0317 18:41:39.708624 1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d95c88c-1ae9-4400-ad3e-55e2633737f0-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6d95c88c-1ae9-4400-ad3e-55e2633737f0" (UID: "6d95c88c-1ae9-4400-ad3e-55e2633737f0"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:41:39.777005 kubelet[1951]: I0317 18:41:39.775886 1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5qr5\" (UniqueName: \"kubernetes.io/projected/6d95c88c-1ae9-4400-ad3e-55e2633737f0-kube-api-access-q5qr5\") pod \"6d95c88c-1ae9-4400-ad3e-55e2633737f0\" (UID: \"6d95c88c-1ae9-4400-ad3e-55e2633737f0\") " Mar 17 18:41:39.777902 kubelet[1951]: I0317 18:41:39.777854 1951 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6d95c88c-1ae9-4400-ad3e-55e2633737f0-etc-cni-netd\") on node \"ci-3510.3.7-d-b51ee9817d\" DevicePath \"\"" Mar 17 18:41:39.778209 kubelet[1951]: I0317 18:41:39.778187 1951 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6d95c88c-1ae9-4400-ad3e-55e2633737f0-xtables-lock\") on node \"ci-3510.3.7-d-b51ee9817d\" DevicePath \"\"" Mar 17 18:41:39.778614 kubelet[1951]: I0317 18:41:39.778590 1951 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6d95c88c-1ae9-4400-ad3e-55e2633737f0-cni-path\") on node \"ci-3510.3.7-d-b51ee9817d\" DevicePath \"\"" Mar 17 18:41:39.779058 kubelet[1951]: I0317 18:41:39.779025 1951 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6d95c88c-1ae9-4400-ad3e-55e2633737f0-bpf-maps\") on node \"ci-3510.3.7-d-b51ee9817d\" DevicePath \"\"" Mar 17 18:41:39.779256 kubelet[1951]: I0317 18:41:39.779236 1951 reconciler_common.go:288] 
"Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6d95c88c-1ae9-4400-ad3e-55e2633737f0-cilium-cgroup\") on node \"ci-3510.3.7-d-b51ee9817d\" DevicePath \"\"" Mar 17 18:41:39.779401 kubelet[1951]: I0317 18:41:39.779382 1951 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6d95c88c-1ae9-4400-ad3e-55e2633737f0-cilium-config-path\") on node \"ci-3510.3.7-d-b51ee9817d\" DevicePath \"\"" Mar 17 18:41:39.779859 kubelet[1951]: I0317 18:41:39.779828 1951 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6d95c88c-1ae9-4400-ad3e-55e2633737f0-lib-modules\") on node \"ci-3510.3.7-d-b51ee9817d\" DevicePath \"\"" Mar 17 18:41:39.780014 kubelet[1951]: I0317 18:41:39.779997 1951 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6d95c88c-1ae9-4400-ad3e-55e2633737f0-cilium-run\") on node \"ci-3510.3.7-d-b51ee9817d\" DevicePath \"\"" Mar 17 18:41:39.780384 kubelet[1951]: I0317 18:41:39.780359 1951 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6d95c88c-1ae9-4400-ad3e-55e2633737f0-host-proc-sys-net\") on node \"ci-3510.3.7-d-b51ee9817d\" DevicePath \"\"" Mar 17 18:41:39.780643 kubelet[1951]: I0317 18:41:39.780615 1951 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6d95c88c-1ae9-4400-ad3e-55e2633737f0-hostproc\") on node \"ci-3510.3.7-d-b51ee9817d\" DevicePath \"\"" Mar 17 18:41:39.781017 kubelet[1951]: I0317 18:41:39.780993 1951 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6d95c88c-1ae9-4400-ad3e-55e2633737f0-host-proc-sys-kernel\") on node \"ci-3510.3.7-d-b51ee9817d\" DevicePath \"\"" Mar 17 18:41:39.781200 kubelet[1951]: I0317 18:41:39.781179 1951 reconciler_common.go:288] 
"Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6d95c88c-1ae9-4400-ad3e-55e2633737f0-hubble-tls\") on node \"ci-3510.3.7-d-b51ee9817d\" DevicePath \"\"" Mar 17 18:41:39.781578 kubelet[1951]: I0317 18:41:39.781024 1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d95c88c-1ae9-4400-ad3e-55e2633737f0-kube-api-access-q5qr5" (OuterVolumeSpecName: "kube-api-access-q5qr5") pod "6d95c88c-1ae9-4400-ad3e-55e2633737f0" (UID: "6d95c88c-1ae9-4400-ad3e-55e2633737f0"). InnerVolumeSpecName "kube-api-access-q5qr5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:41:39.881882 kubelet[1951]: I0317 18:41:39.881700 1951 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-q5qr5\" (UniqueName: \"kubernetes.io/projected/6d95c88c-1ae9-4400-ad3e-55e2633737f0-kube-api-access-q5qr5\") on node \"ci-3510.3.7-d-b51ee9817d\" DevicePath \"\"" Mar 17 18:41:39.919447 kubelet[1951]: E0317 18:41:39.917386 1951 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 18:41:39.919728 env[1205]: time="2025-03-17T18:41:39.918627369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-lwsj5,Uid:9d248573-527e-479a-8562-24efc3702407,Namespace:kube-system,Attempt:0,}" Mar 17 18:41:39.964836 env[1205]: time="2025-03-17T18:41:39.964686348Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:41:39.964836 env[1205]: time="2025-03-17T18:41:39.964767189Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:41:39.965280 env[1205]: time="2025-03-17T18:41:39.964812858Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:41:39.967011 env[1205]: time="2025-03-17T18:41:39.965695489Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/92867cd1c69d2b9669797580d2a16b68a8f3675f8ef6f3572b9b3ee18e418181 pid=2017 runtime=io.containerd.runc.v2 Mar 17 18:41:40.012717 systemd[1]: Started cri-containerd-92867cd1c69d2b9669797580d2a16b68a8f3675f8ef6f3572b9b3ee18e418181.scope. Mar 17 18:41:40.155690 kubelet[1951]: E0317 18:41:40.155240 1951 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 18:41:40.158868 env[1205]: time="2025-03-17T18:41:40.158806065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pjqxb,Uid:2933feb7-6787-48ee-a2f0-b8465a54af1a,Namespace:kube-system,Attempt:0,}" Mar 17 18:41:40.162823 env[1205]: time="2025-03-17T18:41:40.162768092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-lwsj5,Uid:9d248573-527e-479a-8562-24efc3702407,Namespace:kube-system,Attempt:0,} returns sandbox id \"92867cd1c69d2b9669797580d2a16b68a8f3675f8ef6f3572b9b3ee18e418181\"" Mar 17 18:41:40.164537 kubelet[1951]: E0317 18:41:40.164411 1951 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 18:41:40.174422 env[1205]: time="2025-03-17T18:41:40.174352894Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 17 18:41:40.192921 kubelet[1951]: I0317 18:41:40.192864 1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/6d95c88c-1ae9-4400-ad3e-55e2633737f0-clustermesh-secrets\") pod \"6d95c88c-1ae9-4400-ad3e-55e2633737f0\" (UID: \"6d95c88c-1ae9-4400-ad3e-55e2633737f0\") " Mar 17 18:41:40.200452 kubelet[1951]: I0317 18:41:40.200375 1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d95c88c-1ae9-4400-ad3e-55e2633737f0-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6d95c88c-1ae9-4400-ad3e-55e2633737f0" (UID: "6d95c88c-1ae9-4400-ad3e-55e2633737f0"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 18:41:40.228692 env[1205]: time="2025-03-17T18:41:40.228342234Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:41:40.228985 env[1205]: time="2025-03-17T18:41:40.228709441Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:41:40.228985 env[1205]: time="2025-03-17T18:41:40.228758646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:41:40.229410 env[1205]: time="2025-03-17T18:41:40.229314977Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e3f50343a4989386e202e7176731f5af2aee883a1cac95b2b57e0e8622917320 pid=2061 runtime=io.containerd.runc.v2 Mar 17 18:41:40.272001 systemd[1]: var-lib-kubelet-pods-6d95c88c\x2d1ae9\x2d4400\x2dad3e\x2d55e2633737f0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dq5qr5.mount: Deactivated successfully. 
Mar 17 18:41:40.294520 kubelet[1951]: I0317 18:41:40.294421 1951 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6d95c88c-1ae9-4400-ad3e-55e2633737f0-clustermesh-secrets\") on node \"ci-3510.3.7-d-b51ee9817d\" DevicePath \"\""
Mar 17 18:41:40.298553 systemd[1]: run-containerd-runc-k8s.io-e3f50343a4989386e202e7176731f5af2aee883a1cac95b2b57e0e8622917320-runc.EyBqa6.mount: Deactivated successfully.
Mar 17 18:41:40.329986 systemd[1]: Started cri-containerd-e3f50343a4989386e202e7176731f5af2aee883a1cac95b2b57e0e8622917320.scope.
Mar 17 18:41:40.436770 env[1205]: time="2025-03-17T18:41:40.436536501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pjqxb,Uid:2933feb7-6787-48ee-a2f0-b8465a54af1a,Namespace:kube-system,Attempt:0,} returns sandbox id \"e3f50343a4989386e202e7176731f5af2aee883a1cac95b2b57e0e8622917320\""
Mar 17 18:41:40.445453 kubelet[1951]: E0317 18:41:40.441393 1951 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 18:41:40.453884 env[1205]: time="2025-03-17T18:41:40.453740856Z" level=info msg="CreateContainer within sandbox \"e3f50343a4989386e202e7176731f5af2aee883a1cac95b2b57e0e8622917320\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 17 18:41:40.489718 systemd[1]: Removed slice kubepods-burstable-pod6d95c88c_1ae9_4400_ad3e_55e2633737f0.slice.
Mar 17 18:41:40.526442 env[1205]: time="2025-03-17T18:41:40.526049748Z" level=info msg="CreateContainer within sandbox \"e3f50343a4989386e202e7176731f5af2aee883a1cac95b2b57e0e8622917320\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b6258313924f3557d33e55a4f82d6837e6a753dd2307874a0bb9100ad929079c\""
Mar 17 18:41:40.530244 env[1205]: time="2025-03-17T18:41:40.527324450Z" level=info msg="StartContainer for \"b6258313924f3557d33e55a4f82d6837e6a753dd2307874a0bb9100ad929079c\""
Mar 17 18:41:40.655177 systemd[1]: Started cri-containerd-b6258313924f3557d33e55a4f82d6837e6a753dd2307874a0bb9100ad929079c.scope.
Mar 17 18:41:40.775206 systemd[1]: Created slice kubepods-burstable-pod3bc86c48_db3f_499b_b59a_11a003b1c9d1.slice.
Mar 17 18:41:40.830665 env[1205]: time="2025-03-17T18:41:40.830590791Z" level=info msg="StartContainer for \"b6258313924f3557d33e55a4f82d6837e6a753dd2307874a0bb9100ad929079c\" returns successfully"
Mar 17 18:41:40.935226 kubelet[1951]: I0317 18:41:40.933819 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3bc86c48-db3f-499b-b59a-11a003b1c9d1-bpf-maps\") pod \"cilium-4xt55\" (UID: \"3bc86c48-db3f-499b-b59a-11a003b1c9d1\") " pod="kube-system/cilium-4xt55"
Mar 17 18:41:40.935226 kubelet[1951]: I0317 18:41:40.933894 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3bc86c48-db3f-499b-b59a-11a003b1c9d1-cilium-config-path\") pod \"cilium-4xt55\" (UID: \"3bc86c48-db3f-499b-b59a-11a003b1c9d1\") " pod="kube-system/cilium-4xt55"
Mar 17 18:41:40.935226 kubelet[1951]: I0317 18:41:40.933924 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3bc86c48-db3f-499b-b59a-11a003b1c9d1-cilium-cgroup\") pod \"cilium-4xt55\" (UID: \"3bc86c48-db3f-499b-b59a-11a003b1c9d1\") " pod="kube-system/cilium-4xt55"
Mar 17 18:41:40.935226 kubelet[1951]: I0317 18:41:40.933947 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3bc86c48-db3f-499b-b59a-11a003b1c9d1-hubble-tls\") pod \"cilium-4xt55\" (UID: \"3bc86c48-db3f-499b-b59a-11a003b1c9d1\") " pod="kube-system/cilium-4xt55"
Mar 17 18:41:40.935226 kubelet[1951]: I0317 18:41:40.933987 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3bc86c48-db3f-499b-b59a-11a003b1c9d1-clustermesh-secrets\") pod \"cilium-4xt55\" (UID: \"3bc86c48-db3f-499b-b59a-11a003b1c9d1\") " pod="kube-system/cilium-4xt55"
Mar 17 18:41:40.935226 kubelet[1951]: I0317 18:41:40.934010 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdslj\" (UniqueName: \"kubernetes.io/projected/3bc86c48-db3f-499b-b59a-11a003b1c9d1-kube-api-access-gdslj\") pod \"cilium-4xt55\" (UID: \"3bc86c48-db3f-499b-b59a-11a003b1c9d1\") " pod="kube-system/cilium-4xt55"
Mar 17 18:41:40.936262 kubelet[1951]: I0317 18:41:40.934042 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3bc86c48-db3f-499b-b59a-11a003b1c9d1-cilium-run\") pod \"cilium-4xt55\" (UID: \"3bc86c48-db3f-499b-b59a-11a003b1c9d1\") " pod="kube-system/cilium-4xt55"
Mar 17 18:41:40.936262 kubelet[1951]: I0317 18:41:40.934064 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3bc86c48-db3f-499b-b59a-11a003b1c9d1-cni-path\") pod \"cilium-4xt55\" (UID: \"3bc86c48-db3f-499b-b59a-11a003b1c9d1\") " pod="kube-system/cilium-4xt55"
Mar 17 18:41:40.936262 kubelet[1951]: I0317 18:41:40.934090 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3bc86c48-db3f-499b-b59a-11a003b1c9d1-lib-modules\") pod \"cilium-4xt55\" (UID: \"3bc86c48-db3f-499b-b59a-11a003b1c9d1\") " pod="kube-system/cilium-4xt55"
Mar 17 18:41:40.936262 kubelet[1951]: I0317 18:41:40.934118 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3bc86c48-db3f-499b-b59a-11a003b1c9d1-xtables-lock\") pod \"cilium-4xt55\" (UID: \"3bc86c48-db3f-499b-b59a-11a003b1c9d1\") " pod="kube-system/cilium-4xt55"
Mar 17 18:41:40.936262 kubelet[1951]: I0317 18:41:40.934163 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3bc86c48-db3f-499b-b59a-11a003b1c9d1-host-proc-sys-kernel\") pod \"cilium-4xt55\" (UID: \"3bc86c48-db3f-499b-b59a-11a003b1c9d1\") " pod="kube-system/cilium-4xt55"
Mar 17 18:41:40.936262 kubelet[1951]: I0317 18:41:40.934187 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3bc86c48-db3f-499b-b59a-11a003b1c9d1-hostproc\") pod \"cilium-4xt55\" (UID: \"3bc86c48-db3f-499b-b59a-11a003b1c9d1\") " pod="kube-system/cilium-4xt55"
Mar 17 18:41:40.936545 kubelet[1951]: I0317 18:41:40.934214 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3bc86c48-db3f-499b-b59a-11a003b1c9d1-etc-cni-netd\") pod \"cilium-4xt55\" (UID: \"3bc86c48-db3f-499b-b59a-11a003b1c9d1\") " pod="kube-system/cilium-4xt55"
Mar 17 18:41:40.936545 kubelet[1951]: I0317 18:41:40.934237 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3bc86c48-db3f-499b-b59a-11a003b1c9d1-host-proc-sys-net\") pod \"cilium-4xt55\" (UID: \"3bc86c48-db3f-499b-b59a-11a003b1c9d1\") " pod="kube-system/cilium-4xt55"
Mar 17 18:41:41.088340 kubelet[1951]: E0317 18:41:41.085225 1951 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 18:41:41.088710 env[1205]: time="2025-03-17T18:41:41.085965920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4xt55,Uid:3bc86c48-db3f-499b-b59a-11a003b1c9d1,Namespace:kube-system,Attempt:0,}"
Mar 17 18:41:41.121498 env[1205]: time="2025-03-17T18:41:41.118658712Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:41:41.121498 env[1205]: time="2025-03-17T18:41:41.118746103Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:41:41.121498 env[1205]: time="2025-03-17T18:41:41.118766084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:41:41.122244 env[1205]: time="2025-03-17T18:41:41.122153083Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6f1f4a2327bc2166b748aade454c55e505a8c62ee91b33847937788d8b6387d5 pid=2140 runtime=io.containerd.runc.v2
Mar 17 18:41:41.153116 systemd[1]: Started cri-containerd-6f1f4a2327bc2166b748aade454c55e505a8c62ee91b33847937788d8b6387d5.scope.
Mar 17 18:41:41.247199 env[1205]: time="2025-03-17T18:41:41.247115236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4xt55,Uid:3bc86c48-db3f-499b-b59a-11a003b1c9d1,Namespace:kube-system,Attempt:0,} returns sandbox id \"6f1f4a2327bc2166b748aade454c55e505a8c62ee91b33847937788d8b6387d5\""
Mar 17 18:41:41.249045 kubelet[1951]: E0317 18:41:41.248982 1951 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 18:41:41.272859 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3256544511.mount: Deactivated successfully.
Mar 17 18:41:41.575789 kubelet[1951]: E0317 18:41:41.575733 1951 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 18:41:42.030914 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2840258654.mount: Deactivated successfully.
Mar 17 18:41:42.477098 kubelet[1951]: I0317 18:41:42.477044 1951 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d95c88c-1ae9-4400-ad3e-55e2633737f0" path="/var/lib/kubelet/pods/6d95c88c-1ae9-4400-ad3e-55e2633737f0/volumes"
Mar 17 18:41:42.601920 kubelet[1951]: E0317 18:41:42.601033 1951 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 18:41:43.958242 env[1205]: time="2025-03-17T18:41:43.958169300Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:41:43.966426 env[1205]: time="2025-03-17T18:41:43.964554568Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:41:43.983237 env[1205]: time="2025-03-17T18:41:43.983163612Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:41:43.985160 env[1205]: time="2025-03-17T18:41:43.984276773Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Mar 17 18:41:43.994256 env[1205]: time="2025-03-17T18:41:43.993298233Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Mar 17 18:41:43.998432 env[1205]: time="2025-03-17T18:41:43.998340346Z" level=info msg="CreateContainer within sandbox \"92867cd1c69d2b9669797580d2a16b68a8f3675f8ef6f3572b9b3ee18e418181\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Mar 17 18:41:44.010261 kubelet[1951]: E0317 18:41:44.010193 1951 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 18:41:44.060226 env[1205]: time="2025-03-17T18:41:44.059662854Z" level=info msg="CreateContainer within sandbox \"92867cd1c69d2b9669797580d2a16b68a8f3675f8ef6f3572b9b3ee18e418181\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"a1a9415cf0d93b49d87e4c1dbf4703d21de35edb30a7df912da07d339b60db3a\""
Mar 17 18:41:44.061959 env[1205]: time="2025-03-17T18:41:44.061449360Z" level=info msg="StartContainer for \"a1a9415cf0d93b49d87e4c1dbf4703d21de35edb30a7df912da07d339b60db3a\""
Mar 17 18:41:44.076017 kubelet[1951]: I0317 18:41:44.071656 1951 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pjqxb" podStartSLOduration=6.071626163 podStartE2EDuration="6.071626163s" podCreationTimestamp="2025-03-17 18:41:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:41:41.637549793 +0000 UTC m=+7.541125855" watchObservedRunningTime="2025-03-17 18:41:44.071626163 +0000 UTC m=+9.975202230"
Mar 17 18:41:44.178080 systemd[1]: run-containerd-runc-k8s.io-a1a9415cf0d93b49d87e4c1dbf4703d21de35edb30a7df912da07d339b60db3a-runc.8XgEv5.mount: Deactivated successfully.
Mar 17 18:41:44.204994 systemd[1]: Started cri-containerd-a1a9415cf0d93b49d87e4c1dbf4703d21de35edb30a7df912da07d339b60db3a.scope.
Mar 17 18:41:44.348485 env[1205]: time="2025-03-17T18:41:44.348314348Z" level=info msg="StartContainer for \"a1a9415cf0d93b49d87e4c1dbf4703d21de35edb30a7df912da07d339b60db3a\" returns successfully"
Mar 17 18:41:44.611548 kubelet[1951]: E0317 18:41:44.611328 1951 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 18:41:44.618249 kubelet[1951]: E0317 18:41:44.618194 1951 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 18:41:44.714264 kubelet[1951]: I0317 18:41:44.711944 1951 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-lwsj5" podStartSLOduration=2.8983104920000002 podStartE2EDuration="6.711911896s" podCreationTimestamp="2025-03-17 18:41:38 +0000 UTC" firstStartedPulling="2025-03-17 18:41:40.173066243 +0000 UTC m=+6.076642298" lastFinishedPulling="2025-03-17 18:41:43.986667657 +0000 UTC m=+9.890243702" observedRunningTime="2025-03-17 18:41:44.709994154 +0000 UTC m=+10.613570217" watchObservedRunningTime="2025-03-17 18:41:44.711911896 +0000 UTC m=+10.615487957"
Mar 17 18:41:45.618736 kubelet[1951]: E0317 18:41:45.618679 1951 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 18:41:52.213450 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2710465532.mount: Deactivated successfully.
Mar 17 18:41:57.128937 env[1205]: time="2025-03-17T18:41:57.128845458Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:41:57.136298 env[1205]: time="2025-03-17T18:41:57.132786690Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:41:57.158451 env[1205]: time="2025-03-17T18:41:57.158386014Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:41:57.161636 env[1205]: time="2025-03-17T18:41:57.161550317Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Mar 17 18:41:57.170124 env[1205]: time="2025-03-17T18:41:57.170055288Z" level=info msg="CreateContainer within sandbox \"6f1f4a2327bc2166b748aade454c55e505a8c62ee91b33847937788d8b6387d5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 17 18:41:57.198576 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3325122757.mount: Deactivated successfully.
Mar 17 18:41:57.215809 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4018289619.mount: Deactivated successfully.
Mar 17 18:41:57.231123 env[1205]: time="2025-03-17T18:41:57.231039175Z" level=info msg="CreateContainer within sandbox \"6f1f4a2327bc2166b748aade454c55e505a8c62ee91b33847937788d8b6387d5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6486801a5b7a81b7acb069f6f8e81c4d2872dba9ba2044b58109618b7092b562\""
Mar 17 18:41:57.234314 env[1205]: time="2025-03-17T18:41:57.234233274Z" level=info msg="StartContainer for \"6486801a5b7a81b7acb069f6f8e81c4d2872dba9ba2044b58109618b7092b562\""
Mar 17 18:41:57.294049 systemd[1]: Started cri-containerd-6486801a5b7a81b7acb069f6f8e81c4d2872dba9ba2044b58109618b7092b562.scope.
Mar 17 18:41:57.401362 env[1205]: time="2025-03-17T18:41:57.400421346Z" level=info msg="StartContainer for \"6486801a5b7a81b7acb069f6f8e81c4d2872dba9ba2044b58109618b7092b562\" returns successfully"
Mar 17 18:41:57.421613 systemd[1]: cri-containerd-6486801a5b7a81b7acb069f6f8e81c4d2872dba9ba2044b58109618b7092b562.scope: Deactivated successfully.
Mar 17 18:41:57.506392 env[1205]: time="2025-03-17T18:41:57.506311547Z" level=info msg="shim disconnected" id=6486801a5b7a81b7acb069f6f8e81c4d2872dba9ba2044b58109618b7092b562
Mar 17 18:41:57.506392 env[1205]: time="2025-03-17T18:41:57.506397213Z" level=warning msg="cleaning up after shim disconnected" id=6486801a5b7a81b7acb069f6f8e81c4d2872dba9ba2044b58109618b7092b562 namespace=k8s.io
Mar 17 18:41:57.506784 env[1205]: time="2025-03-17T18:41:57.506414138Z" level=info msg="cleaning up dead shim"
Mar 17 18:41:57.540845 env[1205]: time="2025-03-17T18:41:57.539880048Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:41:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2398 runtime=io.containerd.runc.v2\n"
Mar 17 18:41:57.700253 kubelet[1951]: E0317 18:41:57.699294 1951 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 18:41:57.719052 env[1205]: time="2025-03-17T18:41:57.717611152Z" level=info msg="CreateContainer within sandbox \"6f1f4a2327bc2166b748aade454c55e505a8c62ee91b33847937788d8b6387d5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 17 18:41:57.754712 env[1205]: time="2025-03-17T18:41:57.754322171Z" level=info msg="CreateContainer within sandbox \"6f1f4a2327bc2166b748aade454c55e505a8c62ee91b33847937788d8b6387d5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e8c3d7ea3c997f574a493002efe148066c724932e0aa73418474e818fb006ca7\""
Mar 17 18:41:57.755841 env[1205]: time="2025-03-17T18:41:57.755770379Z" level=info msg="StartContainer for \"e8c3d7ea3c997f574a493002efe148066c724932e0aa73418474e818fb006ca7\""
Mar 17 18:41:57.835517 systemd[1]: Started cri-containerd-e8c3d7ea3c997f574a493002efe148066c724932e0aa73418474e818fb006ca7.scope.
Mar 17 18:41:57.955453 env[1205]: time="2025-03-17T18:41:57.952920135Z" level=info msg="StartContainer for \"e8c3d7ea3c997f574a493002efe148066c724932e0aa73418474e818fb006ca7\" returns successfully"
Mar 17 18:41:57.978586 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 17 18:41:57.979024 systemd[1]: Stopped systemd-sysctl.service.
Mar 17 18:41:57.979844 systemd[1]: Stopping systemd-sysctl.service...
Mar 17 18:41:57.983750 systemd[1]: Starting systemd-sysctl.service...
Mar 17 18:41:57.991600 systemd[1]: cri-containerd-e8c3d7ea3c997f574a493002efe148066c724932e0aa73418474e818fb006ca7.scope: Deactivated successfully.
Mar 17 18:41:58.036983 systemd[1]: Finished systemd-sysctl.service.
Mar 17 18:41:58.052985 env[1205]: time="2025-03-17T18:41:58.052909492Z" level=info msg="shim disconnected" id=e8c3d7ea3c997f574a493002efe148066c724932e0aa73418474e818fb006ca7
Mar 17 18:41:58.052985 env[1205]: time="2025-03-17T18:41:58.052982434Z" level=warning msg="cleaning up after shim disconnected" id=e8c3d7ea3c997f574a493002efe148066c724932e0aa73418474e818fb006ca7 namespace=k8s.io
Mar 17 18:41:58.053455 env[1205]: time="2025-03-17T18:41:58.052999262Z" level=info msg="cleaning up dead shim"
Mar 17 18:41:58.073821 env[1205]: time="2025-03-17T18:41:58.073747966Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:41:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2462 runtime=io.containerd.runc.v2\n"
Mar 17 18:41:58.188619 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6486801a5b7a81b7acb069f6f8e81c4d2872dba9ba2044b58109618b7092b562-rootfs.mount: Deactivated successfully.
Mar 17 18:41:58.712119 kubelet[1951]: E0317 18:41:58.712079 1951 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 18:41:58.725682 env[1205]: time="2025-03-17T18:41:58.725437931Z" level=info msg="CreateContainer within sandbox \"6f1f4a2327bc2166b748aade454c55e505a8c62ee91b33847937788d8b6387d5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 17 18:41:58.758091 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1359659697.mount: Deactivated successfully.
Mar 17 18:41:58.768489 env[1205]: time="2025-03-17T18:41:58.768404530Z" level=info msg="CreateContainer within sandbox \"6f1f4a2327bc2166b748aade454c55e505a8c62ee91b33847937788d8b6387d5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f370d50b5adc3036ef0fd1e016f4622c7666261b77eee1b4709874d6a3fe7fbe\""
Mar 17 18:41:58.771923 env[1205]: time="2025-03-17T18:41:58.770233514Z" level=info msg="StartContainer for \"f370d50b5adc3036ef0fd1e016f4622c7666261b77eee1b4709874d6a3fe7fbe\""
Mar 17 18:41:58.827495 systemd[1]: Started cri-containerd-f370d50b5adc3036ef0fd1e016f4622c7666261b77eee1b4709874d6a3fe7fbe.scope.
Mar 17 18:41:58.903301 env[1205]: time="2025-03-17T18:41:58.903044966Z" level=info msg="StartContainer for \"f370d50b5adc3036ef0fd1e016f4622c7666261b77eee1b4709874d6a3fe7fbe\" returns successfully"
Mar 17 18:41:58.915756 systemd[1]: cri-containerd-f370d50b5adc3036ef0fd1e016f4622c7666261b77eee1b4709874d6a3fe7fbe.scope: Deactivated successfully.
Mar 17 18:41:58.970334 env[1205]: time="2025-03-17T18:41:58.969308043Z" level=info msg="shim disconnected" id=f370d50b5adc3036ef0fd1e016f4622c7666261b77eee1b4709874d6a3fe7fbe
Mar 17 18:41:58.970334 env[1205]: time="2025-03-17T18:41:58.969392317Z" level=warning msg="cleaning up after shim disconnected" id=f370d50b5adc3036ef0fd1e016f4622c7666261b77eee1b4709874d6a3fe7fbe namespace=k8s.io
Mar 17 18:41:58.970334 env[1205]: time="2025-03-17T18:41:58.969408547Z" level=info msg="cleaning up dead shim"
Mar 17 18:41:58.985798 env[1205]: time="2025-03-17T18:41:58.985731967Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:41:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2520 runtime=io.containerd.runc.v2\n"
Mar 17 18:41:59.186892 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f370d50b5adc3036ef0fd1e016f4622c7666261b77eee1b4709874d6a3fe7fbe-rootfs.mount: Deactivated successfully.
Mar 17 18:41:59.733324 kubelet[1951]: E0317 18:41:59.733272 1951 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 18:41:59.742651 env[1205]: time="2025-03-17T18:41:59.742589241Z" level=info msg="CreateContainer within sandbox \"6f1f4a2327bc2166b748aade454c55e505a8c62ee91b33847937788d8b6387d5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 17 18:41:59.782181 env[1205]: time="2025-03-17T18:41:59.781862757Z" level=info msg="CreateContainer within sandbox \"6f1f4a2327bc2166b748aade454c55e505a8c62ee91b33847937788d8b6387d5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"804db83b3f6fcaf6ba243b77d33ed919f93199a7d948c6682c084827d7ee5d63\""
Mar 17 18:41:59.787361 env[1205]: time="2025-03-17T18:41:59.787301555Z" level=info msg="StartContainer for \"804db83b3f6fcaf6ba243b77d33ed919f93199a7d948c6682c084827d7ee5d63\""
Mar 17 18:41:59.874251 systemd[1]: Started cri-containerd-804db83b3f6fcaf6ba243b77d33ed919f93199a7d948c6682c084827d7ee5d63.scope.
Mar 17 18:41:59.979743 systemd[1]: cri-containerd-804db83b3f6fcaf6ba243b77d33ed919f93199a7d948c6682c084827d7ee5d63.scope: Deactivated successfully.
Mar 17 18:41:59.984404 env[1205]: time="2025-03-17T18:41:59.984184306Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3bc86c48_db3f_499b_b59a_11a003b1c9d1.slice/cri-containerd-804db83b3f6fcaf6ba243b77d33ed919f93199a7d948c6682c084827d7ee5d63.scope/memory.events\": no such file or directory"
Mar 17 18:41:59.988192 env[1205]: time="2025-03-17T18:41:59.988089540Z" level=info msg="StartContainer for \"804db83b3f6fcaf6ba243b77d33ed919f93199a7d948c6682c084827d7ee5d63\" returns successfully"
Mar 17 18:42:00.062640 env[1205]: time="2025-03-17T18:42:00.062394596Z" level=info msg="shim disconnected" id=804db83b3f6fcaf6ba243b77d33ed919f93199a7d948c6682c084827d7ee5d63
Mar 17 18:42:00.063334 env[1205]: time="2025-03-17T18:42:00.063280335Z" level=warning msg="cleaning up after shim disconnected" id=804db83b3f6fcaf6ba243b77d33ed919f93199a7d948c6682c084827d7ee5d63 namespace=k8s.io
Mar 17 18:42:00.063849 env[1205]: time="2025-03-17T18:42:00.063819366Z" level=info msg="cleaning up dead shim"
Mar 17 18:42:00.088212 env[1205]: time="2025-03-17T18:42:00.088121268Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:42:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2577 runtime=io.containerd.runc.v2\n"
Mar 17 18:42:00.187532 systemd[1]: run-containerd-runc-k8s.io-804db83b3f6fcaf6ba243b77d33ed919f93199a7d948c6682c084827d7ee5d63-runc.u8szTI.mount: Deactivated successfully.
Mar 17 18:42:00.187689 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-804db83b3f6fcaf6ba243b77d33ed919f93199a7d948c6682c084827d7ee5d63-rootfs.mount: Deactivated successfully.
Mar 17 18:42:00.755524 kubelet[1951]: E0317 18:42:00.751281 1951 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 18:42:00.767223 env[1205]: time="2025-03-17T18:42:00.767134921Z" level=info msg="CreateContainer within sandbox \"6f1f4a2327bc2166b748aade454c55e505a8c62ee91b33847937788d8b6387d5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 17 18:42:00.832583 env[1205]: time="2025-03-17T18:42:00.832509892Z" level=info msg="CreateContainer within sandbox \"6f1f4a2327bc2166b748aade454c55e505a8c62ee91b33847937788d8b6387d5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5c337c2e1e185a7aad125ee6aa71b9e44c1025e7f7c014fce7dadec790ab90d2\""
Mar 17 18:42:00.833762 env[1205]: time="2025-03-17T18:42:00.833711614Z" level=info msg="StartContainer for \"5c337c2e1e185a7aad125ee6aa71b9e44c1025e7f7c014fce7dadec790ab90d2\""
Mar 17 18:42:00.843951 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3763103662.mount: Deactivated successfully.
Mar 17 18:42:00.878055 systemd[1]: Started cri-containerd-5c337c2e1e185a7aad125ee6aa71b9e44c1025e7f7c014fce7dadec790ab90d2.scope.
Mar 17 18:42:00.975984 env[1205]: time="2025-03-17T18:42:00.975898887Z" level=info msg="StartContainer for \"5c337c2e1e185a7aad125ee6aa71b9e44c1025e7f7c014fce7dadec790ab90d2\" returns successfully"
Mar 17 18:42:01.218887 kubelet[1951]: I0317 18:42:01.218809 1951 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Mar 17 18:42:01.306289 systemd[1]: Created slice kubepods-burstable-pod3b95f177_a2e8_4e4a_a414_ebe9f78a0873.slice.
Mar 17 18:42:01.315136 kubelet[1951]: I0317 18:42:01.315033 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b95f177-a2e8-4e4a-a414-ebe9f78a0873-config-volume\") pod \"coredns-6f6b679f8f-h5qxr\" (UID: \"3b95f177-a2e8-4e4a-a414-ebe9f78a0873\") " pod="kube-system/coredns-6f6b679f8f-h5qxr"
Mar 17 18:42:01.315387 kubelet[1951]: I0317 18:42:01.315261 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-756mn\" (UniqueName: \"kubernetes.io/projected/3b95f177-a2e8-4e4a-a414-ebe9f78a0873-kube-api-access-756mn\") pod \"coredns-6f6b679f8f-h5qxr\" (UID: \"3b95f177-a2e8-4e4a-a414-ebe9f78a0873\") " pod="kube-system/coredns-6f6b679f8f-h5qxr"
Mar 17 18:42:01.330106 systemd[1]: Created slice kubepods-burstable-pod61c5c940_c986_4a3d_b80c_317d05630d81.slice.
Mar 17 18:42:01.415936 kubelet[1951]: I0317 18:42:01.415856 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckwbn\" (UniqueName: \"kubernetes.io/projected/61c5c940-c986-4a3d-b80c-317d05630d81-kube-api-access-ckwbn\") pod \"coredns-6f6b679f8f-rbtk9\" (UID: \"61c5c940-c986-4a3d-b80c-317d05630d81\") " pod="kube-system/coredns-6f6b679f8f-rbtk9"
Mar 17 18:42:01.416193 kubelet[1951]: I0317 18:42:01.415959 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/61c5c940-c986-4a3d-b80c-317d05630d81-config-volume\") pod \"coredns-6f6b679f8f-rbtk9\" (UID: \"61c5c940-c986-4a3d-b80c-317d05630d81\") " pod="kube-system/coredns-6f6b679f8f-rbtk9"
Mar 17 18:42:01.618057 kubelet[1951]: E0317 18:42:01.617899 1951 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 18:42:01.620970 env[1205]: time="2025-03-17T18:42:01.620300576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-h5qxr,Uid:3b95f177-a2e8-4e4a-a414-ebe9f78a0873,Namespace:kube-system,Attempt:0,}"
Mar 17 18:42:01.642239 kubelet[1951]: E0317 18:42:01.641495 1951 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 18:42:01.642871 env[1205]: time="2025-03-17T18:42:01.642476946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-rbtk9,Uid:61c5c940-c986-4a3d-b80c-317d05630d81,Namespace:kube-system,Attempt:0,}"
Mar 17 18:42:01.767962 kubelet[1951]: E0317 18:42:01.765983 1951 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 18:42:01.869842 kubelet[1951]: I0317 18:42:01.869555 1951 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4xt55" podStartSLOduration=5.956439139 podStartE2EDuration="21.869510909s" podCreationTimestamp="2025-03-17 18:41:40 +0000 UTC" firstStartedPulling="2025-03-17 18:41:41.251032409 +0000 UTC m=+7.154608449" lastFinishedPulling="2025-03-17 18:41:57.164104162 +0000 UTC m=+23.067680219" observedRunningTime="2025-03-17 18:42:01.864821449 +0000 UTC m=+27.768397511" watchObservedRunningTime="2025-03-17 18:42:01.869510909 +0000 UTC m=+27.773086970"
Mar 17 18:42:02.301643 systemd[1]: run-containerd-runc-k8s.io-5c337c2e1e185a7aad125ee6aa71b9e44c1025e7f7c014fce7dadec790ab90d2-runc.HYjoBk.mount: Deactivated successfully.
Mar 17 18:42:02.770438 kubelet[1951]: E0317 18:42:02.768376 1951 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 18:42:03.771042 kubelet[1951]: E0317 18:42:03.770699 1951 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 18:42:04.687870 systemd[1]: run-containerd-runc-k8s.io-5c337c2e1e185a7aad125ee6aa71b9e44c1025e7f7c014fce7dadec790ab90d2-runc.mHYYpo.mount: Deactivated successfully. Mar 17 18:42:04.791908 systemd-networkd[1015]: cilium_host: Link UP Mar 17 18:42:04.798254 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Mar 17 18:42:04.798445 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Mar 17 18:42:04.795201 systemd-networkd[1015]: cilium_net: Link UP Mar 17 18:42:04.799226 systemd-networkd[1015]: cilium_net: Gained carrier Mar 17 18:42:04.799657 systemd-networkd[1015]: cilium_host: Gained carrier Mar 17 18:42:05.063367 systemd-networkd[1015]: cilium_host: Gained IPv6LL Mar 17 18:42:05.316780 systemd-networkd[1015]: cilium_vxlan: Link UP Mar 17 18:42:05.316792 systemd-networkd[1015]: cilium_vxlan: Gained carrier Mar 17 18:42:05.393371 systemd-networkd[1015]: cilium_net: Gained IPv6LL Mar 17 18:42:06.699193 kernel: NET: Registered PF_ALG protocol family Mar 17 18:42:06.733356 systemd-networkd[1015]: cilium_vxlan: Gained IPv6LL Mar 17 18:42:07.307781 systemd[1]: run-containerd-runc-k8s.io-5c337c2e1e185a7aad125ee6aa71b9e44c1025e7f7c014fce7dadec790ab90d2-runc.9lkuSd.mount: Deactivated successfully. 
Mar 17 18:42:09.276803 systemd-networkd[1015]: lxc_health: Link UP Mar 17 18:42:09.322635 systemd-networkd[1015]: lxc_health: Gained carrier Mar 17 18:42:09.323380 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Mar 17 18:42:09.675125 systemd[1]: run-containerd-runc-k8s.io-5c337c2e1e185a7aad125ee6aa71b9e44c1025e7f7c014fce7dadec790ab90d2-runc.cMvxXt.mount: Deactivated successfully. Mar 17 18:42:10.340535 systemd-networkd[1015]: lxc306461e03d68: Link UP Mar 17 18:42:10.380278 kernel: eth0: renamed from tmp77f05 Mar 17 18:42:10.388198 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Mar 17 18:42:10.388388 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc306461e03d68: link becomes ready Mar 17 18:42:10.388713 systemd-networkd[1015]: lxc306461e03d68: Gained carrier Mar 17 18:42:10.431612 systemd-networkd[1015]: lxc99219936e74a: Link UP Mar 17 18:42:10.450245 kernel: eth0: renamed from tmp8b390 Mar 17 18:42:10.455195 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc99219936e74a: link becomes ready Mar 17 18:42:10.455470 systemd-networkd[1015]: lxc99219936e74a: Gained carrier Mar 17 18:42:10.631753 systemd-networkd[1015]: lxc_health: Gained IPv6LL Mar 17 18:42:11.093839 kubelet[1951]: E0317 18:42:11.093633 1951 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 18:42:11.656577 systemd-networkd[1015]: lxc306461e03d68: Gained IPv6LL Mar 17 18:42:11.817775 kubelet[1951]: E0317 18:42:11.817689 1951 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 18:42:12.069350 systemd[1]: run-containerd-runc-k8s.io-5c337c2e1e185a7aad125ee6aa71b9e44c1025e7f7c014fce7dadec790ab90d2-runc.q3ipxe.mount: Deactivated successfully. 
Mar 17 18:42:12.167526 systemd-networkd[1015]: lxc99219936e74a: Gained IPv6LL Mar 17 18:42:12.821080 kubelet[1951]: E0317 18:42:12.821022 1951 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 18:42:14.446014 systemd[1]: run-containerd-runc-k8s.io-5c337c2e1e185a7aad125ee6aa71b9e44c1025e7f7c014fce7dadec790ab90d2-runc.X808Xx.mount: Deactivated successfully. Mar 17 18:42:16.304483 sudo[1311]: pam_unix(sudo:session): session closed for user root Mar 17 18:42:16.323654 sshd[1308]: pam_unix(sshd:session): session closed for user core Mar 17 18:42:16.336652 systemd[1]: sshd@4-134.199.210.114:22-139.178.68.195:55554.service: Deactivated successfully. Mar 17 18:42:16.337951 systemd[1]: session-5.scope: Deactivated successfully. Mar 17 18:42:16.338230 systemd[1]: session-5.scope: Consumed 8.234s CPU time. Mar 17 18:42:16.339857 systemd-logind[1197]: Session 5 logged out. Waiting for processes to exit. Mar 17 18:42:16.348078 systemd-logind[1197]: Removed session 5. Mar 17 18:42:19.824899 env[1205]: time="2025-03-17T18:42:19.824733220Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:42:19.825565 env[1205]: time="2025-03-17T18:42:19.824916244Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:42:19.825565 env[1205]: time="2025-03-17T18:42:19.824988073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:42:19.825565 env[1205]: time="2025-03-17T18:42:19.825315470Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8b390a083f57eedaa0705aad428c763277f98df48cf28f2690ef02f0f587a602 pid=3261 runtime=io.containerd.runc.v2 Mar 17 18:42:19.870714 systemd[1]: Started cri-containerd-8b390a083f57eedaa0705aad428c763277f98df48cf28f2690ef02f0f587a602.scope. Mar 17 18:42:19.899454 systemd[1]: run-containerd-runc-k8s.io-8b390a083f57eedaa0705aad428c763277f98df48cf28f2690ef02f0f587a602-runc.tSXHDF.mount: Deactivated successfully. Mar 17 18:42:20.005083 env[1205]: time="2025-03-17T18:42:20.005013387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-rbtk9,Uid:61c5c940-c986-4a3d-b80c-317d05630d81,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b390a083f57eedaa0705aad428c763277f98df48cf28f2690ef02f0f587a602\"" Mar 17 18:42:20.012876 kubelet[1951]: E0317 18:42:20.010080 1951 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 18:42:20.022735 env[1205]: time="2025-03-17T18:42:20.022650989Z" level=info msg="CreateContainer within sandbox \"8b390a083f57eedaa0705aad428c763277f98df48cf28f2690ef02f0f587a602\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 18:42:20.080359 env[1205]: time="2025-03-17T18:42:20.080089184Z" level=info msg="CreateContainer within sandbox \"8b390a083f57eedaa0705aad428c763277f98df48cf28f2690ef02f0f587a602\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"04836bbfa63427967dfaec457ab244d8c715d717611a92d83ed4fe8a22fa5010\"" Mar 17 18:42:20.090004 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3205281824.mount: Deactivated successfully.
Mar 17 18:42:20.099109 env[1205]: time="2025-03-17T18:42:20.083174577Z" level=info msg="StartContainer for \"04836bbfa63427967dfaec457ab244d8c715d717611a92d83ed4fe8a22fa5010\"" Mar 17 18:42:20.176975 systemd[1]: Started cri-containerd-04836bbfa63427967dfaec457ab244d8c715d717611a92d83ed4fe8a22fa5010.scope. Mar 17 18:42:20.267510 env[1205]: time="2025-03-17T18:42:20.267415802Z" level=info msg="StartContainer for \"04836bbfa63427967dfaec457ab244d8c715d717611a92d83ed4fe8a22fa5010\" returns successfully" Mar 17 18:42:20.350781 env[1205]: time="2025-03-17T18:42:20.350331403Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:42:20.350781 env[1205]: time="2025-03-17T18:42:20.350381176Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:42:20.350781 env[1205]: time="2025-03-17T18:42:20.350403827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:42:20.351400 env[1205]: time="2025-03-17T18:42:20.350738485Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/77f052379e3f96afbf101ea99507be70ba462b2ede233be9928649d64d5b21ab pid=3339 runtime=io.containerd.runc.v2 Mar 17 18:42:20.384324 systemd[1]: Started cri-containerd-77f052379e3f96afbf101ea99507be70ba462b2ede233be9928649d64d5b21ab.scope. 
Mar 17 18:42:20.505113 env[1205]: time="2025-03-17T18:42:20.505038223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-h5qxr,Uid:3b95f177-a2e8-4e4a-a414-ebe9f78a0873,Namespace:kube-system,Attempt:0,} returns sandbox id \"77f052379e3f96afbf101ea99507be70ba462b2ede233be9928649d64d5b21ab\"" Mar 17 18:42:20.507591 kubelet[1951]: E0317 18:42:20.507186 1951 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 18:42:20.519460 env[1205]: time="2025-03-17T18:42:20.519402182Z" level=info msg="CreateContainer within sandbox \"77f052379e3f96afbf101ea99507be70ba462b2ede233be9928649d64d5b21ab\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 18:42:20.541072 env[1205]: time="2025-03-17T18:42:20.540986352Z" level=info msg="CreateContainer within sandbox \"77f052379e3f96afbf101ea99507be70ba462b2ede233be9928649d64d5b21ab\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2c82ed8708344316094fdcdd7dceeff33f50f6729237c782ea42c0db49c7c006\"" Mar 17 18:42:20.545993 env[1205]: time="2025-03-17T18:42:20.543539155Z" level=info msg="StartContainer for \"2c82ed8708344316094fdcdd7dceeff33f50f6729237c782ea42c0db49c7c006\"" Mar 17 18:42:20.599894 systemd[1]: Started cri-containerd-2c82ed8708344316094fdcdd7dceeff33f50f6729237c782ea42c0db49c7c006.scope. 
Mar 17 18:42:20.691381 env[1205]: time="2025-03-17T18:42:20.690888119Z" level=info msg="StartContainer for \"2c82ed8708344316094fdcdd7dceeff33f50f6729237c782ea42c0db49c7c006\" returns successfully" Mar 17 18:42:20.866422 kubelet[1951]: E0317 18:42:20.866255 1951 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 18:42:20.878994 kubelet[1951]: E0317 18:42:20.878941 1951 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 18:42:20.911664 kubelet[1951]: I0317 18:42:20.911575 1951 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-rbtk9" podStartSLOduration=42.911533124 podStartE2EDuration="42.911533124s" podCreationTimestamp="2025-03-17 18:41:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:42:20.909290671 +0000 UTC m=+46.812866733" watchObservedRunningTime="2025-03-17 18:42:20.911533124 +0000 UTC m=+46.815109188" Mar 17 18:42:20.969329 kubelet[1951]: I0317 18:42:20.969220 1951 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-h5qxr" podStartSLOduration=42.969172065 podStartE2EDuration="42.969172065s" podCreationTimestamp="2025-03-17 18:41:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:42:20.966279567 +0000 UTC m=+46.869855632" watchObservedRunningTime="2025-03-17 18:42:20.969172065 +0000 UTC m=+46.872748345"
Mar 17 18:42:21.884417 kubelet[1951]: E0317 18:42:21.884368 1951 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 18:42:21.889284 kubelet[1951]: E0317 18:42:21.886785 1951 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 18:42:22.888500 kubelet[1951]: E0317 18:42:22.887801 1951 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 18:42:22.889097 kubelet[1951]: E0317 18:42:22.888822 1951 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 18:42:43.472177 kubelet[1951]: E0317 18:42:43.471874 1951 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 18:42:52.476776 kubelet[1951]: E0317 18:42:52.474725 1951 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 18:42:57.471027 kubelet[1951]: E0317 18:42:57.470975 1951 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 18:43:00.474077 kubelet[1951]: E0317 18:43:00.473997 1951 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 18:43:03.475694 systemd[1]: Started sshd@5-134.199.210.114:22-139.178.68.195:39460.service.
Mar 17 18:43:03.618669 sshd[3429]: Accepted publickey for core from 139.178.68.195 port 39460 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg Mar 17 18:43:03.622043 sshd[3429]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:43:03.638836 systemd-logind[1197]: New session 6 of user core. Mar 17 18:43:03.643558 systemd[1]: Started session-6.scope. Mar 17 18:43:04.056509 sshd[3429]: pam_unix(sshd:session): session closed for user core Mar 17 18:43:04.067023 systemd-logind[1197]: Session 6 logged out. Waiting for processes to exit. Mar 17 18:43:04.067286 systemd[1]: sshd@5-134.199.210.114:22-139.178.68.195:39460.service: Deactivated successfully. Mar 17 18:43:04.068563 systemd[1]: session-6.scope: Deactivated successfully. Mar 17 18:43:04.074039 systemd-logind[1197]: Removed session 6. Mar 17 18:43:09.062473 systemd[1]: Started sshd@6-134.199.210.114:22-139.178.68.195:44342.service. Mar 17 18:43:09.143888 sshd[3442]: Accepted publickey for core from 139.178.68.195 port 44342 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg Mar 17 18:43:09.145829 sshd[3442]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:43:09.164665 systemd-logind[1197]: New session 7 of user core. Mar 17 18:43:09.165995 systemd[1]: Started session-7.scope. Mar 17 18:43:09.477183 kubelet[1951]: E0317 18:43:09.477100 1951 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 18:43:09.512075 sshd[3442]: pam_unix(sshd:session): session closed for user core Mar 17 18:43:09.524040 systemd[1]: sshd@6-134.199.210.114:22-139.178.68.195:44342.service: Deactivated successfully. Mar 17 18:43:09.525333 systemd[1]: session-7.scope: Deactivated successfully. Mar 17 18:43:09.527188 systemd-logind[1197]: Session 7 logged out. Waiting for processes to exit. 
Mar 17 18:43:09.531617 systemd-logind[1197]: Removed session 7. Mar 17 18:43:14.536930 systemd[1]: Started sshd@7-134.199.210.114:22-139.178.68.195:44344.service. Mar 17 18:43:14.651551 sshd[3456]: Accepted publickey for core from 139.178.68.195 port 44344 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg Mar 17 18:43:14.654693 sshd[3456]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:43:14.690629 systemd[1]: Started session-8.scope. Mar 17 18:43:14.692414 systemd-logind[1197]: New session 8 of user core. Mar 17 18:43:14.963774 sshd[3456]: pam_unix(sshd:session): session closed for user core Mar 17 18:43:14.976939 systemd[1]: sshd@7-134.199.210.114:22-139.178.68.195:44344.service: Deactivated successfully. Mar 17 18:43:14.978835 systemd[1]: session-8.scope: Deactivated successfully. Mar 17 18:43:14.984243 systemd-logind[1197]: Session 8 logged out. Waiting for processes to exit. Mar 17 18:43:14.988911 systemd-logind[1197]: Removed session 8. Mar 17 18:43:19.976174 systemd[1]: Started sshd@8-134.199.210.114:22-139.178.68.195:35792.service. Mar 17 18:43:20.098491 sshd[3469]: Accepted publickey for core from 139.178.68.195 port 35792 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg Mar 17 18:43:20.102868 sshd[3469]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:43:20.124240 systemd[1]: Started session-9.scope. Mar 17 18:43:20.127787 systemd-logind[1197]: New session 9 of user core. Mar 17 18:43:20.412030 sshd[3469]: pam_unix(sshd:session): session closed for user core Mar 17 18:43:20.427196 systemd[1]: sshd@8-134.199.210.114:22-139.178.68.195:35792.service: Deactivated successfully. Mar 17 18:43:20.431499 systemd-logind[1197]: Session 9 logged out. Waiting for processes to exit. Mar 17 18:43:20.432524 systemd[1]: session-9.scope: Deactivated successfully. Mar 17 18:43:20.441047 systemd-logind[1197]: Removed session 9. 
Mar 17 18:43:25.421633 systemd[1]: Started sshd@9-134.199.210.114:22-139.178.68.195:35804.service. Mar 17 18:43:25.501969 sshd[3481]: Accepted publickey for core from 139.178.68.195 port 35804 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg Mar 17 18:43:25.505001 sshd[3481]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:43:25.525672 systemd[1]: Started session-10.scope. Mar 17 18:43:25.527027 systemd-logind[1197]: New session 10 of user core. Mar 17 18:43:25.759487 sshd[3481]: pam_unix(sshd:session): session closed for user core Mar 17 18:43:25.776626 systemd[1]: Started sshd@10-134.199.210.114:22-139.178.68.195:55754.service. Mar 17 18:43:25.778803 systemd[1]: sshd@9-134.199.210.114:22-139.178.68.195:35804.service: Deactivated successfully. Mar 17 18:43:25.781065 systemd[1]: session-10.scope: Deactivated successfully. Mar 17 18:43:25.785575 systemd-logind[1197]: Session 10 logged out. Waiting for processes to exit. Mar 17 18:43:25.791441 systemd-logind[1197]: Removed session 10. Mar 17 18:43:25.853833 sshd[3493]: Accepted publickey for core from 139.178.68.195 port 55754 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg Mar 17 18:43:25.856354 sshd[3493]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:43:25.869105 systemd-logind[1197]: New session 11 of user core. Mar 17 18:43:25.870331 systemd[1]: Started session-11.scope. Mar 17 18:43:26.243877 sshd[3493]: pam_unix(sshd:session): session closed for user core Mar 17 18:43:26.259443 systemd[1]: Started sshd@11-134.199.210.114:22-139.178.68.195:55770.service. Mar 17 18:43:26.262213 systemd[1]: sshd@10-134.199.210.114:22-139.178.68.195:55754.service: Deactivated successfully. Mar 17 18:43:26.267008 systemd[1]: session-11.scope: Deactivated successfully. Mar 17 18:43:26.273510 systemd-logind[1197]: Session 11 logged out. Waiting for processes to exit. Mar 17 18:43:26.276454 systemd-logind[1197]: Removed session 11. 
Mar 17 18:43:26.340064 sshd[3503]: Accepted publickey for core from 139.178.68.195 port 55770 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg Mar 17 18:43:26.344857 sshd[3503]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:43:26.359039 systemd-logind[1197]: New session 12 of user core. Mar 17 18:43:26.361564 systemd[1]: Started session-12.scope. Mar 17 18:43:26.575657 sshd[3503]: pam_unix(sshd:session): session closed for user core Mar 17 18:43:26.581238 systemd-logind[1197]: Session 12 logged out. Waiting for processes to exit. Mar 17 18:43:26.584332 systemd[1]: sshd@11-134.199.210.114:22-139.178.68.195:55770.service: Deactivated successfully. Mar 17 18:43:26.585468 systemd[1]: session-12.scope: Deactivated successfully. Mar 17 18:43:26.587854 systemd-logind[1197]: Removed session 12. Mar 17 18:43:31.471819 kubelet[1951]: E0317 18:43:31.471757 1951 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 18:43:31.586013 systemd[1]: Started sshd@12-134.199.210.114:22-139.178.68.195:55782.service. Mar 17 18:43:31.671864 sshd[3516]: Accepted publickey for core from 139.178.68.195 port 55782 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg Mar 17 18:43:31.694659 sshd[3516]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:43:31.711229 systemd[1]: Started session-13.scope. Mar 17 18:43:31.712760 systemd-logind[1197]: New session 13 of user core. Mar 17 18:43:31.969349 sshd[3516]: pam_unix(sshd:session): session closed for user core Mar 17 18:43:31.975628 systemd[1]: sshd@12-134.199.210.114:22-139.178.68.195:55782.service: Deactivated successfully. Mar 17 18:43:31.977106 systemd[1]: session-13.scope: Deactivated successfully. Mar 17 18:43:31.979547 systemd-logind[1197]: Session 13 logged out. Waiting for processes to exit. 
Mar 17 18:43:31.981775 systemd-logind[1197]: Removed session 13. Mar 17 18:43:36.976864 systemd[1]: Started sshd@13-134.199.210.114:22-139.178.68.195:57262.service. Mar 17 18:43:37.065873 sshd[3530]: Accepted publickey for core from 139.178.68.195 port 57262 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg Mar 17 18:43:37.078480 sshd[3530]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:43:37.095482 systemd-logind[1197]: New session 14 of user core. Mar 17 18:43:37.097229 systemd[1]: Started session-14.scope. Mar 17 18:43:37.434625 sshd[3530]: pam_unix(sshd:session): session closed for user core Mar 17 18:43:37.441678 systemd[1]: sshd@13-134.199.210.114:22-139.178.68.195:57262.service: Deactivated successfully. Mar 17 18:43:37.443032 systemd[1]: session-14.scope: Deactivated successfully. Mar 17 18:43:37.446024 systemd-logind[1197]: Session 14 logged out. Waiting for processes to exit. Mar 17 18:43:37.449283 systemd-logind[1197]: Removed session 14. Mar 17 18:43:37.471601 kubelet[1951]: E0317 18:43:37.471543 1951 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 18:43:40.473323 kubelet[1951]: E0317 18:43:40.473254 1951 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 18:43:42.449641 systemd[1]: Started sshd@14-134.199.210.114:22-139.178.68.195:57274.service. Mar 17 18:43:42.522314 sshd[3544]: Accepted publickey for core from 139.178.68.195 port 57274 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg Mar 17 18:43:42.526938 sshd[3544]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:43:42.542833 systemd-logind[1197]: New session 15 of user core. 
Mar 17 18:43:42.547808 systemd[1]: Started session-15.scope. Mar 17 18:43:42.784124 sshd[3544]: pam_unix(sshd:session): session closed for user core Mar 17 18:43:42.792773 systemd-logind[1197]: Session 15 logged out. Waiting for processes to exit. Mar 17 18:43:42.793235 systemd[1]: sshd@14-134.199.210.114:22-139.178.68.195:57274.service: Deactivated successfully. Mar 17 18:43:42.795041 systemd[1]: session-15.scope: Deactivated successfully. Mar 17 18:43:42.797973 systemd-logind[1197]: Removed session 15. Mar 17 18:43:47.793080 systemd[1]: Started sshd@15-134.199.210.114:22-139.178.68.195:45600.service. Mar 17 18:43:47.891326 sshd[3556]: Accepted publickey for core from 139.178.68.195 port 45600 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg Mar 17 18:43:47.896939 sshd[3556]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:43:47.910985 systemd[1]: Started session-16.scope. Mar 17 18:43:47.912316 systemd-logind[1197]: New session 16 of user core. Mar 17 18:43:48.185370 sshd[3556]: pam_unix(sshd:session): session closed for user core Mar 17 18:43:48.197457 systemd[1]: sshd@15-134.199.210.114:22-139.178.68.195:45600.service: Deactivated successfully. Mar 17 18:43:48.198831 systemd[1]: session-16.scope: Deactivated successfully. Mar 17 18:43:48.200639 systemd-logind[1197]: Session 16 logged out. Waiting for processes to exit. Mar 17 18:43:48.204075 systemd[1]: Started sshd@16-134.199.210.114:22-139.178.68.195:45608.service. Mar 17 18:43:48.207868 systemd-logind[1197]: Removed session 16. Mar 17 18:43:48.275016 sshd[3568]: Accepted publickey for core from 139.178.68.195 port 45608 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg Mar 17 18:43:48.278766 sshd[3568]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:43:48.289736 systemd-logind[1197]: New session 17 of user core. Mar 17 18:43:48.290704 systemd[1]: Started session-17.scope. 
Mar 17 18:43:49.052242 sshd[3568]: pam_unix(sshd:session): session closed for user core Mar 17 18:43:49.063698 systemd[1]: sshd@16-134.199.210.114:22-139.178.68.195:45608.service: Deactivated successfully. Mar 17 18:43:49.065670 systemd[1]: session-17.scope: Deactivated successfully. Mar 17 18:43:49.069446 systemd-logind[1197]: Session 17 logged out. Waiting for processes to exit. Mar 17 18:43:49.075352 systemd[1]: Started sshd@17-134.199.210.114:22-139.178.68.195:45610.service. Mar 17 18:43:49.084447 systemd-logind[1197]: Removed session 17. Mar 17 18:43:49.193843 sshd[3578]: Accepted publickey for core from 139.178.68.195 port 45610 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg Mar 17 18:43:49.196542 sshd[3578]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:43:49.206022 systemd-logind[1197]: New session 18 of user core. Mar 17 18:43:49.207527 systemd[1]: Started session-18.scope. Mar 17 18:43:52.324057 sshd[3578]: pam_unix(sshd:session): session closed for user core Mar 17 18:43:52.332853 systemd[1]: Started sshd@18-134.199.210.114:22-139.178.68.195:45618.service. Mar 17 18:43:52.339404 systemd[1]: sshd@17-134.199.210.114:22-139.178.68.195:45610.service: Deactivated successfully. Mar 17 18:43:52.340777 systemd[1]: session-18.scope: Deactivated successfully. Mar 17 18:43:52.344709 systemd-logind[1197]: Session 18 logged out. Waiting for processes to exit. Mar 17 18:43:52.346603 systemd-logind[1197]: Removed session 18. Mar 17 18:43:52.412086 sshd[3595]: Accepted publickey for core from 139.178.68.195 port 45618 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg Mar 17 18:43:52.417299 sshd[3595]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:43:52.431383 systemd[1]: Started session-19.scope. Mar 17 18:43:52.432713 systemd-logind[1197]: New session 19 of user core. 
Mar 17 18:43:53.009869 sshd[3595]: pam_unix(sshd:session): session closed for user core Mar 17 18:43:53.014934 systemd[1]: sshd@18-134.199.210.114:22-139.178.68.195:45618.service: Deactivated successfully. Mar 17 18:43:53.018299 systemd[1]: session-19.scope: Deactivated successfully. Mar 17 18:43:53.029687 systemd-logind[1197]: Session 19 logged out. Waiting for processes to exit. Mar 17 18:43:53.041730 systemd[1]: Started sshd@19-134.199.210.114:22-139.178.68.195:45630.service. Mar 17 18:43:53.046169 systemd-logind[1197]: Removed session 19. Mar 17 18:43:53.138261 sshd[3606]: Accepted publickey for core from 139.178.68.195 port 45630 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg Mar 17 18:43:53.148065 sshd[3606]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:43:53.166924 systemd[1]: Started session-20.scope. Mar 17 18:43:53.170283 systemd-logind[1197]: New session 20 of user core. Mar 17 18:43:53.425709 sshd[3606]: pam_unix(sshd:session): session closed for user core Mar 17 18:43:53.431811 systemd[1]: sshd@19-134.199.210.114:22-139.178.68.195:45630.service: Deactivated successfully. Mar 17 18:43:53.434761 systemd[1]: session-20.scope: Deactivated successfully. Mar 17 18:43:53.437029 systemd-logind[1197]: Session 20 logged out. Waiting for processes to exit. Mar 17 18:43:53.439694 systemd-logind[1197]: Removed session 20. Mar 17 18:43:54.473815 kubelet[1951]: E0317 18:43:54.473757 1951 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 18:43:58.442054 systemd[1]: Started sshd@20-134.199.210.114:22-139.178.68.195:52318.service. 
Mar 17 18:43:58.513660 sshd[3618]: Accepted publickey for core from 139.178.68.195 port 52318 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg Mar 17 18:43:58.519610 sshd[3618]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:43:58.530872 systemd-logind[1197]: New session 21 of user core. Mar 17 18:43:58.531029 systemd[1]: Started session-21.scope. Mar 17 18:43:58.727576 sshd[3618]: pam_unix(sshd:session): session closed for user core Mar 17 18:43:58.735061 systemd[1]: sshd@20-134.199.210.114:22-139.178.68.195:52318.service: Deactivated successfully. Mar 17 18:43:58.736452 systemd[1]: session-21.scope: Deactivated successfully. Mar 17 18:43:58.750561 systemd-logind[1197]: Session 21 logged out. Waiting for processes to exit. Mar 17 18:43:58.760476 systemd-logind[1197]: Removed session 21. Mar 17 18:43:59.472979 kubelet[1951]: E0317 18:43:59.470935 1951 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 18:44:03.737797 systemd[1]: Started sshd@21-134.199.210.114:22-139.178.68.195:52334.service. Mar 17 18:44:03.850869 sshd[3633]: Accepted publickey for core from 139.178.68.195 port 52334 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg Mar 17 18:44:03.855343 sshd[3633]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:44:03.892892 systemd[1]: Started session-22.scope. Mar 17 18:44:03.893976 systemd-logind[1197]: New session 22 of user core. Mar 17 18:44:04.192632 sshd[3633]: pam_unix(sshd:session): session closed for user core Mar 17 18:44:04.200288 systemd[1]: sshd@21-134.199.210.114:22-139.178.68.195:52334.service: Deactivated successfully. Mar 17 18:44:04.200339 systemd-logind[1197]: Session 22 logged out. Waiting for processes to exit. Mar 17 18:44:04.201768 systemd[1]: session-22.scope: Deactivated successfully. 
Mar 17 18:44:04.204110 systemd-logind[1197]: Removed session 22. Mar 17 18:44:05.470982 kubelet[1951]: E0317 18:44:05.470921 1951 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 18:44:09.205721 systemd[1]: Started sshd@22-134.199.210.114:22-139.178.68.195:33016.service. Mar 17 18:44:09.276210 sshd[3645]: Accepted publickey for core from 139.178.68.195 port 33016 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg Mar 17 18:44:09.280008 sshd[3645]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:44:09.297291 systemd[1]: Started session-23.scope. Mar 17 18:44:09.298374 systemd-logind[1197]: New session 23 of user core. Mar 17 18:44:09.607729 sshd[3645]: pam_unix(sshd:session): session closed for user core Mar 17 18:44:09.618567 systemd[1]: sshd@22-134.199.210.114:22-139.178.68.195:33016.service: Deactivated successfully. Mar 17 18:44:09.620024 systemd[1]: session-23.scope: Deactivated successfully. Mar 17 18:44:09.626868 systemd-logind[1197]: Session 23 logged out. Waiting for processes to exit. Mar 17 18:44:09.644494 systemd-logind[1197]: Removed session 23. Mar 17 18:44:14.621000 systemd[1]: Started sshd@23-134.199.210.114:22-139.178.68.195:33022.service. Mar 17 18:44:14.736918 sshd[3659]: Accepted publickey for core from 139.178.68.195 port 33022 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg Mar 17 18:44:14.740785 sshd[3659]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:44:14.756037 systemd-logind[1197]: New session 24 of user core. Mar 17 18:44:14.757130 systemd[1]: Started session-24.scope. Mar 17 18:44:15.020823 sshd[3659]: pam_unix(sshd:session): session closed for user core Mar 17 18:44:15.027022 systemd[1]: sshd@23-134.199.210.114:22-139.178.68.195:33022.service: Deactivated successfully. 
Mar 17 18:44:15.028682 systemd[1]: session-24.scope: Deactivated successfully.
Mar 17 18:44:15.030303 systemd-logind[1197]: Session 24 logged out. Waiting for processes to exit.
Mar 17 18:44:15.033993 systemd-logind[1197]: Removed session 24.
Mar 17 18:44:15.472865 kubelet[1951]: E0317 18:44:15.472808    1951 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 18:44:20.036634 systemd[1]: Started sshd@24-134.199.210.114:22-139.178.68.195:45430.service.
Mar 17 18:44:20.123376 sshd[3671]: Accepted publickey for core from 139.178.68.195 port 45430 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg
Mar 17 18:44:20.128369 sshd[3671]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:44:20.161315 systemd[1]: Started session-25.scope.
Mar 17 18:44:20.162675 systemd-logind[1197]: New session 25 of user core.
Mar 17 18:44:20.410803 sshd[3671]: pam_unix(sshd:session): session closed for user core
Mar 17 18:44:20.416134 systemd[1]: sshd@24-134.199.210.114:22-139.178.68.195:45430.service: Deactivated successfully.
Mar 17 18:44:20.417778 systemd[1]: session-25.scope: Deactivated successfully.
Mar 17 18:44:20.419371 systemd-logind[1197]: Session 25 logged out. Waiting for processes to exit.
Mar 17 18:44:20.420908 systemd-logind[1197]: Removed session 25.
Mar 17 18:44:25.424977 systemd[1]: Started sshd@25-134.199.210.114:22-139.178.68.195:45438.service.
Mar 17 18:44:25.522217 sshd[3684]: Accepted publickey for core from 139.178.68.195 port 45438 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg
Mar 17 18:44:25.526827 sshd[3684]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:44:25.555844 systemd[1]: Started session-26.scope.
Mar 17 18:44:25.556538 systemd-logind[1197]: New session 26 of user core.
Mar 17 18:44:25.901016 sshd[3684]: pam_unix(sshd:session): session closed for user core
Mar 17 18:44:25.913300 systemd[1]: Started sshd@26-134.199.210.114:22-139.178.68.195:60178.service.
Mar 17 18:44:25.927620 systemd[1]: sshd@25-134.199.210.114:22-139.178.68.195:45438.service: Deactivated successfully.
Mar 17 18:44:25.930372 systemd[1]: session-26.scope: Deactivated successfully.
Mar 17 18:44:25.932270 systemd-logind[1197]: Session 26 logged out. Waiting for processes to exit.
Mar 17 18:44:25.935449 systemd-logind[1197]: Removed session 26.
Mar 17 18:44:26.009827 sshd[3694]: Accepted publickey for core from 139.178.68.195 port 60178 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg
Mar 17 18:44:26.012387 sshd[3694]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:44:26.024200 systemd[1]: Started session-27.scope.
Mar 17 18:44:26.024841 systemd-logind[1197]: New session 27 of user core.
Mar 17 18:44:29.051341 env[1205]: time="2025-03-17T18:44:29.051263484Z" level=info msg="StopContainer for \"a1a9415cf0d93b49d87e4c1dbf4703d21de35edb30a7df912da07d339b60db3a\" with timeout 30 (s)"
Mar 17 18:44:29.051341 env[1205]: time="2025-03-17T18:44:29.051836593Z" level=info msg="Stop container \"a1a9415cf0d93b49d87e4c1dbf4703d21de35edb30a7df912da07d339b60db3a\" with signal terminated"
Mar 17 18:44:29.083948 systemd[1]: run-containerd-runc-k8s.io-5c337c2e1e185a7aad125ee6aa71b9e44c1025e7f7c014fce7dadec790ab90d2-runc.jVyyv4.mount: Deactivated successfully.
Mar 17 18:44:29.106195 systemd[1]: cri-containerd-a1a9415cf0d93b49d87e4c1dbf4703d21de35edb30a7df912da07d339b60db3a.scope: Deactivated successfully.
Mar 17 18:44:29.182905 env[1205]: time="2025-03-17T18:44:29.182530413Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 17 18:44:29.194100 env[1205]: time="2025-03-17T18:44:29.194022024Z" level=info msg="StopContainer for \"5c337c2e1e185a7aad125ee6aa71b9e44c1025e7f7c014fce7dadec790ab90d2\" with timeout 2 (s)"
Mar 17 18:44:29.194629 env[1205]: time="2025-03-17T18:44:29.194536536Z" level=info msg="Stop container \"5c337c2e1e185a7aad125ee6aa71b9e44c1025e7f7c014fce7dadec790ab90d2\" with signal terminated"
Mar 17 18:44:29.201761 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a1a9415cf0d93b49d87e4c1dbf4703d21de35edb30a7df912da07d339b60db3a-rootfs.mount: Deactivated successfully.
Mar 17 18:44:29.214914 systemd-networkd[1015]: lxc_health: Link DOWN
Mar 17 18:44:29.214926 systemd-networkd[1015]: lxc_health: Lost carrier
Mar 17 18:44:29.219938 env[1205]: time="2025-03-17T18:44:29.219864484Z" level=info msg="shim disconnected" id=a1a9415cf0d93b49d87e4c1dbf4703d21de35edb30a7df912da07d339b60db3a
Mar 17 18:44:29.220341 env[1205]: time="2025-03-17T18:44:29.219956760Z" level=warning msg="cleaning up after shim disconnected" id=a1a9415cf0d93b49d87e4c1dbf4703d21de35edb30a7df912da07d339b60db3a namespace=k8s.io
Mar 17 18:44:29.220341 env[1205]: time="2025-03-17T18:44:29.219973949Z" level=info msg="cleaning up dead shim"
Mar 17 18:44:29.257431 systemd[1]: cri-containerd-5c337c2e1e185a7aad125ee6aa71b9e44c1025e7f7c014fce7dadec790ab90d2.scope: Deactivated successfully.
Mar 17 18:44:29.257955 systemd[1]: cri-containerd-5c337c2e1e185a7aad125ee6aa71b9e44c1025e7f7c014fce7dadec790ab90d2.scope: Consumed 14.111s CPU time.
Mar 17 18:44:29.275458 env[1205]: time="2025-03-17T18:44:29.275386885Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:44:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3753 runtime=io.containerd.runc.v2\n"
Mar 17 18:44:29.279477 env[1205]: time="2025-03-17T18:44:29.279403714Z" level=info msg="StopContainer for \"a1a9415cf0d93b49d87e4c1dbf4703d21de35edb30a7df912da07d339b60db3a\" returns successfully"
Mar 17 18:44:29.280923 env[1205]: time="2025-03-17T18:44:29.280859148Z" level=info msg="StopPodSandbox for \"92867cd1c69d2b9669797580d2a16b68a8f3675f8ef6f3572b9b3ee18e418181\""
Mar 17 18:44:29.281750 env[1205]: time="2025-03-17T18:44:29.281687721Z" level=info msg="Container to stop \"a1a9415cf0d93b49d87e4c1dbf4703d21de35edb30a7df912da07d339b60db3a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:44:29.286034 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-92867cd1c69d2b9669797580d2a16b68a8f3675f8ef6f3572b9b3ee18e418181-shm.mount: Deactivated successfully.
Mar 17 18:44:29.306910 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5c337c2e1e185a7aad125ee6aa71b9e44c1025e7f7c014fce7dadec790ab90d2-rootfs.mount: Deactivated successfully.
Mar 17 18:44:29.316912 systemd[1]: cri-containerd-92867cd1c69d2b9669797580d2a16b68a8f3675f8ef6f3572b9b3ee18e418181.scope: Deactivated successfully.
Mar 17 18:44:29.338741 env[1205]: time="2025-03-17T18:44:29.338656460Z" level=info msg="shim disconnected" id=5c337c2e1e185a7aad125ee6aa71b9e44c1025e7f7c014fce7dadec790ab90d2
Mar 17 18:44:29.339199 env[1205]: time="2025-03-17T18:44:29.339115758Z" level=warning msg="cleaning up after shim disconnected" id=5c337c2e1e185a7aad125ee6aa71b9e44c1025e7f7c014fce7dadec790ab90d2 namespace=k8s.io
Mar 17 18:44:29.339395 env[1205]: time="2025-03-17T18:44:29.339365303Z" level=info msg="cleaning up dead shim"
Mar 17 18:44:29.378795 env[1205]: time="2025-03-17T18:44:29.378733217Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:44:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3786 runtime=io.containerd.runc.v2\n"
Mar 17 18:44:29.384426 env[1205]: time="2025-03-17T18:44:29.384352561Z" level=info msg="StopContainer for \"5c337c2e1e185a7aad125ee6aa71b9e44c1025e7f7c014fce7dadec790ab90d2\" returns successfully"
Mar 17 18:44:29.385592 env[1205]: time="2025-03-17T18:44:29.385541983Z" level=info msg="StopPodSandbox for \"6f1f4a2327bc2166b748aade454c55e505a8c62ee91b33847937788d8b6387d5\""
Mar 17 18:44:29.386080 env[1205]: time="2025-03-17T18:44:29.386036261Z" level=info msg="Container to stop \"6486801a5b7a81b7acb069f6f8e81c4d2872dba9ba2044b58109618b7092b562\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:44:29.386467 env[1205]: time="2025-03-17T18:44:29.386425222Z" level=info msg="Container to stop \"f370d50b5adc3036ef0fd1e016f4622c7666261b77eee1b4709874d6a3fe7fbe\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:44:29.386640 env[1205]: time="2025-03-17T18:44:29.386609739Z" level=info msg="Container to stop \"804db83b3f6fcaf6ba243b77d33ed919f93199a7d948c6682c084827d7ee5d63\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:44:29.387105 env[1205]: time="2025-03-17T18:44:29.387058543Z" level=info msg="Container to stop \"e8c3d7ea3c997f574a493002efe148066c724932e0aa73418474e818fb006ca7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:44:29.387291 env[1205]: time="2025-03-17T18:44:29.387259772Z" level=info msg="Container to stop \"5c337c2e1e185a7aad125ee6aa71b9e44c1025e7f7c014fce7dadec790ab90d2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:44:29.417196 env[1205]: time="2025-03-17T18:44:29.417051344Z" level=info msg="shim disconnected" id=92867cd1c69d2b9669797580d2a16b68a8f3675f8ef6f3572b9b3ee18e418181
Mar 17 18:44:29.417196 env[1205]: time="2025-03-17T18:44:29.417159317Z" level=warning msg="cleaning up after shim disconnected" id=92867cd1c69d2b9669797580d2a16b68a8f3675f8ef6f3572b9b3ee18e418181 namespace=k8s.io
Mar 17 18:44:29.417196 env[1205]: time="2025-03-17T18:44:29.417177163Z" level=info msg="cleaning up dead shim"
Mar 17 18:44:29.420798 systemd[1]: cri-containerd-6f1f4a2327bc2166b748aade454c55e505a8c62ee91b33847937788d8b6387d5.scope: Deactivated successfully.
Mar 17 18:44:29.442579 env[1205]: time="2025-03-17T18:44:29.442491979Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:44:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3818 runtime=io.containerd.runc.v2\n"
Mar 17 18:44:29.443311 env[1205]: time="2025-03-17T18:44:29.442994306Z" level=info msg="TearDown network for sandbox \"92867cd1c69d2b9669797580d2a16b68a8f3675f8ef6f3572b9b3ee18e418181\" successfully"
Mar 17 18:44:29.443311 env[1205]: time="2025-03-17T18:44:29.443047887Z" level=info msg="StopPodSandbox for \"92867cd1c69d2b9669797580d2a16b68a8f3675f8ef6f3572b9b3ee18e418181\" returns successfully"
Mar 17 18:44:29.484022 env[1205]: time="2025-03-17T18:44:29.483950466Z" level=info msg="shim disconnected" id=6f1f4a2327bc2166b748aade454c55e505a8c62ee91b33847937788d8b6387d5
Mar 17 18:44:29.484497 env[1205]: time="2025-03-17T18:44:29.484449576Z" level=warning msg="cleaning up after shim disconnected" id=6f1f4a2327bc2166b748aade454c55e505a8c62ee91b33847937788d8b6387d5 namespace=k8s.io
Mar 17 18:44:29.484852 env[1205]: time="2025-03-17T18:44:29.484750685Z" level=info msg="cleaning up dead shim"
Mar 17 18:44:29.527621 env[1205]: time="2025-03-17T18:44:29.526842291Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:44:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3843 runtime=io.containerd.runc.v2\n"
Mar 17 18:44:29.527621 env[1205]: time="2025-03-17T18:44:29.527482720Z" level=info msg="TearDown network for sandbox \"6f1f4a2327bc2166b748aade454c55e505a8c62ee91b33847937788d8b6387d5\" successfully"
Mar 17 18:44:29.527621 env[1205]: time="2025-03-17T18:44:29.527523700Z" level=info msg="StopPodSandbox for \"6f1f4a2327bc2166b748aade454c55e505a8c62ee91b33847937788d8b6387d5\" returns successfully"
Mar 17 18:44:29.529509 kubelet[1951]: I0317 18:44:29.529454    1951 scope.go:117] "RemoveContainer" containerID="a1a9415cf0d93b49d87e4c1dbf4703d21de35edb30a7df912da07d339b60db3a"
Mar 17 18:44:29.539724 env[1205]: time="2025-03-17T18:44:29.539642683Z" level=info msg="RemoveContainer for \"a1a9415cf0d93b49d87e4c1dbf4703d21de35edb30a7df912da07d339b60db3a\""
Mar 17 18:44:29.561385 env[1205]: time="2025-03-17T18:44:29.558121736Z" level=info msg="RemoveContainer for \"a1a9415cf0d93b49d87e4c1dbf4703d21de35edb30a7df912da07d339b60db3a\" returns successfully"
Mar 17 18:44:29.567761 kubelet[1951]: I0317 18:44:29.566565    1951 scope.go:117] "RemoveContainer" containerID="a1a9415cf0d93b49d87e4c1dbf4703d21de35edb30a7df912da07d339b60db3a"
Mar 17 18:44:29.568657 env[1205]: time="2025-03-17T18:44:29.568508268Z" level=error msg="ContainerStatus for \"a1a9415cf0d93b49d87e4c1dbf4703d21de35edb30a7df912da07d339b60db3a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a1a9415cf0d93b49d87e4c1dbf4703d21de35edb30a7df912da07d339b60db3a\": not found"
Mar 17 18:44:29.572866 kubelet[1951]: E0317 18:44:29.572791    1951 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a1a9415cf0d93b49d87e4c1dbf4703d21de35edb30a7df912da07d339b60db3a\": not found" containerID="a1a9415cf0d93b49d87e4c1dbf4703d21de35edb30a7df912da07d339b60db3a"
Mar 17 18:44:29.573455 kubelet[1951]: I0317 18:44:29.573208    1951 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a1a9415cf0d93b49d87e4c1dbf4703d21de35edb30a7df912da07d339b60db3a"} err="failed to get container status \"a1a9415cf0d93b49d87e4c1dbf4703d21de35edb30a7df912da07d339b60db3a\": rpc error: code = NotFound desc = an error occurred when try to find container \"a1a9415cf0d93b49d87e4c1dbf4703d21de35edb30a7df912da07d339b60db3a\": not found"
Mar 17 18:44:29.620025 kubelet[1951]: I0317 18:44:29.619949    1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9d248573-527e-479a-8562-24efc3702407-cilium-config-path\") pod \"9d248573-527e-479a-8562-24efc3702407\" (UID: \"9d248573-527e-479a-8562-24efc3702407\") "
Mar 17 18:44:29.620433 kubelet[1951]: I0317 18:44:29.620394    1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ts5v2\" (UniqueName: \"kubernetes.io/projected/9d248573-527e-479a-8562-24efc3702407-kube-api-access-ts5v2\") pod \"9d248573-527e-479a-8562-24efc3702407\" (UID: \"9d248573-527e-479a-8562-24efc3702407\") "
Mar 17 18:44:29.646582 kubelet[1951]: I0317 18:44:29.640225    1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d248573-527e-479a-8562-24efc3702407-kube-api-access-ts5v2" (OuterVolumeSpecName: "kube-api-access-ts5v2") pod "9d248573-527e-479a-8562-24efc3702407" (UID: "9d248573-527e-479a-8562-24efc3702407"). InnerVolumeSpecName "kube-api-access-ts5v2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 17 18:44:29.646582 kubelet[1951]: I0317 18:44:29.646233    1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d248573-527e-479a-8562-24efc3702407-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9d248573-527e-479a-8562-24efc3702407" (UID: "9d248573-527e-479a-8562-24efc3702407"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 17 18:44:29.676415 kubelet[1951]: E0317 18:44:29.676366    1951 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 17 18:44:29.725438 kubelet[1951]: I0317 18:44:29.721929    1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdslj\" (UniqueName: \"kubernetes.io/projected/3bc86c48-db3f-499b-b59a-11a003b1c9d1-kube-api-access-gdslj\") pod \"3bc86c48-db3f-499b-b59a-11a003b1c9d1\" (UID: \"3bc86c48-db3f-499b-b59a-11a003b1c9d1\") "
Mar 17 18:44:29.725438 kubelet[1951]: I0317 18:44:29.724273    1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3bc86c48-db3f-499b-b59a-11a003b1c9d1-hubble-tls\") pod \"3bc86c48-db3f-499b-b59a-11a003b1c9d1\" (UID: \"3bc86c48-db3f-499b-b59a-11a003b1c9d1\") "
Mar 17 18:44:29.725438 kubelet[1951]: I0317 18:44:29.724320    1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3bc86c48-db3f-499b-b59a-11a003b1c9d1-lib-modules\") pod \"3bc86c48-db3f-499b-b59a-11a003b1c9d1\" (UID: \"3bc86c48-db3f-499b-b59a-11a003b1c9d1\") "
Mar 17 18:44:29.725438 kubelet[1951]: I0317 18:44:29.724345    1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3bc86c48-db3f-499b-b59a-11a003b1c9d1-host-proc-sys-kernel\") pod \"3bc86c48-db3f-499b-b59a-11a003b1c9d1\" (UID: \"3bc86c48-db3f-499b-b59a-11a003b1c9d1\") "
Mar 17 18:44:29.725438 kubelet[1951]: I0317 18:44:29.724373    1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3bc86c48-db3f-499b-b59a-11a003b1c9d1-cilium-cgroup\") pod \"3bc86c48-db3f-499b-b59a-11a003b1c9d1\" (UID: \"3bc86c48-db3f-499b-b59a-11a003b1c9d1\") "
Mar 17 18:44:29.725438 kubelet[1951]: I0317 18:44:29.724396    1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3bc86c48-db3f-499b-b59a-11a003b1c9d1-cilium-run\") pod \"3bc86c48-db3f-499b-b59a-11a003b1c9d1\" (UID: \"3bc86c48-db3f-499b-b59a-11a003b1c9d1\") "
Mar 17 18:44:29.726402 kubelet[1951]: I0317 18:44:29.724419    1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3bc86c48-db3f-499b-b59a-11a003b1c9d1-hostproc\") pod \"3bc86c48-db3f-499b-b59a-11a003b1c9d1\" (UID: \"3bc86c48-db3f-499b-b59a-11a003b1c9d1\") "
Mar 17 18:44:29.726402 kubelet[1951]: I0317 18:44:29.724443    1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3bc86c48-db3f-499b-b59a-11a003b1c9d1-xtables-lock\") pod \"3bc86c48-db3f-499b-b59a-11a003b1c9d1\" (UID: \"3bc86c48-db3f-499b-b59a-11a003b1c9d1\") "
Mar 17 18:44:29.726402 kubelet[1951]: I0317 18:44:29.724466    1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3bc86c48-db3f-499b-b59a-11a003b1c9d1-etc-cni-netd\") pod \"3bc86c48-db3f-499b-b59a-11a003b1c9d1\" (UID: \"3bc86c48-db3f-499b-b59a-11a003b1c9d1\") "
Mar 17 18:44:29.726402 kubelet[1951]: I0317 18:44:29.724495    1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3bc86c48-db3f-499b-b59a-11a003b1c9d1-bpf-maps\") pod \"3bc86c48-db3f-499b-b59a-11a003b1c9d1\" (UID: \"3bc86c48-db3f-499b-b59a-11a003b1c9d1\") "
Mar 17 18:44:29.726402 kubelet[1951]: I0317 18:44:29.724526    1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3bc86c48-db3f-499b-b59a-11a003b1c9d1-cilium-config-path\") pod \"3bc86c48-db3f-499b-b59a-11a003b1c9d1\" (UID: \"3bc86c48-db3f-499b-b59a-11a003b1c9d1\") "
Mar 17 18:44:29.726402 kubelet[1951]: I0317 18:44:29.724549    1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3bc86c48-db3f-499b-b59a-11a003b1c9d1-cni-path\") pod \"3bc86c48-db3f-499b-b59a-11a003b1c9d1\" (UID: \"3bc86c48-db3f-499b-b59a-11a003b1c9d1\") "
Mar 17 18:44:29.726869 kubelet[1951]: I0317 18:44:29.724650    1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3bc86c48-db3f-499b-b59a-11a003b1c9d1-host-proc-sys-net\") pod \"3bc86c48-db3f-499b-b59a-11a003b1c9d1\" (UID: \"3bc86c48-db3f-499b-b59a-11a003b1c9d1\") "
Mar 17 18:44:29.726869 kubelet[1951]: I0317 18:44:29.724688    1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3bc86c48-db3f-499b-b59a-11a003b1c9d1-clustermesh-secrets\") pod \"3bc86c48-db3f-499b-b59a-11a003b1c9d1\" (UID: \"3bc86c48-db3f-499b-b59a-11a003b1c9d1\") "
Mar 17 18:44:29.728455 kubelet[1951]: I0317 18:44:29.728399    1951 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9d248573-527e-479a-8562-24efc3702407-cilium-config-path\") on node \"ci-3510.3.7-d-b51ee9817d\" DevicePath \"\""
Mar 17 18:44:29.728455 kubelet[1951]: I0317 18:44:29.728460    1951 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-ts5v2\" (UniqueName: \"kubernetes.io/projected/9d248573-527e-479a-8562-24efc3702407-kube-api-access-ts5v2\") on node \"ci-3510.3.7-d-b51ee9817d\" DevicePath \"\""
Mar 17 18:44:29.733972 kubelet[1951]: I0317 18:44:29.733865    1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bc86c48-db3f-499b-b59a-11a003b1c9d1-kube-api-access-gdslj" (OuterVolumeSpecName: "kube-api-access-gdslj") pod "3bc86c48-db3f-499b-b59a-11a003b1c9d1" (UID: "3bc86c48-db3f-499b-b59a-11a003b1c9d1"). InnerVolumeSpecName "kube-api-access-gdslj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 17 18:44:29.733972 kubelet[1951]: I0317 18:44:29.733980    1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bc86c48-db3f-499b-b59a-11a003b1c9d1-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3bc86c48-db3f-499b-b59a-11a003b1c9d1" (UID: "3bc86c48-db3f-499b-b59a-11a003b1c9d1"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:44:29.734350 kubelet[1951]: I0317 18:44:29.734013    1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bc86c48-db3f-499b-b59a-11a003b1c9d1-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3bc86c48-db3f-499b-b59a-11a003b1c9d1" (UID: "3bc86c48-db3f-499b-b59a-11a003b1c9d1"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:44:29.734350 kubelet[1951]: I0317 18:44:29.734037    1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bc86c48-db3f-499b-b59a-11a003b1c9d1-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3bc86c48-db3f-499b-b59a-11a003b1c9d1" (UID: "3bc86c48-db3f-499b-b59a-11a003b1c9d1"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:44:29.734350 kubelet[1951]: I0317 18:44:29.734059    1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bc86c48-db3f-499b-b59a-11a003b1c9d1-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3bc86c48-db3f-499b-b59a-11a003b1c9d1" (UID: "3bc86c48-db3f-499b-b59a-11a003b1c9d1"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:44:29.734350 kubelet[1951]: I0317 18:44:29.734080    1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bc86c48-db3f-499b-b59a-11a003b1c9d1-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3bc86c48-db3f-499b-b59a-11a003b1c9d1" (UID: "3bc86c48-db3f-499b-b59a-11a003b1c9d1"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:44:29.734350 kubelet[1951]: I0317 18:44:29.734100    1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bc86c48-db3f-499b-b59a-11a003b1c9d1-hostproc" (OuterVolumeSpecName: "hostproc") pod "3bc86c48-db3f-499b-b59a-11a003b1c9d1" (UID: "3bc86c48-db3f-499b-b59a-11a003b1c9d1"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:44:29.738803 kubelet[1951]: I0317 18:44:29.738676    1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bc86c48-db3f-499b-b59a-11a003b1c9d1-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3bc86c48-db3f-499b-b59a-11a003b1c9d1" (UID: "3bc86c48-db3f-499b-b59a-11a003b1c9d1"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 17 18:44:29.738803 kubelet[1951]: I0317 18:44:29.738791    1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bc86c48-db3f-499b-b59a-11a003b1c9d1-cni-path" (OuterVolumeSpecName: "cni-path") pod "3bc86c48-db3f-499b-b59a-11a003b1c9d1" (UID: "3bc86c48-db3f-499b-b59a-11a003b1c9d1"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:44:29.739070 kubelet[1951]: I0317 18:44:29.738821    1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bc86c48-db3f-499b-b59a-11a003b1c9d1-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3bc86c48-db3f-499b-b59a-11a003b1c9d1" (UID: "3bc86c48-db3f-499b-b59a-11a003b1c9d1"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:44:29.739518 kubelet[1951]: I0317 18:44:29.739469    1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bc86c48-db3f-499b-b59a-11a003b1c9d1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3bc86c48-db3f-499b-b59a-11a003b1c9d1" (UID: "3bc86c48-db3f-499b-b59a-11a003b1c9d1"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 17 18:44:29.739738 kubelet[1951]: I0317 18:44:29.739711    1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bc86c48-db3f-499b-b59a-11a003b1c9d1-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3bc86c48-db3f-499b-b59a-11a003b1c9d1" (UID: "3bc86c48-db3f-499b-b59a-11a003b1c9d1"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:44:29.739857 kubelet[1951]: I0317 18:44:29.739839    1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bc86c48-db3f-499b-b59a-11a003b1c9d1-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3bc86c48-db3f-499b-b59a-11a003b1c9d1" (UID: "3bc86c48-db3f-499b-b59a-11a003b1c9d1"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:44:29.745026 kubelet[1951]: I0317 18:44:29.744946    1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bc86c48-db3f-499b-b59a-11a003b1c9d1-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3bc86c48-db3f-499b-b59a-11a003b1c9d1" (UID: "3bc86c48-db3f-499b-b59a-11a003b1c9d1"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 17 18:44:29.819759 systemd[1]: Removed slice kubepods-besteffort-pod9d248573_527e_479a_8562_24efc3702407.slice.
Mar 17 18:44:29.841000 kubelet[1951]: I0317 18:44:29.829199    1951 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3bc86c48-db3f-499b-b59a-11a003b1c9d1-host-proc-sys-net\") on node \"ci-3510.3.7-d-b51ee9817d\" DevicePath \"\""
Mar 17 18:44:29.841000 kubelet[1951]: I0317 18:44:29.829284    1951 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3bc86c48-db3f-499b-b59a-11a003b1c9d1-clustermesh-secrets\") on node \"ci-3510.3.7-d-b51ee9817d\" DevicePath \"\""
Mar 17 18:44:29.841000 kubelet[1951]: I0317 18:44:29.829304    1951 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-gdslj\" (UniqueName: \"kubernetes.io/projected/3bc86c48-db3f-499b-b59a-11a003b1c9d1-kube-api-access-gdslj\") on node \"ci-3510.3.7-d-b51ee9817d\" DevicePath \"\""
Mar 17 18:44:29.841000 kubelet[1951]: I0317 18:44:29.829321    1951 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3bc86c48-db3f-499b-b59a-11a003b1c9d1-hubble-tls\") on node \"ci-3510.3.7-d-b51ee9817d\" DevicePath \"\""
Mar 17 18:44:29.841000 kubelet[1951]: I0317 18:44:29.829371    1951 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3bc86c48-db3f-499b-b59a-11a003b1c9d1-lib-modules\") on node \"ci-3510.3.7-d-b51ee9817d\" DevicePath \"\""
Mar 17 18:44:29.841000 kubelet[1951]: I0317 18:44:29.829388    1951 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3bc86c48-db3f-499b-b59a-11a003b1c9d1-host-proc-sys-kernel\") on node \"ci-3510.3.7-d-b51ee9817d\" DevicePath \"\""
Mar 17 18:44:29.841000 kubelet[1951]: I0317 18:44:29.829408    1951 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3bc86c48-db3f-499b-b59a-11a003b1c9d1-cilium-cgroup\") on node \"ci-3510.3.7-d-b51ee9817d\" DevicePath \"\""
Mar 17 18:44:29.841000 kubelet[1951]: I0317 18:44:29.829454    1951 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3bc86c48-db3f-499b-b59a-11a003b1c9d1-cilium-run\") on node \"ci-3510.3.7-d-b51ee9817d\" DevicePath \"\""
Mar 17 18:44:29.842040 kubelet[1951]: I0317 18:44:29.829467    1951 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3bc86c48-db3f-499b-b59a-11a003b1c9d1-hostproc\") on node \"ci-3510.3.7-d-b51ee9817d\" DevicePath \"\""
Mar 17 18:44:29.842040 kubelet[1951]: I0317 18:44:29.829482    1951 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3bc86c48-db3f-499b-b59a-11a003b1c9d1-xtables-lock\") on node \"ci-3510.3.7-d-b51ee9817d\" DevicePath \"\""
Mar 17 18:44:29.842040 kubelet[1951]: I0317 18:44:29.829524    1951 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3bc86c48-db3f-499b-b59a-11a003b1c9d1-etc-cni-netd\") on node \"ci-3510.3.7-d-b51ee9817d\" DevicePath \"\""
Mar 17 18:44:29.842040 kubelet[1951]: I0317 18:44:29.829544    1951 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3bc86c48-db3f-499b-b59a-11a003b1c9d1-bpf-maps\") on node \"ci-3510.3.7-d-b51ee9817d\" DevicePath \"\""
Mar 17 18:44:29.842040 kubelet[1951]: I0317 18:44:29.829561    1951 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3bc86c48-db3f-499b-b59a-11a003b1c9d1-cilium-config-path\") on node \"ci-3510.3.7-d-b51ee9817d\" DevicePath \"\""
Mar 17 18:44:29.842040 kubelet[1951]: I0317 18:44:29.829602    1951 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3bc86c48-db3f-499b-b59a-11a003b1c9d1-cni-path\") on node \"ci-3510.3.7-d-b51ee9817d\" DevicePath \"\""
Mar 17 18:44:30.065787 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6f1f4a2327bc2166b748aade454c55e505a8c62ee91b33847937788d8b6387d5-rootfs.mount: Deactivated successfully.
Mar 17 18:44:30.066268 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6f1f4a2327bc2166b748aade454c55e505a8c62ee91b33847937788d8b6387d5-shm.mount: Deactivated successfully.
Mar 17 18:44:30.066523 systemd[1]: var-lib-kubelet-pods-3bc86c48\x2ddb3f\x2d499b\x2db59a\x2d11a003b1c9d1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgdslj.mount: Deactivated successfully.
Mar 17 18:44:30.066746 systemd[1]: var-lib-kubelet-pods-3bc86c48\x2ddb3f\x2d499b\x2db59a\x2d11a003b1c9d1-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Mar 17 18:44:30.066990 systemd[1]: var-lib-kubelet-pods-3bc86c48\x2ddb3f\x2d499b\x2db59a\x2d11a003b1c9d1-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Mar 17 18:44:30.067271 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-92867cd1c69d2b9669797580d2a16b68a8f3675f8ef6f3572b9b3ee18e418181-rootfs.mount: Deactivated successfully.
Mar 17 18:44:30.067526 systemd[1]: var-lib-kubelet-pods-9d248573\x2d527e\x2d479a\x2d8562\x2d24efc3702407-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dts5v2.mount: Deactivated successfully.
Mar 17 18:44:30.475873 kubelet[1951]: I0317 18:44:30.475690 1951 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d248573-527e-479a-8562-24efc3702407" path="/var/lib/kubelet/pods/9d248573-527e-479a-8562-24efc3702407/volumes"
Mar 17 18:44:30.484185 systemd[1]: Removed slice kubepods-burstable-pod3bc86c48_db3f_499b_b59a_11a003b1c9d1.slice.
Mar 17 18:44:30.484354 systemd[1]: kubepods-burstable-pod3bc86c48_db3f_499b_b59a_11a003b1c9d1.slice: Consumed 14.318s CPU time.
Mar 17 18:44:30.557175 kubelet[1951]: I0317 18:44:30.556640 1951 scope.go:117] "RemoveContainer" containerID="5c337c2e1e185a7aad125ee6aa71b9e44c1025e7f7c014fce7dadec790ab90d2"
Mar 17 18:44:30.592794 env[1205]: time="2025-03-17T18:44:30.592706792Z" level=info msg="RemoveContainer for \"5c337c2e1e185a7aad125ee6aa71b9e44c1025e7f7c014fce7dadec790ab90d2\""
Mar 17 18:44:30.601217 env[1205]: time="2025-03-17T18:44:30.600085902Z" level=info msg="RemoveContainer for \"5c337c2e1e185a7aad125ee6aa71b9e44c1025e7f7c014fce7dadec790ab90d2\" returns successfully"
Mar 17 18:44:30.602341 kubelet[1951]: I0317 18:44:30.602282 1951 scope.go:117] "RemoveContainer" containerID="804db83b3f6fcaf6ba243b77d33ed919f93199a7d948c6682c084827d7ee5d63"
Mar 17 18:44:30.606124 env[1205]: time="2025-03-17T18:44:30.606062272Z" level=info msg="RemoveContainer for \"804db83b3f6fcaf6ba243b77d33ed919f93199a7d948c6682c084827d7ee5d63\""
Mar 17 18:44:30.621371 env[1205]: time="2025-03-17T18:44:30.618825354Z" level=info msg="RemoveContainer for \"804db83b3f6fcaf6ba243b77d33ed919f93199a7d948c6682c084827d7ee5d63\" returns successfully"
Mar 17 18:44:30.621734 kubelet[1951]: I0317 18:44:30.619615 1951 scope.go:117] "RemoveContainer" containerID="f370d50b5adc3036ef0fd1e016f4622c7666261b77eee1b4709874d6a3fe7fbe"
Mar 17 18:44:30.629623 env[1205]: time="2025-03-17T18:44:30.629543563Z" level=info msg="RemoveContainer for \"f370d50b5adc3036ef0fd1e016f4622c7666261b77eee1b4709874d6a3fe7fbe\""
Mar 17 18:44:30.635969 env[1205]: time="2025-03-17T18:44:30.635890801Z" level=info msg="RemoveContainer for \"f370d50b5adc3036ef0fd1e016f4622c7666261b77eee1b4709874d6a3fe7fbe\" returns successfully"
Mar 17 18:44:30.636647 kubelet[1951]: I0317 18:44:30.636592 1951 scope.go:117] "RemoveContainer" containerID="e8c3d7ea3c997f574a493002efe148066c724932e0aa73418474e818fb006ca7"
Mar 17 18:44:30.639665 env[1205]: time="2025-03-17T18:44:30.639593633Z" level=info msg="RemoveContainer for \"e8c3d7ea3c997f574a493002efe148066c724932e0aa73418474e818fb006ca7\""
Mar 17 18:44:30.655805 env[1205]: time="2025-03-17T18:44:30.655728172Z" level=info msg="RemoveContainer for \"e8c3d7ea3c997f574a493002efe148066c724932e0aa73418474e818fb006ca7\" returns successfully"
Mar 17 18:44:30.657903 kubelet[1951]: I0317 18:44:30.657686 1951 scope.go:117] "RemoveContainer" containerID="6486801a5b7a81b7acb069f6f8e81c4d2872dba9ba2044b58109618b7092b562"
Mar 17 18:44:30.665604 env[1205]: time="2025-03-17T18:44:30.660691905Z" level=info msg="RemoveContainer for \"6486801a5b7a81b7acb069f6f8e81c4d2872dba9ba2044b58109618b7092b562\""
Mar 17 18:44:30.669116 env[1205]: time="2025-03-17T18:44:30.669044728Z" level=info msg="RemoveContainer for \"6486801a5b7a81b7acb069f6f8e81c4d2872dba9ba2044b58109618b7092b562\" returns successfully"
Mar 17 18:44:30.792875 sshd[3694]: pam_unix(sshd:session): session closed for user core
Mar 17 18:44:30.802042 systemd[1]: sshd@26-134.199.210.114:22-139.178.68.195:60178.service: Deactivated successfully.
Mar 17 18:44:30.804077 systemd[1]: session-27.scope: Deactivated successfully.
Mar 17 18:44:30.804912 systemd[1]: session-27.scope: Consumed 1.353s CPU time.
Mar 17 18:44:30.806958 systemd-logind[1197]: Session 27 logged out. Waiting for processes to exit.
Mar 17 18:44:30.812646 systemd[1]: Started sshd@27-134.199.210.114:22-139.178.68.195:60190.service.
Mar 17 18:44:30.823710 systemd-logind[1197]: Removed session 27.
Mar 17 18:44:30.929397 sshd[3863]: Accepted publickey for core from 139.178.68.195 port 60190 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg
Mar 17 18:44:30.931064 sshd[3863]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:44:30.954325 systemd[1]: Started session-28.scope.
Mar 17 18:44:30.955259 systemd-logind[1197]: New session 28 of user core.
Mar 17 18:44:32.245000 sshd[3863]: pam_unix(sshd:session): session closed for user core
Mar 17 18:44:32.254858 systemd[1]: Started sshd@28-134.199.210.114:22-139.178.68.195:60194.service.
Mar 17 18:44:32.255866 systemd[1]: sshd@27-134.199.210.114:22-139.178.68.195:60190.service: Deactivated successfully.
Mar 17 18:44:32.260804 systemd[1]: session-28.scope: Deactivated successfully.
Mar 17 18:44:32.264835 systemd-logind[1197]: Session 28 logged out. Waiting for processes to exit.
Mar 17 18:44:32.267986 systemd-logind[1197]: Removed session 28.
Mar 17 18:44:32.325029 sshd[3872]: Accepted publickey for core from 139.178.68.195 port 60194 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg
Mar 17 18:44:32.327528 sshd[3872]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:44:32.335892 systemd-logind[1197]: New session 29 of user core.
Mar 17 18:44:32.337293 systemd[1]: Started session-29.scope.
Mar 17 18:44:32.356629 kubelet[1951]: E0317 18:44:32.356569 1951 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3bc86c48-db3f-499b-b59a-11a003b1c9d1" containerName="mount-cgroup"
Mar 17 18:44:32.357369 kubelet[1951]: E0317 18:44:32.357336 1951 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3bc86c48-db3f-499b-b59a-11a003b1c9d1" containerName="apply-sysctl-overwrites"
Mar 17 18:44:32.357599 kubelet[1951]: E0317 18:44:32.357580 1951 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3bc86c48-db3f-499b-b59a-11a003b1c9d1" containerName="clean-cilium-state"
Mar 17 18:44:32.357735 kubelet[1951]: E0317 18:44:32.357716 1951 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9d248573-527e-479a-8562-24efc3702407" containerName="cilium-operator"
Mar 17 18:44:32.357840 kubelet[1951]: E0317 18:44:32.357823 1951 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3bc86c48-db3f-499b-b59a-11a003b1c9d1" containerName="mount-bpf-fs"
Mar 17 18:44:32.357929 kubelet[1951]: E0317 18:44:32.357913 1951 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3bc86c48-db3f-499b-b59a-11a003b1c9d1" containerName="cilium-agent"
Mar 17 18:44:32.359328 kubelet[1951]: I0317 18:44:32.359271 1951 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d248573-527e-479a-8562-24efc3702407" containerName="cilium-operator"
Mar 17 18:44:32.359606 kubelet[1951]: I0317 18:44:32.359579 1951 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bc86c48-db3f-499b-b59a-11a003b1c9d1" containerName="cilium-agent"
Mar 17 18:44:32.386011 kubelet[1951]: I0317 18:44:32.385951 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d7cf0f29-37ee-445c-a8b3-33708fa2ccf9-cilium-run\") pod \"cilium-tr4vt\" (UID: \"d7cf0f29-37ee-445c-a8b3-33708fa2ccf9\") " pod="kube-system/cilium-tr4vt"
Mar 17 18:44:32.386538 kubelet[1951]: I0317 18:44:32.386497 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d7cf0f29-37ee-445c-a8b3-33708fa2ccf9-hostproc\") pod \"cilium-tr4vt\" (UID: \"d7cf0f29-37ee-445c-a8b3-33708fa2ccf9\") " pod="kube-system/cilium-tr4vt"
Mar 17 18:44:32.386764 kubelet[1951]: I0317 18:44:32.386731 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d7cf0f29-37ee-445c-a8b3-33708fa2ccf9-etc-cni-netd\") pod \"cilium-tr4vt\" (UID: \"d7cf0f29-37ee-445c-a8b3-33708fa2ccf9\") " pod="kube-system/cilium-tr4vt"
Mar 17 18:44:32.388500 kubelet[1951]: I0317 18:44:32.388464 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d7cf0f29-37ee-445c-a8b3-33708fa2ccf9-clustermesh-secrets\") pod \"cilium-tr4vt\" (UID: \"d7cf0f29-37ee-445c-a8b3-33708fa2ccf9\") " pod="kube-system/cilium-tr4vt"
Mar 17 18:44:32.388678 kubelet[1951]: I0317 18:44:32.388653 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d7cf0f29-37ee-445c-a8b3-33708fa2ccf9-host-proc-sys-net\") pod \"cilium-tr4vt\" (UID: \"d7cf0f29-37ee-445c-a8b3-33708fa2ccf9\") " pod="kube-system/cilium-tr4vt"
Mar 17 18:44:32.388810 kubelet[1951]: I0317 18:44:32.388788 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d7cf0f29-37ee-445c-a8b3-33708fa2ccf9-hubble-tls\") pod \"cilium-tr4vt\" (UID: \"d7cf0f29-37ee-445c-a8b3-33708fa2ccf9\") " pod="kube-system/cilium-tr4vt"
Mar 17 18:44:32.388943 kubelet[1951]: I0317 18:44:32.388913 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d7cf0f29-37ee-445c-a8b3-33708fa2ccf9-cilium-cgroup\") pod \"cilium-tr4vt\" (UID: \"d7cf0f29-37ee-445c-a8b3-33708fa2ccf9\") " pod="kube-system/cilium-tr4vt"
Mar 17 18:44:32.389063 kubelet[1951]: I0317 18:44:32.389042 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d7cf0f29-37ee-445c-a8b3-33708fa2ccf9-host-proc-sys-kernel\") pod \"cilium-tr4vt\" (UID: \"d7cf0f29-37ee-445c-a8b3-33708fa2ccf9\") " pod="kube-system/cilium-tr4vt"
Mar 17 18:44:32.389221 kubelet[1951]: I0317 18:44:32.389198 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d7cf0f29-37ee-445c-a8b3-33708fa2ccf9-lib-modules\") pod \"cilium-tr4vt\" (UID: \"d7cf0f29-37ee-445c-a8b3-33708fa2ccf9\") " pod="kube-system/cilium-tr4vt"
Mar 17 18:44:32.389404 kubelet[1951]: I0317 18:44:32.389380 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d7cf0f29-37ee-445c-a8b3-33708fa2ccf9-cni-path\") pod \"cilium-tr4vt\" (UID: \"d7cf0f29-37ee-445c-a8b3-33708fa2ccf9\") " pod="kube-system/cilium-tr4vt"
Mar 17 18:44:32.389527 kubelet[1951]: I0317 18:44:32.389505 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d7cf0f29-37ee-445c-a8b3-33708fa2ccf9-cilium-config-path\") pod \"cilium-tr4vt\" (UID: \"d7cf0f29-37ee-445c-a8b3-33708fa2ccf9\") " pod="kube-system/cilium-tr4vt"
Mar 17 18:44:32.389653 kubelet[1951]: I0317 18:44:32.389632 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d7cf0f29-37ee-445c-a8b3-33708fa2ccf9-cilium-ipsec-secrets\") pod \"cilium-tr4vt\" (UID: \"d7cf0f29-37ee-445c-a8b3-33708fa2ccf9\") " pod="kube-system/cilium-tr4vt"
Mar 17 18:44:32.389777 kubelet[1951]: I0317 18:44:32.389753 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d7cf0f29-37ee-445c-a8b3-33708fa2ccf9-xtables-lock\") pod \"cilium-tr4vt\" (UID: \"d7cf0f29-37ee-445c-a8b3-33708fa2ccf9\") " pod="kube-system/cilium-tr4vt"
Mar 17 18:44:32.392361 kubelet[1951]: I0317 18:44:32.392282 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d7cf0f29-37ee-445c-a8b3-33708fa2ccf9-bpf-maps\") pod \"cilium-tr4vt\" (UID: \"d7cf0f29-37ee-445c-a8b3-33708fa2ccf9\") " pod="kube-system/cilium-tr4vt"
Mar 17 18:44:32.392716 kubelet[1951]: I0317 18:44:32.392685 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glljb\" (UniqueName: \"kubernetes.io/projected/d7cf0f29-37ee-445c-a8b3-33708fa2ccf9-kube-api-access-glljb\") pod \"cilium-tr4vt\" (UID: \"d7cf0f29-37ee-445c-a8b3-33708fa2ccf9\") " pod="kube-system/cilium-tr4vt"
Mar 17 18:44:32.407357 systemd[1]: Created slice kubepods-burstable-podd7cf0f29_37ee_445c_a8b3_33708fa2ccf9.slice.
Mar 17 18:44:32.482338 kubelet[1951]: I0317 18:44:32.482282 1951 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bc86c48-db3f-499b-b59a-11a003b1c9d1" path="/var/lib/kubelet/pods/3bc86c48-db3f-499b-b59a-11a003b1c9d1/volumes"
Mar 17 18:44:32.702216 sshd[3872]: pam_unix(sshd:session): session closed for user core
Mar 17 18:44:32.712437 systemd[1]: sshd@28-134.199.210.114:22-139.178.68.195:60194.service: Deactivated successfully.
Mar 17 18:44:32.714367 kubelet[1951]: E0317 18:44:32.714316 1951 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 18:44:32.716207 env[1205]: time="2025-03-17T18:44:32.715432646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tr4vt,Uid:d7cf0f29-37ee-445c-a8b3-33708fa2ccf9,Namespace:kube-system,Attempt:0,}"
Mar 17 18:44:32.719813 systemd[1]: session-29.scope: Deactivated successfully.
Mar 17 18:44:32.722291 systemd-logind[1197]: Session 29 logged out. Waiting for processes to exit.
Mar 17 18:44:32.727041 systemd[1]: Started sshd@29-134.199.210.114:22-139.178.68.195:60198.service.
Mar 17 18:44:32.735454 systemd-logind[1197]: Removed session 29.
Mar 17 18:44:32.782346 env[1205]: time="2025-03-17T18:44:32.781131783Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:44:32.782346 env[1205]: time="2025-03-17T18:44:32.781229619Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:44:32.782346 env[1205]: time="2025-03-17T18:44:32.781247353Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:44:32.782346 env[1205]: time="2025-03-17T18:44:32.781556946Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0ca18633ef6db7ade9088b040039e3ec65dfbc9c16b5eb6310daedbd7c30d19f pid=3896 runtime=io.containerd.runc.v2
Mar 17 18:44:32.811000 systemd[1]: Started cri-containerd-0ca18633ef6db7ade9088b040039e3ec65dfbc9c16b5eb6310daedbd7c30d19f.scope.
Mar 17 18:44:32.815881 sshd[3888]: Accepted publickey for core from 139.178.68.195 port 60198 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg
Mar 17 18:44:32.818699 sshd[3888]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:44:32.840122 systemd[1]: Started session-30.scope.
Mar 17 18:44:32.841581 systemd-logind[1197]: New session 30 of user core.
Mar 17 18:44:32.877910 env[1205]: time="2025-03-17T18:44:32.877831170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tr4vt,Uid:d7cf0f29-37ee-445c-a8b3-33708fa2ccf9,Namespace:kube-system,Attempt:0,} returns sandbox id \"0ca18633ef6db7ade9088b040039e3ec65dfbc9c16b5eb6310daedbd7c30d19f\""
Mar 17 18:44:32.879409 kubelet[1951]: E0317 18:44:32.879030 1951 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 18:44:32.890782 env[1205]: time="2025-03-17T18:44:32.890721037Z" level=info msg="CreateContainer within sandbox \"0ca18633ef6db7ade9088b040039e3ec65dfbc9c16b5eb6310daedbd7c30d19f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 17 18:44:32.923441 env[1205]: time="2025-03-17T18:44:32.923354609Z" level=info msg="CreateContainer within sandbox \"0ca18633ef6db7ade9088b040039e3ec65dfbc9c16b5eb6310daedbd7c30d19f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7131e1f8dbd25d9dc47a83bd80f1a88d00f795b70a47e7ca73e7aaac63cd4a8d\""
Mar 17 18:44:32.927481 env[1205]: time="2025-03-17T18:44:32.927412935Z" level=info msg="StartContainer for \"7131e1f8dbd25d9dc47a83bd80f1a88d00f795b70a47e7ca73e7aaac63cd4a8d\""
Mar 17 18:44:33.003833 systemd[1]: Started cri-containerd-7131e1f8dbd25d9dc47a83bd80f1a88d00f795b70a47e7ca73e7aaac63cd4a8d.scope.
Mar 17 18:44:33.037635 systemd[1]: cri-containerd-7131e1f8dbd25d9dc47a83bd80f1a88d00f795b70a47e7ca73e7aaac63cd4a8d.scope: Deactivated successfully.
Mar 17 18:44:33.068409 env[1205]: time="2025-03-17T18:44:33.068299561Z" level=info msg="shim disconnected" id=7131e1f8dbd25d9dc47a83bd80f1a88d00f795b70a47e7ca73e7aaac63cd4a8d
Mar 17 18:44:33.068409 env[1205]: time="2025-03-17T18:44:33.068418857Z" level=warning msg="cleaning up after shim disconnected" id=7131e1f8dbd25d9dc47a83bd80f1a88d00f795b70a47e7ca73e7aaac63cd4a8d namespace=k8s.io
Mar 17 18:44:33.068970 env[1205]: time="2025-03-17T18:44:33.068432393Z" level=info msg="cleaning up dead shim"
Mar 17 18:44:33.096209 env[1205]: time="2025-03-17T18:44:33.095982977Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:44:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3958 runtime=io.containerd.runc.v2\ntime=\"2025-03-17T18:44:33Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/7131e1f8dbd25d9dc47a83bd80f1a88d00f795b70a47e7ca73e7aaac63cd4a8d/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Mar 17 18:44:33.100199 env[1205]: time="2025-03-17T18:44:33.096488463Z" level=error msg="copy shim log" error="read /proc/self/fd/45: file already closed"
Mar 17 18:44:33.100657 env[1205]: time="2025-03-17T18:44:33.100565414Z" level=error msg="Failed to pipe stdout of container \"7131e1f8dbd25d9dc47a83bd80f1a88d00f795b70a47e7ca73e7aaac63cd4a8d\"" error="reading from a closed fifo"
Mar 17 18:44:33.100972 env[1205]: time="2025-03-17T18:44:33.100879976Z" level=error msg="Failed to pipe stderr of container \"7131e1f8dbd25d9dc47a83bd80f1a88d00f795b70a47e7ca73e7aaac63cd4a8d\"" error="reading from a closed fifo"
Mar 17 18:44:33.107244 env[1205]: time="2025-03-17T18:44:33.105826254Z" level=error msg="StartContainer for \"7131e1f8dbd25d9dc47a83bd80f1a88d00f795b70a47e7ca73e7aaac63cd4a8d\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Mar 17 18:44:33.107617 kubelet[1951]: E0317 18:44:33.106561 1951 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="7131e1f8dbd25d9dc47a83bd80f1a88d00f795b70a47e7ca73e7aaac63cd4a8d"
Mar 17 18:44:33.124244 kubelet[1951]: E0317 18:44:33.124134 1951 kuberuntime_manager.go:1272] "Unhandled Error" err=<
Mar 17 18:44:33.124244 kubelet[1951]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Mar 17 18:44:33.124244 kubelet[1951]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Mar 17 18:44:33.124244 kubelet[1951]: rm /hostbin/cilium-mount
Mar 17 18:44:33.124576 kubelet[1951]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-glljb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-tr4vt_kube-system(d7cf0f29-37ee-445c-a8b3-33708fa2ccf9): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Mar 17 18:44:33.124576 kubelet[1951]: > logger="UnhandledError"
Mar 17 18:44:33.126099 kubelet[1951]: E0317 18:44:33.126007 1951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-tr4vt" podUID="d7cf0f29-37ee-445c-a8b3-33708fa2ccf9"
Mar 17 18:44:33.243455 systemd[1]: Started sshd@30-134.199.210.114:22-185.247.137.206:40349.service.
Mar 17 18:44:33.590935 env[1205]: time="2025-03-17T18:44:33.590864282Z" level=info msg="StopPodSandbox for \"0ca18633ef6db7ade9088b040039e3ec65dfbc9c16b5eb6310daedbd7c30d19f\""
Mar 17 18:44:33.595730 env[1205]: time="2025-03-17T18:44:33.590959121Z" level=info msg="Container to stop \"7131e1f8dbd25d9dc47a83bd80f1a88d00f795b70a47e7ca73e7aaac63cd4a8d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:44:33.595375 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0ca18633ef6db7ade9088b040039e3ec65dfbc9c16b5eb6310daedbd7c30d19f-shm.mount: Deactivated successfully.
Mar 17 18:44:33.641598 systemd[1]: cri-containerd-0ca18633ef6db7ade9088b040039e3ec65dfbc9c16b5eb6310daedbd7c30d19f.scope: Deactivated successfully.
Mar 17 18:44:33.687932 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ca18633ef6db7ade9088b040039e3ec65dfbc9c16b5eb6310daedbd7c30d19f-rootfs.mount: Deactivated successfully.
Mar 17 18:44:33.696913 env[1205]: time="2025-03-17T18:44:33.696829916Z" level=info msg="shim disconnected" id=0ca18633ef6db7ade9088b040039e3ec65dfbc9c16b5eb6310daedbd7c30d19f
Mar 17 18:44:33.698129 env[1205]: time="2025-03-17T18:44:33.698063739Z" level=warning msg="cleaning up after shim disconnected" id=0ca18633ef6db7ade9088b040039e3ec65dfbc9c16b5eb6310daedbd7c30d19f namespace=k8s.io
Mar 17 18:44:33.698416 env[1205]: time="2025-03-17T18:44:33.698386774Z" level=info msg="cleaning up dead shim"
Mar 17 18:44:33.725052 env[1205]: time="2025-03-17T18:44:33.724977946Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:44:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3992 runtime=io.containerd.runc.v2\n"
Mar 17 18:44:33.726269 env[1205]: time="2025-03-17T18:44:33.726197953Z" level=info msg="TearDown network for sandbox \"0ca18633ef6db7ade9088b040039e3ec65dfbc9c16b5eb6310daedbd7c30d19f\" successfully"
Mar 17 18:44:33.726545 env[1205]: time="2025-03-17T18:44:33.726497857Z" level=info msg="StopPodSandbox for \"0ca18633ef6db7ade9088b040039e3ec65dfbc9c16b5eb6310daedbd7c30d19f\" returns successfully"
Mar 17 18:44:33.819933 kubelet[1951]: I0317 18:44:33.819861 1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d7cf0f29-37ee-445c-a8b3-33708fa2ccf9-cilium-cgroup\") pod \"d7cf0f29-37ee-445c-a8b3-33708fa2ccf9\" (UID: \"d7cf0f29-37ee-445c-a8b3-33708fa2ccf9\") "
Mar 17 18:44:33.820838 kubelet[1951]: I0317 18:44:33.820799 1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d7cf0f29-37ee-445c-a8b3-33708fa2ccf9-etc-cni-netd\") pod \"d7cf0f29-37ee-445c-a8b3-33708fa2ccf9\" (UID: \"d7cf0f29-37ee-445c-a8b3-33708fa2ccf9\") "
Mar 17 18:44:33.821007 kubelet[1951]: I0317 18:44:33.820988 1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d7cf0f29-37ee-445c-a8b3-33708fa2ccf9-cni-path\") pod \"d7cf0f29-37ee-445c-a8b3-33708fa2ccf9\" (UID: \"d7cf0f29-37ee-445c-a8b3-33708fa2ccf9\") "
Mar 17 18:44:33.821169 kubelet[1951]: I0317 18:44:33.821128 1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d7cf0f29-37ee-445c-a8b3-33708fa2ccf9-bpf-maps\") pod \"d7cf0f29-37ee-445c-a8b3-33708fa2ccf9\" (UID: \"d7cf0f29-37ee-445c-a8b3-33708fa2ccf9\") "
Mar 17 18:44:33.821325 kubelet[1951]: I0317 18:44:33.821304 1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d7cf0f29-37ee-445c-a8b3-33708fa2ccf9-hostproc\") pod \"d7cf0f29-37ee-445c-a8b3-33708fa2ccf9\" (UID: \"d7cf0f29-37ee-445c-a8b3-33708fa2ccf9\") "
Mar 17 18:44:33.821658 kubelet[1951]: I0317 18:44:33.821623 1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d7cf0f29-37ee-445c-a8b3-33708fa2ccf9-host-proc-sys-net\") pod \"d7cf0f29-37ee-445c-a8b3-33708fa2ccf9\" (UID: \"d7cf0f29-37ee-445c-a8b3-33708fa2ccf9\") "
Mar 17 18:44:33.821830 kubelet[1951]: I0317 18:44:33.821809 1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d7cf0f29-37ee-445c-a8b3-33708fa2ccf9-lib-modules\") pod \"d7cf0f29-37ee-445c-a8b3-33708fa2ccf9\" (UID: \"d7cf0f29-37ee-445c-a8b3-33708fa2ccf9\") "
Mar 17 18:44:33.822708 kubelet[1951]: I0317 18:44:33.822666 1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d7cf0f29-37ee-445c-a8b3-33708fa2ccf9-hubble-tls\") pod \"d7cf0f29-37ee-445c-a8b3-33708fa2ccf9\" (UID: \"d7cf0f29-37ee-445c-a8b3-33708fa2ccf9\") "
Mar 17 18:44:33.822906 kubelet[1951]: I0317 18:44:33.822871 1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d7cf0f29-37ee-445c-a8b3-33708fa2ccf9-cilium-config-path\") pod \"d7cf0f29-37ee-445c-a8b3-33708fa2ccf9\" (UID: \"d7cf0f29-37ee-445c-a8b3-33708fa2ccf9\") "
Mar 17 18:44:33.823070 kubelet[1951]: I0317 18:44:33.823048 1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d7cf0f29-37ee-445c-a8b3-33708fa2ccf9-xtables-lock\") pod \"d7cf0f29-37ee-445c-a8b3-33708fa2ccf9\" (UID: \"d7cf0f29-37ee-445c-a8b3-33708fa2ccf9\") "
Mar 17 18:44:33.823262 kubelet[1951]: I0317 18:44:33.823239 1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d7cf0f29-37ee-445c-a8b3-33708fa2ccf9-host-proc-sys-kernel\") pod \"d7cf0f29-37ee-445c-a8b3-33708fa2ccf9\" (UID: \"d7cf0f29-37ee-445c-a8b3-33708fa2ccf9\") "
Mar 17 18:44:33.823470 kubelet[1951]: I0317 18:44:33.823436 1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d7cf0f29-37ee-445c-a8b3-33708fa2ccf9-cilium-ipsec-secrets\") pod \"d7cf0f29-37ee-445c-a8b3-33708fa2ccf9\" (UID: \"d7cf0f29-37ee-445c-a8b3-33708fa2ccf9\") "
Mar 17 18:44:33.823624 kubelet[1951]: I0317 18:44:33.823601 1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glljb\" (UniqueName: \"kubernetes.io/projected/d7cf0f29-37ee-445c-a8b3-33708fa2ccf9-kube-api-access-glljb\") pod \"d7cf0f29-37ee-445c-a8b3-33708fa2ccf9\" (UID: \"d7cf0f29-37ee-445c-a8b3-33708fa2ccf9\") "
Mar 17 18:44:33.823797 kubelet[1951]: I0317 18:44:33.823772 1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d7cf0f29-37ee-445c-a8b3-33708fa2ccf9-cilium-run\") pod \"d7cf0f29-37ee-445c-a8b3-33708fa2ccf9\" (UID: \"d7cf0f29-37ee-445c-a8b3-33708fa2ccf9\") "
Mar 17 18:44:33.823964 kubelet[1951]: I0317 18:44:33.823942 1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d7cf0f29-37ee-445c-a8b3-33708fa2ccf9-clustermesh-secrets\") pod \"d7cf0f29-37ee-445c-a8b3-33708fa2ccf9\" (UID: \"d7cf0f29-37ee-445c-a8b3-33708fa2ccf9\") "
Mar 17 18:44:33.824913 kubelet[1951]: I0317 18:44:33.824858 1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7cf0f29-37ee-445c-a8b3-33708fa2ccf9-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d7cf0f29-37ee-445c-a8b3-33708fa2ccf9" (UID: "d7cf0f29-37ee-445c-a8b3-33708fa2ccf9"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:44:33.825208 kubelet[1951]: I0317 18:44:33.825176 1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7cf0f29-37ee-445c-a8b3-33708fa2ccf9-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d7cf0f29-37ee-445c-a8b3-33708fa2ccf9" (UID: "d7cf0f29-37ee-445c-a8b3-33708fa2ccf9"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:44:33.833195 systemd[1]: var-lib-kubelet-pods-d7cf0f29\x2d37ee\x2d445c\x2da8b3\x2d33708fa2ccf9-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Mar 17 18:44:33.842441 systemd[1]: var-lib-kubelet-pods-d7cf0f29\x2d37ee\x2d445c\x2da8b3\x2d33708fa2ccf9-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Mar 17 18:44:33.850657 kubelet[1951]: I0317 18:44:33.825446 1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7cf0f29-37ee-445c-a8b3-33708fa2ccf9-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d7cf0f29-37ee-445c-a8b3-33708fa2ccf9" (UID: "d7cf0f29-37ee-445c-a8b3-33708fa2ccf9"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:44:33.850657 kubelet[1951]: I0317 18:44:33.825471 1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7cf0f29-37ee-445c-a8b3-33708fa2ccf9-cni-path" (OuterVolumeSpecName: "cni-path") pod "d7cf0f29-37ee-445c-a8b3-33708fa2ccf9" (UID: "d7cf0f29-37ee-445c-a8b3-33708fa2ccf9"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:44:33.850657 kubelet[1951]: I0317 18:44:33.825490 1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7cf0f29-37ee-445c-a8b3-33708fa2ccf9-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d7cf0f29-37ee-445c-a8b3-33708fa2ccf9" (UID: "d7cf0f29-37ee-445c-a8b3-33708fa2ccf9"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:44:33.850657 kubelet[1951]: I0317 18:44:33.825670 1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7cf0f29-37ee-445c-a8b3-33708fa2ccf9-hostproc" (OuterVolumeSpecName: "hostproc") pod "d7cf0f29-37ee-445c-a8b3-33708fa2ccf9" (UID: "d7cf0f29-37ee-445c-a8b3-33708fa2ccf9"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:44:33.850657 kubelet[1951]: I0317 18:44:33.825690 1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7cf0f29-37ee-445c-a8b3-33708fa2ccf9-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d7cf0f29-37ee-445c-a8b3-33708fa2ccf9" (UID: "d7cf0f29-37ee-445c-a8b3-33708fa2ccf9"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:44:33.850657 kubelet[1951]: I0317 18:44:33.825739 1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7cf0f29-37ee-445c-a8b3-33708fa2ccf9-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d7cf0f29-37ee-445c-a8b3-33708fa2ccf9" (UID: "d7cf0f29-37ee-445c-a8b3-33708fa2ccf9"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:44:33.850657 kubelet[1951]: I0317 18:44:33.838412 1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7cf0f29-37ee-445c-a8b3-33708fa2ccf9-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d7cf0f29-37ee-445c-a8b3-33708fa2ccf9" (UID: "d7cf0f29-37ee-445c-a8b3-33708fa2ccf9"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:44:33.850657 kubelet[1951]: I0317 18:44:33.838582 1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7cf0f29-37ee-445c-a8b3-33708fa2ccf9-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d7cf0f29-37ee-445c-a8b3-33708fa2ccf9" (UID: "d7cf0f29-37ee-445c-a8b3-33708fa2ccf9"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 17 18:44:33.851318 kubelet[1951]: I0317 18:44:33.848346 1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7cf0f29-37ee-445c-a8b3-33708fa2ccf9-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d7cf0f29-37ee-445c-a8b3-33708fa2ccf9" (UID: "d7cf0f29-37ee-445c-a8b3-33708fa2ccf9"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:44:33.851318 kubelet[1951]: I0317 18:44:33.848956 1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7cf0f29-37ee-445c-a8b3-33708fa2ccf9-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "d7cf0f29-37ee-445c-a8b3-33708fa2ccf9" (UID: "d7cf0f29-37ee-445c-a8b3-33708fa2ccf9"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 17 18:44:33.851318 kubelet[1951]: I0317 18:44:33.850555 1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7cf0f29-37ee-445c-a8b3-33708fa2ccf9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d7cf0f29-37ee-445c-a8b3-33708fa2ccf9" (UID: "d7cf0f29-37ee-445c-a8b3-33708fa2ccf9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 17 18:44:33.853741 kubelet[1951]: I0317 18:44:33.853678 1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7cf0f29-37ee-445c-a8b3-33708fa2ccf9-kube-api-access-glljb" (OuterVolumeSpecName: "kube-api-access-glljb") pod "d7cf0f29-37ee-445c-a8b3-33708fa2ccf9" (UID: "d7cf0f29-37ee-445c-a8b3-33708fa2ccf9"). InnerVolumeSpecName "kube-api-access-glljb".
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:44:33.857119 kubelet[1951]: I0317 18:44:33.857051 1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7cf0f29-37ee-445c-a8b3-33708fa2ccf9-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d7cf0f29-37ee-445c-a8b3-33708fa2ccf9" (UID: "d7cf0f29-37ee-445c-a8b3-33708fa2ccf9"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:44:33.924848 kubelet[1951]: I0317 18:44:33.924782 1951 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d7cf0f29-37ee-445c-a8b3-33708fa2ccf9-lib-modules\") on node \"ci-3510.3.7-d-b51ee9817d\" DevicePath \"\"" Mar 17 18:44:33.924848 kubelet[1951]: I0317 18:44:33.924830 1951 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d7cf0f29-37ee-445c-a8b3-33708fa2ccf9-hubble-tls\") on node \"ci-3510.3.7-d-b51ee9817d\" DevicePath \"\"" Mar 17 18:44:33.924848 kubelet[1951]: I0317 18:44:33.924842 1951 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d7cf0f29-37ee-445c-a8b3-33708fa2ccf9-cilium-config-path\") on node \"ci-3510.3.7-d-b51ee9817d\" DevicePath \"\"" Mar 17 18:44:33.924848 kubelet[1951]: I0317 18:44:33.924855 1951 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d7cf0f29-37ee-445c-a8b3-33708fa2ccf9-xtables-lock\") on node \"ci-3510.3.7-d-b51ee9817d\" DevicePath \"\"" Mar 17 18:44:33.925316 kubelet[1951]: I0317 18:44:33.924872 1951 reconciler_common.go:288] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d7cf0f29-37ee-445c-a8b3-33708fa2ccf9-cilium-ipsec-secrets\") on node \"ci-3510.3.7-d-b51ee9817d\" DevicePath \"\"" Mar 17 18:44:33.925316 kubelet[1951]: I0317 18:44:33.924888 1951 
reconciler_common.go:288] "Volume detached for volume \"kube-api-access-glljb\" (UniqueName: \"kubernetes.io/projected/d7cf0f29-37ee-445c-a8b3-33708fa2ccf9-kube-api-access-glljb\") on node \"ci-3510.3.7-d-b51ee9817d\" DevicePath \"\"" Mar 17 18:44:33.925316 kubelet[1951]: I0317 18:44:33.924901 1951 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d7cf0f29-37ee-445c-a8b3-33708fa2ccf9-cilium-run\") on node \"ci-3510.3.7-d-b51ee9817d\" DevicePath \"\"" Mar 17 18:44:33.925316 kubelet[1951]: I0317 18:44:33.924914 1951 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d7cf0f29-37ee-445c-a8b3-33708fa2ccf9-clustermesh-secrets\") on node \"ci-3510.3.7-d-b51ee9817d\" DevicePath \"\"" Mar 17 18:44:33.925316 kubelet[1951]: I0317 18:44:33.924927 1951 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d7cf0f29-37ee-445c-a8b3-33708fa2ccf9-host-proc-sys-kernel\") on node \"ci-3510.3.7-d-b51ee9817d\" DevicePath \"\"" Mar 17 18:44:33.925316 kubelet[1951]: I0317 18:44:33.924940 1951 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d7cf0f29-37ee-445c-a8b3-33708fa2ccf9-cilium-cgroup\") on node \"ci-3510.3.7-d-b51ee9817d\" DevicePath \"\"" Mar 17 18:44:33.925316 kubelet[1951]: I0317 18:44:33.924954 1951 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d7cf0f29-37ee-445c-a8b3-33708fa2ccf9-etc-cni-netd\") on node \"ci-3510.3.7-d-b51ee9817d\" DevicePath \"\"" Mar 17 18:44:33.925316 kubelet[1951]: I0317 18:44:33.924966 1951 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d7cf0f29-37ee-445c-a8b3-33708fa2ccf9-bpf-maps\") on node \"ci-3510.3.7-d-b51ee9817d\" DevicePath \"\"" Mar 17 18:44:33.925316 kubelet[1951]: I0317 
18:44:33.924978 1951 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d7cf0f29-37ee-445c-a8b3-33708fa2ccf9-hostproc\") on node \"ci-3510.3.7-d-b51ee9817d\" DevicePath \"\"" Mar 17 18:44:33.925316 kubelet[1951]: I0317 18:44:33.924990 1951 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d7cf0f29-37ee-445c-a8b3-33708fa2ccf9-host-proc-sys-net\") on node \"ci-3510.3.7-d-b51ee9817d\" DevicePath \"\"" Mar 17 18:44:33.925316 kubelet[1951]: I0317 18:44:33.925008 1951 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d7cf0f29-37ee-445c-a8b3-33708fa2ccf9-cni-path\") on node \"ci-3510.3.7-d-b51ee9817d\" DevicePath \"\"" Mar 17 18:44:34.431619 kubelet[1951]: I0317 18:44:34.431579 1951 scope.go:117] "RemoveContainer" containerID="7131e1f8dbd25d9dc47a83bd80f1a88d00f795b70a47e7ca73e7aaac63cd4a8d" Mar 17 18:44:34.436048 env[1205]: time="2025-03-17T18:44:34.435640985Z" level=info msg="RemoveContainer for \"7131e1f8dbd25d9dc47a83bd80f1a88d00f795b70a47e7ca73e7aaac63cd4a8d\"" Mar 17 18:44:34.439986 env[1205]: time="2025-03-17T18:44:34.439906250Z" level=info msg="RemoveContainer for \"7131e1f8dbd25d9dc47a83bd80f1a88d00f795b70a47e7ca73e7aaac63cd4a8d\" returns successfully" Mar 17 18:44:34.442737 env[1205]: time="2025-03-17T18:44:34.442665173Z" level=info msg="StopPodSandbox for \"92867cd1c69d2b9669797580d2a16b68a8f3675f8ef6f3572b9b3ee18e418181\"" Mar 17 18:44:34.443315 env[1205]: time="2025-03-17T18:44:34.443232764Z" level=info msg="TearDown network for sandbox \"92867cd1c69d2b9669797580d2a16b68a8f3675f8ef6f3572b9b3ee18e418181\" successfully" Mar 17 18:44:34.443545 env[1205]: time="2025-03-17T18:44:34.443508731Z" level=info msg="StopPodSandbox for \"92867cd1c69d2b9669797580d2a16b68a8f3675f8ef6f3572b9b3ee18e418181\" returns successfully" Mar 17 18:44:34.445272 env[1205]: time="2025-03-17T18:44:34.445211265Z" 
level=info msg="RemovePodSandbox for \"92867cd1c69d2b9669797580d2a16b68a8f3675f8ef6f3572b9b3ee18e418181\"" Mar 17 18:44:34.445700 env[1205]: time="2025-03-17T18:44:34.445615489Z" level=info msg="Forcibly stopping sandbox \"92867cd1c69d2b9669797580d2a16b68a8f3675f8ef6f3572b9b3ee18e418181\"" Mar 17 18:44:34.446008 env[1205]: time="2025-03-17T18:44:34.445966914Z" level=info msg="TearDown network for sandbox \"92867cd1c69d2b9669797580d2a16b68a8f3675f8ef6f3572b9b3ee18e418181\" successfully" Mar 17 18:44:34.450613 env[1205]: time="2025-03-17T18:44:34.450480090Z" level=info msg="RemovePodSandbox \"92867cd1c69d2b9669797580d2a16b68a8f3675f8ef6f3572b9b3ee18e418181\" returns successfully" Mar 17 18:44:34.451772 env[1205]: time="2025-03-17T18:44:34.451712527Z" level=info msg="StopPodSandbox for \"6f1f4a2327bc2166b748aade454c55e505a8c62ee91b33847937788d8b6387d5\"" Mar 17 18:44:34.452327 env[1205]: time="2025-03-17T18:44:34.452228400Z" level=info msg="TearDown network for sandbox \"6f1f4a2327bc2166b748aade454c55e505a8c62ee91b33847937788d8b6387d5\" successfully" Mar 17 18:44:34.452544 env[1205]: time="2025-03-17T18:44:34.452504539Z" level=info msg="StopPodSandbox for \"6f1f4a2327bc2166b748aade454c55e505a8c62ee91b33847937788d8b6387d5\" returns successfully" Mar 17 18:44:34.453917 env[1205]: time="2025-03-17T18:44:34.453865086Z" level=info msg="RemovePodSandbox for \"6f1f4a2327bc2166b748aade454c55e505a8c62ee91b33847937788d8b6387d5\"" Mar 17 18:44:34.454374 env[1205]: time="2025-03-17T18:44:34.454301164Z" level=info msg="Forcibly stopping sandbox \"6f1f4a2327bc2166b748aade454c55e505a8c62ee91b33847937788d8b6387d5\"" Mar 17 18:44:34.454902 env[1205]: time="2025-03-17T18:44:34.454648478Z" level=info msg="TearDown network for sandbox \"6f1f4a2327bc2166b748aade454c55e505a8c62ee91b33847937788d8b6387d5\" successfully" Mar 17 18:44:34.459674 env[1205]: time="2025-03-17T18:44:34.459551420Z" level=info msg="RemovePodSandbox \"6f1f4a2327bc2166b748aade454c55e505a8c62ee91b33847937788d8b6387d5\" 
returns successfully" Mar 17 18:44:34.462037 env[1205]: time="2025-03-17T18:44:34.461980296Z" level=info msg="StopPodSandbox for \"0ca18633ef6db7ade9088b040039e3ec65dfbc9c16b5eb6310daedbd7c30d19f\"" Mar 17 18:44:34.462573 env[1205]: time="2025-03-17T18:44:34.462478393Z" level=info msg="TearDown network for sandbox \"0ca18633ef6db7ade9088b040039e3ec65dfbc9c16b5eb6310daedbd7c30d19f\" successfully" Mar 17 18:44:34.462737 env[1205]: time="2025-03-17T18:44:34.462708655Z" level=info msg="StopPodSandbox for \"0ca18633ef6db7ade9088b040039e3ec65dfbc9c16b5eb6310daedbd7c30d19f\" returns successfully" Mar 17 18:44:34.465243 env[1205]: time="2025-03-17T18:44:34.465052303Z" level=info msg="RemovePodSandbox for \"0ca18633ef6db7ade9088b040039e3ec65dfbc9c16b5eb6310daedbd7c30d19f\"" Mar 17 18:44:34.465243 env[1205]: time="2025-03-17T18:44:34.465111521Z" level=info msg="Forcibly stopping sandbox \"0ca18633ef6db7ade9088b040039e3ec65dfbc9c16b5eb6310daedbd7c30d19f\"" Mar 17 18:44:34.466965 env[1205]: time="2025-03-17T18:44:34.465280141Z" level=info msg="TearDown network for sandbox \"0ca18633ef6db7ade9088b040039e3ec65dfbc9c16b5eb6310daedbd7c30d19f\" successfully" Mar 17 18:44:34.469627 env[1205]: time="2025-03-17T18:44:34.469549415Z" level=info msg="RemovePodSandbox \"0ca18633ef6db7ade9088b040039e3ec65dfbc9c16b5eb6310daedbd7c30d19f\" returns successfully" Mar 17 18:44:34.481646 systemd[1]: Removed slice kubepods-burstable-podd7cf0f29_37ee_445c_a8b3_33708fa2ccf9.slice. Mar 17 18:44:34.502380 systemd[1]: var-lib-kubelet-pods-d7cf0f29\x2d37ee\x2d445c\x2da8b3\x2d33708fa2ccf9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dglljb.mount: Deactivated successfully. Mar 17 18:44:34.502776 systemd[1]: var-lib-kubelet-pods-d7cf0f29\x2d37ee\x2d445c\x2da8b3\x2d33708fa2ccf9-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Mar 17 18:44:34.677943 kubelet[1951]: E0317 18:44:34.677888 1951 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:44:34.778449 kubelet[1951]: E0317 18:44:34.778278 1951 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d7cf0f29-37ee-445c-a8b3-33708fa2ccf9" containerName="mount-cgroup" Mar 17 18:44:34.778788 kubelet[1951]: I0317 18:44:34.778759 1951 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7cf0f29-37ee-445c-a8b3-33708fa2ccf9" containerName="mount-cgroup" Mar 17 18:44:34.790610 systemd[1]: Created slice kubepods-burstable-pod8710ddbf_961f_4891_8360_c3b5dd8b1cbd.slice. Mar 17 18:44:34.832691 kubelet[1951]: I0317 18:44:34.832623 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8710ddbf-961f-4891-8360-c3b5dd8b1cbd-hostproc\") pod \"cilium-cbcq4\" (UID: \"8710ddbf-961f-4891-8360-c3b5dd8b1cbd\") " pod="kube-system/cilium-cbcq4" Mar 17 18:44:34.833553 kubelet[1951]: I0317 18:44:34.833505 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8710ddbf-961f-4891-8360-c3b5dd8b1cbd-clustermesh-secrets\") pod \"cilium-cbcq4\" (UID: \"8710ddbf-961f-4891-8360-c3b5dd8b1cbd\") " pod="kube-system/cilium-cbcq4" Mar 17 18:44:34.833846 kubelet[1951]: I0317 18:44:34.833816 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8710ddbf-961f-4891-8360-c3b5dd8b1cbd-host-proc-sys-net\") pod \"cilium-cbcq4\" (UID: \"8710ddbf-961f-4891-8360-c3b5dd8b1cbd\") " pod="kube-system/cilium-cbcq4" Mar 17 18:44:34.834081 kubelet[1951]: I0317 18:44:34.834056 1951 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8710ddbf-961f-4891-8360-c3b5dd8b1cbd-bpf-maps\") pod \"cilium-cbcq4\" (UID: \"8710ddbf-961f-4891-8360-c3b5dd8b1cbd\") " pod="kube-system/cilium-cbcq4" Mar 17 18:44:34.834282 kubelet[1951]: I0317 18:44:34.834257 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8710ddbf-961f-4891-8360-c3b5dd8b1cbd-cilium-run\") pod \"cilium-cbcq4\" (UID: \"8710ddbf-961f-4891-8360-c3b5dd8b1cbd\") " pod="kube-system/cilium-cbcq4" Mar 17 18:44:34.834467 kubelet[1951]: I0317 18:44:34.834443 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8710ddbf-961f-4891-8360-c3b5dd8b1cbd-host-proc-sys-kernel\") pod \"cilium-cbcq4\" (UID: \"8710ddbf-961f-4891-8360-c3b5dd8b1cbd\") " pod="kube-system/cilium-cbcq4" Mar 17 18:44:34.834617 kubelet[1951]: I0317 18:44:34.834596 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8710ddbf-961f-4891-8360-c3b5dd8b1cbd-cilium-cgroup\") pod \"cilium-cbcq4\" (UID: \"8710ddbf-961f-4891-8360-c3b5dd8b1cbd\") " pod="kube-system/cilium-cbcq4" Mar 17 18:44:34.834754 kubelet[1951]: I0317 18:44:34.834732 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8710ddbf-961f-4891-8360-c3b5dd8b1cbd-cilium-ipsec-secrets\") pod \"cilium-cbcq4\" (UID: \"8710ddbf-961f-4891-8360-c3b5dd8b1cbd\") " pod="kube-system/cilium-cbcq4" Mar 17 18:44:34.834902 kubelet[1951]: I0317 18:44:34.834879 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wscss\" (UniqueName: 
\"kubernetes.io/projected/8710ddbf-961f-4891-8360-c3b5dd8b1cbd-kube-api-access-wscss\") pod \"cilium-cbcq4\" (UID: \"8710ddbf-961f-4891-8360-c3b5dd8b1cbd\") " pod="kube-system/cilium-cbcq4" Mar 17 18:44:34.835044 kubelet[1951]: I0317 18:44:34.835016 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8710ddbf-961f-4891-8360-c3b5dd8b1cbd-cilium-config-path\") pod \"cilium-cbcq4\" (UID: \"8710ddbf-961f-4891-8360-c3b5dd8b1cbd\") " pod="kube-system/cilium-cbcq4" Mar 17 18:44:34.835213 kubelet[1951]: I0317 18:44:34.835189 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8710ddbf-961f-4891-8360-c3b5dd8b1cbd-cni-path\") pod \"cilium-cbcq4\" (UID: \"8710ddbf-961f-4891-8360-c3b5dd8b1cbd\") " pod="kube-system/cilium-cbcq4" Mar 17 18:44:34.835370 kubelet[1951]: I0317 18:44:34.835348 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8710ddbf-961f-4891-8360-c3b5dd8b1cbd-lib-modules\") pod \"cilium-cbcq4\" (UID: \"8710ddbf-961f-4891-8360-c3b5dd8b1cbd\") " pod="kube-system/cilium-cbcq4" Mar 17 18:44:34.835531 kubelet[1951]: I0317 18:44:34.835509 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8710ddbf-961f-4891-8360-c3b5dd8b1cbd-etc-cni-netd\") pod \"cilium-cbcq4\" (UID: \"8710ddbf-961f-4891-8360-c3b5dd8b1cbd\") " pod="kube-system/cilium-cbcq4" Mar 17 18:44:34.835956 kubelet[1951]: I0317 18:44:34.835925 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8710ddbf-961f-4891-8360-c3b5dd8b1cbd-xtables-lock\") pod \"cilium-cbcq4\" (UID: 
\"8710ddbf-961f-4891-8360-c3b5dd8b1cbd\") " pod="kube-system/cilium-cbcq4" Mar 17 18:44:34.836169 kubelet[1951]: I0317 18:44:34.836126 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8710ddbf-961f-4891-8360-c3b5dd8b1cbd-hubble-tls\") pod \"cilium-cbcq4\" (UID: \"8710ddbf-961f-4891-8360-c3b5dd8b1cbd\") " pod="kube-system/cilium-cbcq4" Mar 17 18:44:35.100097 kubelet[1951]: E0317 18:44:35.098871 1951 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 18:44:35.104941 env[1205]: time="2025-03-17T18:44:35.104325483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cbcq4,Uid:8710ddbf-961f-4891-8360-c3b5dd8b1cbd,Namespace:kube-system,Attempt:0,}" Mar 17 18:44:35.134839 env[1205]: time="2025-03-17T18:44:35.134063337Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:44:35.134839 env[1205]: time="2025-03-17T18:44:35.134137312Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:44:35.134839 env[1205]: time="2025-03-17T18:44:35.134289129Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:44:35.139925 env[1205]: time="2025-03-17T18:44:35.139387544Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/414e0d3825786390cea0d4aa29978c1316c7b233845c19fac61af38e7ce7617b pid=4025 runtime=io.containerd.runc.v2 Mar 17 18:44:35.165172 systemd[1]: Started cri-containerd-414e0d3825786390cea0d4aa29978c1316c7b233845c19fac61af38e7ce7617b.scope. 
Mar 17 18:44:35.230491 sshd[3972]: kex_exchange_identification: Connection closed by remote host Mar 17 18:44:35.237864 sshd[3972]: Connection closed by 185.247.137.206 port 40349 Mar 17 18:44:35.236247 systemd[1]: sshd@30-134.199.210.114:22-185.247.137.206:40349.service: Deactivated successfully. Mar 17 18:44:35.255495 env[1205]: time="2025-03-17T18:44:35.255415782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cbcq4,Uid:8710ddbf-961f-4891-8360-c3b5dd8b1cbd,Namespace:kube-system,Attempt:0,} returns sandbox id \"414e0d3825786390cea0d4aa29978c1316c7b233845c19fac61af38e7ce7617b\"" Mar 17 18:44:35.258596 kubelet[1951]: E0317 18:44:35.257507 1951 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 18:44:35.270294 env[1205]: time="2025-03-17T18:44:35.270185591Z" level=info msg="CreateContainer within sandbox \"414e0d3825786390cea0d4aa29978c1316c7b233845c19fac61af38e7ce7617b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 18:44:35.294752 env[1205]: time="2025-03-17T18:44:35.294670151Z" level=info msg="CreateContainer within sandbox \"414e0d3825786390cea0d4aa29978c1316c7b233845c19fac61af38e7ce7617b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"768ee9c9946da117600026b4eae13a7a901c3869e88e80ce7f830c14b4b548d4\"" Mar 17 18:44:35.298387 env[1205]: time="2025-03-17T18:44:35.298307846Z" level=info msg="StartContainer for \"768ee9c9946da117600026b4eae13a7a901c3869e88e80ce7f830c14b4b548d4\"" Mar 17 18:44:35.328501 systemd[1]: Started cri-containerd-768ee9c9946da117600026b4eae13a7a901c3869e88e80ce7f830c14b4b548d4.scope. Mar 17 18:44:35.371958 systemd[1]: Started sshd@31-134.199.210.114:22-185.247.137.206:40453.service. 
Mar 17 18:44:35.409018 env[1205]: time="2025-03-17T18:44:35.408941212Z" level=info msg="StartContainer for \"768ee9c9946da117600026b4eae13a7a901c3869e88e80ce7f830c14b4b548d4\" returns successfully" Mar 17 18:44:35.427675 systemd[1]: cri-containerd-768ee9c9946da117600026b4eae13a7a901c3869e88e80ce7f830c14b4b548d4.scope: Deactivated successfully. Mar 17 18:44:35.478084 kubelet[1951]: E0317 18:44:35.472552 1951 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 18:44:35.486501 env[1205]: time="2025-03-17T18:44:35.486413210Z" level=info msg="shim disconnected" id=768ee9c9946da117600026b4eae13a7a901c3869e88e80ce7f830c14b4b548d4 Mar 17 18:44:35.486935 env[1205]: time="2025-03-17T18:44:35.486529220Z" level=warning msg="cleaning up after shim disconnected" id=768ee9c9946da117600026b4eae13a7a901c3869e88e80ce7f830c14b4b548d4 namespace=k8s.io Mar 17 18:44:35.486935 env[1205]: time="2025-03-17T18:44:35.486552550Z" level=info msg="cleaning up dead shim" Mar 17 18:44:35.511134 env[1205]: time="2025-03-17T18:44:35.511025514Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:44:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4115 runtime=io.containerd.runc.v2\n" Mar 17 18:44:35.616340 kubelet[1951]: E0317 18:44:35.616283 1951 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 18:44:35.642662 env[1205]: time="2025-03-17T18:44:35.642487134Z" level=info msg="CreateContainer within sandbox \"414e0d3825786390cea0d4aa29978c1316c7b233845c19fac61af38e7ce7617b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 17 18:44:35.683129 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4221456937.mount: Deactivated successfully. 
Mar 17 18:44:35.707295 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2350597710.mount: Deactivated successfully. Mar 17 18:44:35.708363 env[1205]: time="2025-03-17T18:44:35.708295724Z" level=info msg="CreateContainer within sandbox \"414e0d3825786390cea0d4aa29978c1316c7b233845c19fac61af38e7ce7617b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1ffd04d666a1806e323e1c1346c8c8abdb06d166f8e4f91d540f2af97e2b088e\"" Mar 17 18:44:35.717586 env[1205]: time="2025-03-17T18:44:35.717466467Z" level=info msg="StartContainer for \"1ffd04d666a1806e323e1c1346c8c8abdb06d166f8e4f91d540f2af97e2b088e\"" Mar 17 18:44:35.751918 systemd[1]: Started cri-containerd-1ffd04d666a1806e323e1c1346c8c8abdb06d166f8e4f91d540f2af97e2b088e.scope. Mar 17 18:44:35.810941 sshd[4088]: Connection closed by 185.247.137.206 port 40453 [preauth] Mar 17 18:44:35.809846 systemd[1]: sshd@31-134.199.210.114:22-185.247.137.206:40453.service: Deactivated successfully. Mar 17 18:44:35.848749 env[1205]: time="2025-03-17T18:44:35.848666458Z" level=info msg="StartContainer for \"1ffd04d666a1806e323e1c1346c8c8abdb06d166f8e4f91d540f2af97e2b088e\" returns successfully" Mar 17 18:44:35.866203 systemd[1]: cri-containerd-1ffd04d666a1806e323e1c1346c8c8abdb06d166f8e4f91d540f2af97e2b088e.scope: Deactivated successfully. 
Mar 17 18:44:35.913781 env[1205]: time="2025-03-17T18:44:35.913336301Z" level=info msg="shim disconnected" id=1ffd04d666a1806e323e1c1346c8c8abdb06d166f8e4f91d540f2af97e2b088e Mar 17 18:44:35.913781 env[1205]: time="2025-03-17T18:44:35.913413447Z" level=warning msg="cleaning up after shim disconnected" id=1ffd04d666a1806e323e1c1346c8c8abdb06d166f8e4f91d540f2af97e2b088e namespace=k8s.io Mar 17 18:44:35.913781 env[1205]: time="2025-03-17T18:44:35.913586823Z" level=info msg="cleaning up dead shim" Mar 17 18:44:35.933996 env[1205]: time="2025-03-17T18:44:35.933912684Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:44:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4180 runtime=io.containerd.runc.v2\n" Mar 17 18:44:36.201337 kubelet[1951]: W0317 18:44:36.198978 1951 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7cf0f29_37ee_445c_a8b3_33708fa2ccf9.slice/cri-containerd-7131e1f8dbd25d9dc47a83bd80f1a88d00f795b70a47e7ca73e7aaac63cd4a8d.scope WatchSource:0}: container "7131e1f8dbd25d9dc47a83bd80f1a88d00f795b70a47e7ca73e7aaac63cd4a8d" in namespace "k8s.io": not found Mar 17 18:44:36.483225 kubelet[1951]: I0317 18:44:36.482990 1951 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7cf0f29-37ee-445c-a8b3-33708fa2ccf9" path="/var/lib/kubelet/pods/d7cf0f29-37ee-445c-a8b3-33708fa2ccf9/volumes" Mar 17 18:44:36.626489 kubelet[1951]: E0317 18:44:36.626440 1951 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 18:44:36.631872 env[1205]: time="2025-03-17T18:44:36.631793146Z" level=info msg="CreateContainer within sandbox \"414e0d3825786390cea0d4aa29978c1316c7b233845c19fac61af38e7ce7617b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 17 18:44:36.660224 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2705564830.mount: Deactivated successfully. Mar 17 18:44:36.679518 env[1205]: time="2025-03-17T18:44:36.679411091Z" level=info msg="CreateContainer within sandbox \"414e0d3825786390cea0d4aa29978c1316c7b233845c19fac61af38e7ce7617b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9141403f775515d4abeb95839315835104d7bbc810083c18dacba1a269d3ceeb\"" Mar 17 18:44:36.680579 env[1205]: time="2025-03-17T18:44:36.680472677Z" level=info msg="StartContainer for \"9141403f775515d4abeb95839315835104d7bbc810083c18dacba1a269d3ceeb\"" Mar 17 18:44:36.760775 systemd[1]: Started cri-containerd-9141403f775515d4abeb95839315835104d7bbc810083c18dacba1a269d3ceeb.scope. Mar 17 18:44:36.842893 env[1205]: time="2025-03-17T18:44:36.842809495Z" level=info msg="StartContainer for \"9141403f775515d4abeb95839315835104d7bbc810083c18dacba1a269d3ceeb\" returns successfully" Mar 17 18:44:36.862452 systemd[1]: cri-containerd-9141403f775515d4abeb95839315835104d7bbc810083c18dacba1a269d3ceeb.scope: Deactivated successfully. 
Mar 17 18:44:36.926951 env[1205]: time="2025-03-17T18:44:36.926875860Z" level=info msg="shim disconnected" id=9141403f775515d4abeb95839315835104d7bbc810083c18dacba1a269d3ceeb
Mar 17 18:44:36.927394 env[1205]: time="2025-03-17T18:44:36.927350698Z" level=warning msg="cleaning up after shim disconnected" id=9141403f775515d4abeb95839315835104d7bbc810083c18dacba1a269d3ceeb namespace=k8s.io
Mar 17 18:44:36.927534 env[1205]: time="2025-03-17T18:44:36.927512076Z" level=info msg="cleaning up dead shim"
Mar 17 18:44:36.946896 env[1205]: time="2025-03-17T18:44:36.945268606Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:44:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4239 runtime=io.containerd.runc.v2\n"
Mar 17 18:44:37.505760 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9141403f775515d4abeb95839315835104d7bbc810083c18dacba1a269d3ceeb-rootfs.mount: Deactivated successfully.
Mar 17 18:44:37.632131 kubelet[1951]: E0317 18:44:37.632020 1951 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 18:44:37.636006 env[1205]: time="2025-03-17T18:44:37.635931411Z" level=info msg="CreateContainer within sandbox \"414e0d3825786390cea0d4aa29978c1316c7b233845c19fac61af38e7ce7617b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 17 18:44:37.681594 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount141706547.mount: Deactivated successfully.
Mar 17 18:44:37.687674 env[1205]: time="2025-03-17T18:44:37.687589116Z" level=info msg="CreateContainer within sandbox \"414e0d3825786390cea0d4aa29978c1316c7b233845c19fac61af38e7ce7617b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9f5168ec4d2790a2c25b0cc3de7efa3aafb5c612b274ea9122c41511bb1dcc3e\""
Mar 17 18:44:37.688924 env[1205]: time="2025-03-17T18:44:37.688862042Z" level=info msg="StartContainer for \"9f5168ec4d2790a2c25b0cc3de7efa3aafb5c612b274ea9122c41511bb1dcc3e\""
Mar 17 18:44:37.753526 systemd[1]: Started cri-containerd-9f5168ec4d2790a2c25b0cc3de7efa3aafb5c612b274ea9122c41511bb1dcc3e.scope.
Mar 17 18:44:37.823560 systemd[1]: cri-containerd-9f5168ec4d2790a2c25b0cc3de7efa3aafb5c612b274ea9122c41511bb1dcc3e.scope: Deactivated successfully.
Mar 17 18:44:37.828494 env[1205]: time="2025-03-17T18:44:37.827957358Z" level=info msg="StartContainer for \"9f5168ec4d2790a2c25b0cc3de7efa3aafb5c612b274ea9122c41511bb1dcc3e\" returns successfully"
Mar 17 18:44:37.868850 env[1205]: time="2025-03-17T18:44:37.868752931Z" level=info msg="shim disconnected" id=9f5168ec4d2790a2c25b0cc3de7efa3aafb5c612b274ea9122c41511bb1dcc3e
Mar 17 18:44:37.869705 env[1205]: time="2025-03-17T18:44:37.869633731Z" level=warning msg="cleaning up after shim disconnected" id=9f5168ec4d2790a2c25b0cc3de7efa3aafb5c612b274ea9122c41511bb1dcc3e namespace=k8s.io
Mar 17 18:44:37.869980 env[1205]: time="2025-03-17T18:44:37.869933224Z" level=info msg="cleaning up dead shim"
Mar 17 18:44:37.892279 env[1205]: time="2025-03-17T18:44:37.892211473Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:44:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4295 runtime=io.containerd.runc.v2\n"
Mar 17 18:44:38.411319 kubelet[1951]: I0317 18:44:38.411237 1951 setters.go:600] "Node became not ready" node="ci-3510.3.7-d-b51ee9817d" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-17T18:44:38Z","lastTransitionTime":"2025-03-17T18:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 17 18:44:38.508815 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9f5168ec4d2790a2c25b0cc3de7efa3aafb5c612b274ea9122c41511bb1dcc3e-rootfs.mount: Deactivated successfully.
Mar 17 18:44:38.651481 kubelet[1951]: E0317 18:44:38.650117 1951 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 18:44:38.672844 env[1205]: time="2025-03-17T18:44:38.672036848Z" level=info msg="CreateContainer within sandbox \"414e0d3825786390cea0d4aa29978c1316c7b233845c19fac61af38e7ce7617b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 17 18:44:38.718201 env[1205]: time="2025-03-17T18:44:38.716130921Z" level=info msg="CreateContainer within sandbox \"414e0d3825786390cea0d4aa29978c1316c7b233845c19fac61af38e7ce7617b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f24f63821436b8e13f54facb3bbf61252fb099adeb2249db6299d3a42f03fdc4\""
Mar 17 18:44:38.720291 env[1205]: time="2025-03-17T18:44:38.719481364Z" level=info msg="StartContainer for \"f24f63821436b8e13f54facb3bbf61252fb099adeb2249db6299d3a42f03fdc4\""
Mar 17 18:44:38.770445 systemd[1]: Started cri-containerd-f24f63821436b8e13f54facb3bbf61252fb099adeb2249db6299d3a42f03fdc4.scope.
Mar 17 18:44:38.831727 env[1205]: time="2025-03-17T18:44:38.830999128Z" level=info msg="StartContainer for \"f24f63821436b8e13f54facb3bbf61252fb099adeb2249db6299d3a42f03fdc4\" returns successfully"
Mar 17 18:44:39.359431 kubelet[1951]: W0317 18:44:39.352819 1951 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8710ddbf_961f_4891_8360_c3b5dd8b1cbd.slice/cri-containerd-768ee9c9946da117600026b4eae13a7a901c3869e88e80ce7f830c14b4b548d4.scope WatchSource:0}: task 768ee9c9946da117600026b4eae13a7a901c3869e88e80ce7f830c14b4b548d4 not found: not found
Mar 17 18:44:39.658215 kubelet[1951]: E0317 18:44:39.658164 1951 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 18:44:39.984429 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Mar 17 18:44:41.102538 kubelet[1951]: E0317 18:44:41.102474 1951 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 18:44:41.546750 systemd[1]: run-containerd-runc-k8s.io-f24f63821436b8e13f54facb3bbf61252fb099adeb2249db6299d3a42f03fdc4-runc.nujwKU.mount: Deactivated successfully.
Mar 17 18:44:42.486591 kubelet[1951]: W0317 18:44:42.486524 1951 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8710ddbf_961f_4891_8360_c3b5dd8b1cbd.slice/cri-containerd-1ffd04d666a1806e323e1c1346c8c8abdb06d166f8e4f91d540f2af97e2b088e.scope WatchSource:0}: task 1ffd04d666a1806e323e1c1346c8c8abdb06d166f8e4f91d540f2af97e2b088e not found: not found
Mar 17 18:44:43.882096 systemd[1]: run-containerd-runc-k8s.io-f24f63821436b8e13f54facb3bbf61252fb099adeb2249db6299d3a42f03fdc4-runc.Pi7Lhr.mount: Deactivated successfully.
Mar 17 18:44:45.018101 systemd-networkd[1015]: lxc_health: Link UP
Mar 17 18:44:45.030398 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Mar 17 18:44:45.030610 systemd-networkd[1015]: lxc_health: Gained carrier
Mar 17 18:44:45.103561 kubelet[1951]: E0317 18:44:45.103395 1951 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 18:44:45.153969 kubelet[1951]: I0317 18:44:45.153858 1951 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-cbcq4" podStartSLOduration=11.153833847 podStartE2EDuration="11.153833847s" podCreationTimestamp="2025-03-17 18:44:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:44:39.73431528 +0000 UTC m=+185.637891334" watchObservedRunningTime="2025-03-17 18:44:45.153833847 +0000 UTC m=+191.057409984"
Mar 17 18:44:45.644580 kubelet[1951]: W0317 18:44:45.644507 1951 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8710ddbf_961f_4891_8360_c3b5dd8b1cbd.slice/cri-containerd-9141403f775515d4abeb95839315835104d7bbc810083c18dacba1a269d3ceeb.scope WatchSource:0}: task 9141403f775515d4abeb95839315835104d7bbc810083c18dacba1a269d3ceeb not found: not found
Mar 17 18:44:45.687699 kubelet[1951]: E0317 18:44:45.687641 1951 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 18:44:46.250842 systemd[1]: run-containerd-runc-k8s.io-f24f63821436b8e13f54facb3bbf61252fb099adeb2249db6299d3a42f03fdc4-runc.gUVlHv.mount: Deactivated successfully.
Mar 17 18:44:46.689424 kubelet[1951]: E0317 18:44:46.689382 1951 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 18:44:46.919475 systemd-networkd[1015]: lxc_health: Gained IPv6LL
Mar 17 18:44:48.472496 kubelet[1951]: E0317 18:44:48.472429 1951 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 18:44:48.695932 systemd[1]: run-containerd-runc-k8s.io-f24f63821436b8e13f54facb3bbf61252fb099adeb2249db6299d3a42f03fdc4-runc.XuYMeQ.mount: Deactivated successfully.
Mar 17 18:44:48.754743 kubelet[1951]: W0317 18:44:48.754435 1951 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8710ddbf_961f_4891_8360_c3b5dd8b1cbd.slice/cri-containerd-9f5168ec4d2790a2c25b0cc3de7efa3aafb5c612b274ea9122c41511bb1dcc3e.scope WatchSource:0}: task 9f5168ec4d2790a2c25b0cc3de7efa3aafb5c612b274ea9122c41511bb1dcc3e not found: not found
Mar 17 18:44:51.164380 systemd[1]: run-containerd-runc-k8s.io-f24f63821436b8e13f54facb3bbf61252fb099adeb2249db6299d3a42f03fdc4-runc.MZCi1r.mount: Deactivated successfully.
Mar 17 18:44:51.345372 sshd[3888]: pam_unix(sshd:session): session closed for user core
Mar 17 18:44:51.351252 systemd[1]: sshd@29-134.199.210.114:22-139.178.68.195:60198.service: Deactivated successfully.
Mar 17 18:44:51.353553 systemd-logind[1197]: Session 30 logged out. Waiting for processes to exit.
Mar 17 18:44:51.353679 systemd[1]: session-30.scope: Deactivated successfully.
Mar 17 18:44:51.355955 systemd-logind[1197]: Removed session 30.