Mar 17 18:34:04.945211 kernel: Linux version 5.15.179-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Mar 17 17:12:34 -00 2025
Mar 17 18:34:04.945230 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=249ccd113f901380672c0d31e18f792e8e0344094c0e39eedc449f039418b31a
Mar 17 18:34:04.945238 kernel: BIOS-provided physical RAM map:
Mar 17 18:34:04.945244 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 17 18:34:04.945249 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 17 18:34:04.945254 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 17 18:34:04.945261 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Mar 17 18:34:04.945266 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Mar 17 18:34:04.945274 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 17 18:34:04.945279 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 17 18:34:04.945285 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 17 18:34:04.945290 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 17 18:34:04.945295 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 17 18:34:04.945301 kernel: NX (Execute Disable) protection: active
Mar 17 18:34:04.945309 kernel: SMBIOS 2.8 present.
Mar 17 18:34:04.945315 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Mar 17 18:34:04.945321 kernel: Hypervisor detected: KVM
Mar 17 18:34:04.945326 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 17 18:34:04.945332 kernel: kvm-clock: cpu 0, msr 7a19a001, primary cpu clock
Mar 17 18:34:04.945338 kernel: kvm-clock: using sched offset of 2835348479 cycles
Mar 17 18:34:04.945344 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 17 18:34:04.945350 kernel: tsc: Detected 2794.750 MHz processor
Mar 17 18:34:04.945357 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 17 18:34:04.945364 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 17 18:34:04.945370 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Mar 17 18:34:04.945377 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 17 18:34:04.945395 kernel: Using GB pages for direct mapping
Mar 17 18:34:04.945402 kernel: ACPI: Early table checksum verification disabled
Mar 17 18:34:04.945408 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Mar 17 18:34:04.945414 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:34:04.945420 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:34:04.945426 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:34:04.945433 kernel: ACPI: FACS 0x000000009CFE0000 000040
Mar 17 18:34:04.945439 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:34:04.945445 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:34:04.945451 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:34:04.945457 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:34:04.945463 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Mar 17 18:34:04.945469 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Mar 17 18:34:04.945475 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Mar 17 18:34:04.945485 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Mar 17 18:34:04.945491 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Mar 17 18:34:04.945498 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Mar 17 18:34:04.945504 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Mar 17 18:34:04.945510 kernel: No NUMA configuration found
Mar 17 18:34:04.945517 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Mar 17 18:34:04.945524 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Mar 17 18:34:04.945531 kernel: Zone ranges:
Mar 17 18:34:04.945537 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 17 18:34:04.945543 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Mar 17 18:34:04.945549 kernel: Normal empty
Mar 17 18:34:04.945556 kernel: Movable zone start for each node
Mar 17 18:34:04.945562 kernel: Early memory node ranges
Mar 17 18:34:04.945568 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 17 18:34:04.945575 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Mar 17 18:34:04.945581 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Mar 17 18:34:04.945589 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 17 18:34:04.945595 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 17 18:34:04.945602 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Mar 17 18:34:04.945608 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 17 18:34:04.945614 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 17 18:34:04.945621 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 17 18:34:04.945627 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 17 18:34:04.945633 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 17 18:34:04.945639 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 17 18:34:04.945647 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 17 18:34:04.945654 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 17 18:34:04.945660 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 17 18:34:04.945666 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 17 18:34:04.945672 kernel: TSC deadline timer available
Mar 17 18:34:04.945679 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Mar 17 18:34:04.945685 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 17 18:34:04.945691 kernel: kvm-guest: setup PV sched yield
Mar 17 18:34:04.945698 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 17 18:34:04.945705 kernel: Booting paravirtualized kernel on KVM
Mar 17 18:34:04.945712 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 17 18:34:04.945718 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
Mar 17 18:34:04.945725 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288
Mar 17 18:34:04.945731 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152
Mar 17 18:34:04.945737 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 17 18:34:04.945743 kernel: kvm-guest: setup async PF for cpu 0
Mar 17 18:34:04.945750 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0
Mar 17 18:34:04.945756 kernel: kvm-guest: PV spinlocks enabled
Mar 17 18:34:04.945764 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 17 18:34:04.945770 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Mar 17 18:34:04.945776 kernel: Policy zone: DMA32
Mar 17 18:34:04.945784 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=249ccd113f901380672c0d31e18f792e8e0344094c0e39eedc449f039418b31a
Mar 17 18:34:04.945790 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 17 18:34:04.945797 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 17 18:34:04.945804 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 17 18:34:04.945811 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 17 18:34:04.945819 kernel: Memory: 2436696K/2571752K available (12294K kernel code, 2278K rwdata, 13724K rodata, 47472K init, 4108K bss, 134796K reserved, 0K cma-reserved)
Mar 17 18:34:04.945825 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 17 18:34:04.945832 kernel: ftrace: allocating 34580 entries in 136 pages
Mar 17 18:34:04.945838 kernel: ftrace: allocated 136 pages with 2 groups
Mar 17 18:34:04.945844 kernel: rcu: Hierarchical RCU implementation.
Mar 17 18:34:04.945851 kernel: rcu: RCU event tracing is enabled.
Mar 17 18:34:04.945858 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 17 18:34:04.945865 kernel: Rude variant of Tasks RCU enabled.
Mar 17 18:34:04.945873 kernel: Tracing variant of Tasks RCU enabled.
Mar 17 18:34:04.945882 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 17 18:34:04.945890 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 17 18:34:04.945896 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 17 18:34:04.945902 kernel: random: crng init done
Mar 17 18:34:04.945909 kernel: Console: colour VGA+ 80x25
Mar 17 18:34:04.945915 kernel: printk: console [ttyS0] enabled
Mar 17 18:34:04.945921 kernel: ACPI: Core revision 20210730
Mar 17 18:34:04.945928 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 17 18:34:04.945934 kernel: APIC: Switch to symmetric I/O mode setup
Mar 17 18:34:04.945940 kernel: x2apic enabled
Mar 17 18:34:04.945948 kernel: Switched APIC routing to physical x2apic.
Mar 17 18:34:04.945954 kernel: kvm-guest: setup PV IPIs
Mar 17 18:34:04.945961 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 17 18:34:04.945967 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 17 18:34:04.945973 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Mar 17 18:34:04.945980 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 17 18:34:04.945986 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 17 18:34:04.945993 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 17 18:34:04.946006 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 17 18:34:04.946013 kernel: Spectre V2 : Mitigation: Retpolines
Mar 17 18:34:04.946019 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Mar 17 18:34:04.946027 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Mar 17 18:34:04.946034 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Mar 17 18:34:04.946041 kernel: RETBleed: Mitigation: untrained return thunk
Mar 17 18:34:04.946048 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar 17 18:34:04.946054 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Mar 17 18:34:04.946061 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 17 18:34:04.946077 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 17 18:34:04.946086 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 17 18:34:04.946095 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 17 18:34:04.946104 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Mar 17 18:34:04.946112 kernel: Freeing SMP alternatives memory: 32K
Mar 17 18:34:04.946121 kernel: pid_max: default: 32768 minimum: 301
Mar 17 18:34:04.946130 kernel: LSM: Security Framework initializing
Mar 17 18:34:04.946138 kernel: SELinux: Initializing.
Mar 17 18:34:04.946149 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 18:34:04.946158 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 18:34:04.946165 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Mar 17 18:34:04.946172 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Mar 17 18:34:04.946179 kernel: ... version: 0
Mar 17 18:34:04.946185 kernel: ... bit width: 48
Mar 17 18:34:04.946192 kernel: ... generic registers: 6
Mar 17 18:34:04.946199 kernel: ... value mask: 0000ffffffffffff
Mar 17 18:34:04.946206 kernel: ... max period: 00007fffffffffff
Mar 17 18:34:04.946214 kernel: ... fixed-purpose events: 0
Mar 17 18:34:04.946221 kernel: ... event mask: 000000000000003f
Mar 17 18:34:04.946228 kernel: signal: max sigframe size: 1776
Mar 17 18:34:04.946234 kernel: rcu: Hierarchical SRCU implementation.
Mar 17 18:34:04.946241 kernel: smp: Bringing up secondary CPUs ...
Mar 17 18:34:04.946248 kernel: x86: Booting SMP configuration:
Mar 17 18:34:04.946255 kernel: .... node #0, CPUs: #1
Mar 17 18:34:04.946261 kernel: kvm-clock: cpu 1, msr 7a19a041, secondary cpu clock
Mar 17 18:34:04.946268 kernel: kvm-guest: setup async PF for cpu 1
Mar 17 18:34:04.946276 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0
Mar 17 18:34:04.946283 kernel: #2
Mar 17 18:34:04.946289 kernel: kvm-clock: cpu 2, msr 7a19a081, secondary cpu clock
Mar 17 18:34:04.946296 kernel: kvm-guest: setup async PF for cpu 2
Mar 17 18:34:04.946303 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0
Mar 17 18:34:04.946310 kernel: #3
Mar 17 18:34:04.946316 kernel: kvm-clock: cpu 3, msr 7a19a0c1, secondary cpu clock
Mar 17 18:34:04.946323 kernel: kvm-guest: setup async PF for cpu 3
Mar 17 18:34:04.946329 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0
Mar 17 18:34:04.946336 kernel: smp: Brought up 1 node, 4 CPUs
Mar 17 18:34:04.946345 kernel: smpboot: Max logical packages: 1
Mar 17 18:34:04.946352 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Mar 17 18:34:04.946358 kernel: devtmpfs: initialized
Mar 17 18:34:04.946365 kernel: x86/mm: Memory block size: 128MB
Mar 17 18:34:04.946372 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 17 18:34:04.946379 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 17 18:34:04.946395 kernel: pinctrl core: initialized pinctrl subsystem
Mar 17 18:34:04.946402 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 17 18:34:04.946409 kernel: audit: initializing netlink subsys (disabled)
Mar 17 18:34:04.946417 kernel: audit: type=2000 audit(1742236444.182:1): state=initialized audit_enabled=0 res=1
Mar 17 18:34:04.946424 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 17 18:34:04.946430 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 17 18:34:04.946437 kernel: cpuidle: using governor menu
Mar 17 18:34:04.946444 kernel: ACPI: bus type PCI registered
Mar 17 18:34:04.946451 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 17 18:34:04.946458 kernel: dca service started, version 1.12.1
Mar 17 18:34:04.946465 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 17 18:34:04.946472 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820
Mar 17 18:34:04.946480 kernel: PCI: Using configuration type 1 for base access
Mar 17 18:34:04.946487 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 17 18:34:04.946494 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Mar 17 18:34:04.946500 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Mar 17 18:34:04.946507 kernel: ACPI: Added _OSI(Module Device)
Mar 17 18:34:04.946514 kernel: ACPI: Added _OSI(Processor Device)
Mar 17 18:34:04.946520 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 17 18:34:04.946527 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 17 18:34:04.946534 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Mar 17 18:34:04.946542 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Mar 17 18:34:04.946549 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Mar 17 18:34:04.946555 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 17 18:34:04.946562 kernel: ACPI: Interpreter enabled
Mar 17 18:34:04.946569 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 17 18:34:04.946575 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 17 18:34:04.946582 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 17 18:34:04.946589 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 17 18:34:04.946596 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 17 18:34:04.946739 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 17 18:34:04.946812 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 17 18:34:04.946880 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 17 18:34:04.946890 kernel: PCI host bridge to bus 0000:00
Mar 17 18:34:04.946975 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 17 18:34:04.947039 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 17 18:34:04.947113 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 17 18:34:04.947176 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Mar 17 18:34:04.947236 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 17 18:34:04.947297 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Mar 17 18:34:04.947357 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 17 18:34:04.947461 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 17 18:34:04.947548 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Mar 17 18:34:04.947621 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Mar 17 18:34:04.947688 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Mar 17 18:34:04.947753 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Mar 17 18:34:04.947819 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 17 18:34:04.947905 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Mar 17 18:34:04.947975 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Mar 17 18:34:04.948054 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Mar 17 18:34:04.948147 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Mar 17 18:34:04.948233 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Mar 17 18:34:04.948300 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Mar 17 18:34:04.948370 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Mar 17 18:34:04.948461 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Mar 17 18:34:04.948545 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 17 18:34:04.948619 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Mar 17 18:34:04.948686 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Mar 17 18:34:04.948754 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Mar 17 18:34:04.948820 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Mar 17 18:34:04.948909 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 17 18:34:04.948976 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 17 18:34:04.949057 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 17 18:34:04.949157 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Mar 17 18:34:04.949228 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Mar 17 18:34:04.949315 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 17 18:34:04.949398 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Mar 17 18:34:04.949408 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 17 18:34:04.949415 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 17 18:34:04.949422 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 17 18:34:04.949432 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 17 18:34:04.949439 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 17 18:34:04.949445 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 17 18:34:04.949452 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 17 18:34:04.949459 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 17 18:34:04.949466 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 17 18:34:04.949472 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 17 18:34:04.949479 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 17 18:34:04.949486 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 17 18:34:04.949494 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 17 18:34:04.949500 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 17 18:34:04.949507 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 17 18:34:04.949514 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 17 18:34:04.949520 kernel: iommu: Default domain type: Translated
Mar 17 18:34:04.949527 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 17 18:34:04.949605 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 17 18:34:04.949672 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 17 18:34:04.949739 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 17 18:34:04.949750 kernel: vgaarb: loaded
Mar 17 18:34:04.949756 kernel: pps_core: LinuxPPS API ver. 1 registered
Mar 17 18:34:04.949764 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Mar 17 18:34:04.949775 kernel: PTP clock support registered
Mar 17 18:34:04.949782 kernel: PCI: Using ACPI for IRQ routing
Mar 17 18:34:04.949789 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 17 18:34:04.949796 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 17 18:34:04.949803 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Mar 17 18:34:04.949809 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 17 18:34:04.949818 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 17 18:34:04.949824 kernel: clocksource: Switched to clocksource kvm-clock
Mar 17 18:34:04.949831 kernel: VFS: Disk quotas dquot_6.6.0
Mar 17 18:34:04.949838 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 17 18:34:04.949845 kernel: pnp: PnP ACPI init
Mar 17 18:34:04.949931 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 17 18:34:04.949941 kernel: pnp: PnP ACPI: found 6 devices
Mar 17 18:34:04.949948 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 17 18:34:04.949957 kernel: NET: Registered PF_INET protocol family
Mar 17 18:34:04.949964 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 17 18:34:04.949971 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 17 18:34:04.949978 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 17 18:34:04.949985 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 17 18:34:04.949992 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Mar 17 18:34:04.949999 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 17 18:34:04.950005 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 18:34:04.950012 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 18:34:04.950021 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 17 18:34:04.950027 kernel: NET: Registered PF_XDP protocol family
Mar 17 18:34:04.950106 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 17 18:34:04.950177 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 17 18:34:04.950237 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 17 18:34:04.950298 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Mar 17 18:34:04.950357 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 17 18:34:04.950434 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Mar 17 18:34:04.950446 kernel: PCI: CLS 0 bytes, default 64
Mar 17 18:34:04.950453 kernel: Initialise system trusted keyrings
Mar 17 18:34:04.950460 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 17 18:34:04.950467 kernel: Key type asymmetric registered
Mar 17 18:34:04.950474 kernel: Asymmetric key parser 'x509' registered
Mar 17 18:34:04.950481 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Mar 17 18:34:04.950488 kernel: io scheduler mq-deadline registered
Mar 17 18:34:04.950494 kernel: io scheduler kyber registered
Mar 17 18:34:04.950501 kernel: io scheduler bfq registered
Mar 17 18:34:04.950510 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 17 18:34:04.950517 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 17 18:34:04.950524 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 17 18:34:04.950531 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 17 18:34:04.950538 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 17 18:34:04.950545 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 17 18:34:04.950552 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 17 18:34:04.950559 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 17 18:34:04.950565 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 17 18:34:04.950643 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 17 18:34:04.950653 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 17 18:34:04.950715 kernel: rtc_cmos 00:04: registered as rtc0
Mar 17 18:34:04.950778 kernel: rtc_cmos 00:04: setting system clock to 2025-03-17T18:34:04 UTC (1742236444)
Mar 17 18:34:04.950840 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 17 18:34:04.950849 kernel: NET: Registered PF_INET6 protocol family
Mar 17 18:34:04.950856 kernel: Segment Routing with IPv6
Mar 17 18:34:04.950863 kernel: In-situ OAM (IOAM) with IPv6
Mar 17 18:34:04.950872 kernel: NET: Registered PF_PACKET protocol family
Mar 17 18:34:04.950879 kernel: Key type dns_resolver registered
Mar 17 18:34:04.950886 kernel: IPI shorthand broadcast: enabled
Mar 17 18:34:04.950893 kernel: sched_clock: Marking stable (423285144, 160891992)->(789855880, -205678744)
Mar 17 18:34:04.950899 kernel: registered taskstats version 1
Mar 17 18:34:04.950906 kernel: Loading compiled-in X.509 certificates
Mar 17 18:34:04.950913 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.179-flatcar: d5b956bbabb2d386c0246a969032c0de9eaa8220'
Mar 17 18:34:04.950920 kernel: Key type .fscrypt registered
Mar 17 18:34:04.950927 kernel: Key type fscrypt-provisioning registered
Mar 17 18:34:04.950935 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 17 18:34:04.950942 kernel: ima: Allocated hash algorithm: sha1
Mar 17 18:34:04.950949 kernel: ima: No architecture policies found
Mar 17 18:34:04.950956 kernel: clk: Disabling unused clocks
Mar 17 18:34:04.950962 kernel: Freeing unused kernel image (initmem) memory: 47472K
Mar 17 18:34:04.950969 kernel: Write protecting the kernel read-only data: 28672k
Mar 17 18:34:04.950976 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Mar 17 18:34:04.950983 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K
Mar 17 18:34:04.950990 kernel: Run /init as init process
Mar 17 18:34:04.950998 kernel: with arguments:
Mar 17 18:34:04.951004 kernel: /init
Mar 17 18:34:04.951011 kernel: with environment:
Mar 17 18:34:04.951018 kernel: HOME=/
Mar 17 18:34:04.951024 kernel: TERM=linux
Mar 17 18:34:04.951031 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 17 18:34:04.951040 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Mar 17 18:34:04.951049 systemd[1]: Detected virtualization kvm.
Mar 17 18:34:04.951058 systemd[1]: Detected architecture x86-64.
Mar 17 18:34:04.951065 systemd[1]: Running in initrd.
Mar 17 18:34:04.951079 systemd[1]: No hostname configured, using default hostname.
Mar 17 18:34:04.951086 systemd[1]: Hostname set to .
Mar 17 18:34:04.951094 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 18:34:04.951101 systemd[1]: Queued start job for default target initrd.target.
Mar 17 18:34:04.951108 systemd[1]: Started systemd-ask-password-console.path.
Mar 17 18:34:04.951115 systemd[1]: Reached target cryptsetup.target.
Mar 17 18:34:04.951124 systemd[1]: Reached target paths.target.
Mar 17 18:34:04.951137 systemd[1]: Reached target slices.target.
Mar 17 18:34:04.951145 systemd[1]: Reached target swap.target.
Mar 17 18:34:04.951152 systemd[1]: Reached target timers.target.
Mar 17 18:34:04.951160 systemd[1]: Listening on iscsid.socket.
Mar 17 18:34:04.951169 systemd[1]: Listening on iscsiuio.socket.
Mar 17 18:34:04.951176 systemd[1]: Listening on systemd-journald-audit.socket.
Mar 17 18:34:04.951184 systemd[1]: Listening on systemd-journald-dev-log.socket.
Mar 17 18:34:04.951191 systemd[1]: Listening on systemd-journald.socket.
Mar 17 18:34:04.951199 systemd[1]: Listening on systemd-networkd.socket.
Mar 17 18:34:04.951206 systemd[1]: Listening on systemd-udevd-control.socket.
Mar 17 18:34:04.951214 systemd[1]: Listening on systemd-udevd-kernel.socket.
Mar 17 18:34:04.951221 systemd[1]: Reached target sockets.target.
Mar 17 18:34:04.951229 systemd[1]: Starting kmod-static-nodes.service...
Mar 17 18:34:04.951238 systemd[1]: Finished network-cleanup.service.
Mar 17 18:34:04.951246 systemd[1]: Starting systemd-fsck-usr.service...
Mar 17 18:34:04.951254 systemd[1]: Starting systemd-journald.service...
Mar 17 18:34:04.951261 systemd[1]: Starting systemd-modules-load.service...
Mar 17 18:34:04.951268 systemd[1]: Starting systemd-resolved.service...
Mar 17 18:34:04.951276 systemd[1]: Starting systemd-vconsole-setup.service...
Mar 17 18:34:04.951283 systemd[1]: Finished kmod-static-nodes.service.
Mar 17 18:34:04.951290 systemd[1]: Finished systemd-fsck-usr.service.
Mar 17 18:34:04.951298 kernel: audit: type=1130 audit(1742236444.944:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:34:04.951307 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Mar 17 18:34:04.951317 systemd-journald[198]: Journal started
Mar 17 18:34:04.951357 systemd-journald[198]: Runtime Journal (/run/log/journal/e086294918f94f8a8f622bab8f8886b8) is 6.0M, max 48.5M, 42.5M free.
Mar 17 18:34:04.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:34:04.953403 systemd[1]: Started systemd-journald.service.
Mar 17 18:34:04.954481 systemd-modules-load[199]: Inserted module 'overlay'
Mar 17 18:34:04.991082 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 17 18:34:04.991101 kernel: audit: type=1130 audit(1742236444.986:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:34:04.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:34:04.966780 systemd-resolved[200]: Positive Trust Anchors:
Mar 17 18:34:04.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:34:04.966790 systemd-resolved[200]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 18:34:04.998859 kernel: audit: type=1130 audit(1742236444.991:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:34:04.998879 kernel: audit: type=1130 audit(1742236444.995:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:34:04.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:34:04.966815 systemd-resolved[200]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Mar 17 18:34:05.002295 kernel: audit: type=1130 audit(1742236444.998:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:34:04.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:34:04.968930 systemd-resolved[200]: Defaulting to hostname 'linux'.
Mar 17 18:34:04.987450 systemd[1]: Started systemd-resolved.service.
Mar 17 18:34:04.991774 systemd[1]: Finished systemd-vconsole-setup.service.
Mar 17 18:34:04.995809 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Mar 17 18:34:04.999129 systemd[1]: Reached target nss-lookup.target.
Mar 17 18:34:05.003191 systemd[1]: Starting dracut-cmdline-ask.service...
Mar 17 18:34:05.067077 systemd-modules-load[199]: Inserted module 'br_netfilter'
Mar 17 18:34:05.067923 kernel: Bridge firewalling registered
Mar 17 18:34:05.074178 systemd[1]: Finished dracut-cmdline-ask.service.
Mar 17 18:34:05.078542 kernel: audit: type=1130 audit(1742236445.074:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:34:05.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:34:05.075281 systemd[1]: Starting dracut-cmdline.service...
Mar 17 18:34:05.083209 dracut-cmdline[215]: dracut-dracut-053
Mar 17 18:34:05.085047 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=249ccd113f901380672c0d31e18f792e8e0344094c0e39eedc449f039418b31a
Mar 17 18:34:05.147400 kernel: SCSI subsystem initialized
Mar 17 18:34:05.157399 kernel: Loading iSCSI transport class v2.0-870.
Mar 17 18:34:05.157418 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 17 18:34:05.159811 kernel: device-mapper: uevent: version 1.0.3
Mar 17 18:34:05.161058 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Mar 17 18:34:05.163786 systemd-modules-load[199]: Inserted module 'dm_multipath'
Mar 17 18:34:05.165205 systemd[1]: Finished systemd-modules-load.service.
Mar 17 18:34:05.169655 kernel: audit: type=1130 audit(1742236445.165:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:34:05.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:34:05.169692 systemd[1]: Starting systemd-sysctl.service...
Mar 17 18:34:05.205407 kernel: iscsi: registered transport (tcp)
Mar 17 18:34:05.206831 systemd[1]: Finished systemd-sysctl.service.
Mar 17 18:34:05.211057 kernel: audit: type=1130 audit(1742236445.206:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:34:05.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:34:05.247418 kernel: iscsi: registered transport (qla4xxx)
Mar 17 18:34:05.247468 kernel: QLogic iSCSI HBA Driver
Mar 17 18:34:05.274988 systemd[1]: Finished dracut-cmdline.service.
Mar 17 18:34:05.290058 kernel: audit: type=1130 audit(1742236445.285:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:34:05.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:34:05.286275 systemd[1]: Starting dracut-pre-udev.service...
Mar 17 18:34:05.344416 kernel: raid6: avx2x4 gen() 26545 MB/s
Mar 17 18:34:05.361419 kernel: raid6: avx2x4 xor() 6659 MB/s
Mar 17 18:34:05.382418 kernel: raid6: avx2x2 gen() 20175 MB/s
Mar 17 18:34:05.399417 kernel: raid6: avx2x2 xor() 12435 MB/s
Mar 17 18:34:05.416418 kernel: raid6: avx2x1 gen() 17352 MB/s
Mar 17 18:34:05.476431 kernel: raid6: avx2x1 xor() 11772 MB/s
Mar 17 18:34:05.493443 kernel: raid6: sse2x4 gen() 10807 MB/s
Mar 17 18:34:05.522440 kernel: raid6: sse2x4 xor() 6027 MB/s
Mar 17 18:34:05.582424 kernel: raid6: sse2x2 gen() 16319 MB/s
Mar 17 18:34:05.599412 kernel: raid6: sse2x2 xor() 9826 MB/s
Mar 17 18:34:05.631407 kernel: raid6: sse2x1 gen() 12370 MB/s
Mar 17 18:34:05.648796 kernel: raid6: sse2x1 xor() 7789 MB/s
Mar 17 18:34:05.648823 kernel: raid6: using algorithm avx2x4 gen() 26545 MB/s
Mar 17 18:34:05.648833 kernel: raid6: .... xor() 6659 MB/s, rmw enabled
Mar 17 18:34:05.649490 kernel: raid6: using avx2x2 recovery algorithm
Mar 17 18:34:05.661407 kernel: xor: automatically using best checksumming function avx
Mar 17 18:34:05.750426 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Mar 17 18:34:05.758466 systemd[1]: Finished dracut-pre-udev.service.
Mar 17 18:34:05.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:34:05.760000 audit: BPF prog-id=7 op=LOAD
Mar 17 18:34:05.760000 audit: BPF prog-id=8 op=LOAD
Mar 17 18:34:05.760939 systemd[1]: Starting systemd-udevd.service...
Mar 17 18:34:05.785109 systemd-udevd[400]: Using default interface naming scheme 'v252'.
Mar 17 18:34:05.788502 systemd[1]: Started systemd-udevd.service.
Mar 17 18:34:05.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:34:05.790628 systemd[1]: Starting dracut-pre-trigger.service...
Mar 17 18:34:05.802456 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation
Mar 17 18:34:05.824090 systemd[1]: Finished dracut-pre-trigger.service.
Mar 17 18:34:05.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:34:05.832401 systemd[1]: Starting systemd-udev-trigger.service...
Mar 17 18:34:05.864520 systemd[1]: Finished systemd-udev-trigger.service.
Mar 17 18:34:05.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:34:06.001810 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 17 18:34:06.005124 kernel: libata version 3.00 loaded.
Mar 17 18:34:06.005137 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 17 18:34:06.005146 kernel: GPT:9289727 != 19775487
Mar 17 18:34:06.005154 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 17 18:34:06.005168 kernel: GPT:9289727 != 19775487
Mar 17 18:34:06.005176 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 17 18:34:06.005184 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 18:34:06.012196 kernel: cryptd: max_cpu_qlen set to 1000
Mar 17 18:34:06.012226 kernel: ahci 0000:00:1f.2: version 3.0
Mar 17 18:34:06.023207 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 17 18:34:06.023220 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 17 18:34:06.023304 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 17 18:34:06.023380 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 17 18:34:06.023407 kernel: AES CTR mode by8 optimization enabled
Mar 17 18:34:06.023416 kernel: scsi host0: ahci
Mar 17 18:34:06.023512 kernel: scsi host1: ahci
Mar 17 18:34:06.023595 kernel: scsi host2: ahci
Mar 17 18:34:06.023679 kernel: scsi host3: ahci
Mar 17 18:34:06.023772 kernel: scsi host4: ahci
Mar 17 18:34:06.023856 kernel: scsi host5: ahci
Mar 17 18:34:06.023950 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Mar 17 18:34:06.023960 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Mar 17 18:34:06.023969 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Mar 17 18:34:06.023977 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Mar 17 18:34:06.023986 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Mar 17 18:34:06.023995 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Mar 17 18:34:06.031458 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Mar 17 18:34:06.073600 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (442)
Mar 17 18:34:06.078136 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Mar 17 18:34:06.078421 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Mar 17 18:34:06.084021 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Mar 17 18:34:06.089726 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Mar 17 18:34:06.091527 systemd[1]: Starting disk-uuid.service...
Mar 17 18:34:06.216056 disk-uuid[525]: Primary Header is updated.
Mar 17 18:34:06.216056 disk-uuid[525]: Secondary Entries is updated.
Mar 17 18:34:06.216056 disk-uuid[525]: Secondary Header is updated.
Mar 17 18:34:06.220417 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 18:34:06.224404 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 18:34:06.330441 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 17 18:34:06.335681 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 17 18:34:06.335754 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 17 18:34:06.335764 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 17 18:34:06.337423 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 17 18:34:06.338429 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 17 18:34:06.339424 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 17 18:34:06.339450 kernel: ata3.00: applying bridge limits
Mar 17 18:34:06.340658 kernel: ata3.00: configured for UDMA/100
Mar 17 18:34:06.341423 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 17 18:34:06.373931 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 17 18:34:06.391433 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 17 18:34:06.391452 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Mar 17 18:34:07.225114 disk-uuid[526]: The operation has completed successfully.
Mar 17 18:34:07.226548 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 18:34:07.251450 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 17 18:34:07.251000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:34:07.252000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:34:07.251538 systemd[1]: Finished disk-uuid.service.
Mar 17 18:34:07.253158 systemd[1]: Starting verity-setup.service...
Mar 17 18:34:07.265407 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Mar 17 18:34:07.283809 systemd[1]: Found device dev-mapper-usr.device.
Mar 17 18:34:07.287042 systemd[1]: Mounting sysusr-usr.mount...
Mar 17 18:34:07.289483 systemd[1]: Finished verity-setup.service.
Mar 17 18:34:07.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:34:07.364274 systemd[1]: Mounted sysusr-usr.mount.
Mar 17 18:34:07.364591 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Mar 17 18:34:07.365934 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Mar 17 18:34:07.366701 systemd[1]: Starting ignition-setup.service...
Mar 17 18:34:07.369171 systemd[1]: Starting parse-ip-for-networkd.service...
Mar 17 18:34:07.376875 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 17 18:34:07.376898 kernel: BTRFS info (device vda6): using free space tree
Mar 17 18:34:07.376908 kernel: BTRFS info (device vda6): has skinny extents
Mar 17 18:34:07.384755 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 17 18:34:07.393089 systemd[1]: Finished ignition-setup.service.
Mar 17 18:34:07.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:34:07.395553 systemd[1]: Starting ignition-fetch-offline.service...
Mar 17 18:34:07.439038 systemd[1]: Finished parse-ip-for-networkd.service.
Mar 17 18:34:07.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:34:07.440000 audit: BPF prog-id=9 op=LOAD
Mar 17 18:34:07.441188 systemd[1]: Starting systemd-networkd.service...
Mar 17 18:34:07.479803 systemd-networkd[715]: lo: Link UP
Mar 17 18:34:07.479812 systemd-networkd[715]: lo: Gained carrier
Mar 17 18:34:07.481281 ignition[637]: Ignition 2.14.0
Mar 17 18:34:07.481289 ignition[637]: Stage: fetch-offline
Mar 17 18:34:07.481776 systemd-networkd[715]: Enumeration completed
Mar 17 18:34:07.481332 ignition[637]: no configs at "/usr/lib/ignition/base.d"
Mar 17 18:34:07.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:34:07.482616 systemd[1]: Started systemd-networkd.service.
Mar 17 18:34:07.481340 ignition[637]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 18:34:07.481460 ignition[637]: parsed url from cmdline: ""
Mar 17 18:34:07.481464 ignition[637]: no config URL provided
Mar 17 18:34:07.481469 ignition[637]: reading system config file "/usr/lib/ignition/user.ign"
Mar 17 18:34:07.481476 ignition[637]: no config at "/usr/lib/ignition/user.ign"
Mar 17 18:34:07.488872 systemd-networkd[715]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 18:34:07.481493 ignition[637]: op(1): [started] loading QEMU firmware config module
Mar 17 18:34:07.489441 systemd[1]: Reached target network.target.
Mar 17 18:34:07.481497 ignition[637]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 17 18:34:07.491180 systemd[1]: Starting iscsiuio.service...
Mar 17 18:34:07.485985 ignition[637]: op(1): [finished] loading QEMU firmware config module
Mar 17 18:34:07.492826 systemd-networkd[715]: eth0: Link UP
Mar 17 18:34:07.492829 systemd-networkd[715]: eth0: Gained carrier
Mar 17 18:34:07.532727 ignition[637]: parsing config with SHA512: b6033f5c87e9258dbbe0dad24d71bd0da63aa58ac4f096a8f13f7e1bb8256163386ec02e514bb975db859edb60b31577a1902cb097efc29a7315df1de7feb753
Mar 17 18:34:07.580298 unknown[637]: fetched base config from "system"
Mar 17 18:34:07.580724 ignition[637]: fetch-offline: fetch-offline passed
Mar 17 18:34:07.580309 unknown[637]: fetched user config from "qemu"
Mar 17 18:34:07.580767 ignition[637]: Ignition finished successfully
Mar 17 18:34:07.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:34:07.582626 systemd[1]: Finished ignition-fetch-offline.service.
Mar 17 18:34:07.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:34:07.584232 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 17 18:34:07.584955 systemd[1]: Starting ignition-kargs.service...
Mar 17 18:34:07.586567 systemd[1]: Started iscsiuio.service.
Mar 17 18:34:07.592029 systemd[1]: Starting iscsid.service...
Mar 17 18:34:07.618145 iscsid[728]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Mar 17 18:34:07.618145 iscsid[728]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Mar 17 18:34:07.618145 iscsid[728]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Mar 17 18:34:07.618145 iscsid[728]: If using hardware iscsi like qla4xxx this message can be ignored.
Mar 17 18:34:07.618145 iscsid[728]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Mar 17 18:34:07.618145 iscsid[728]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Mar 17 18:34:07.620000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:34:07.619132 systemd[1]: Started iscsid.service.
Mar 17 18:34:07.621192 systemd[1]: Starting dracut-initqueue.service...
Mar 17 18:34:07.762573 systemd-networkd[715]: eth0: DHCPv4 address 10.0.0.22/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 17 18:34:07.781686 systemd[1]: Finished dracut-initqueue.service.
Mar 17 18:34:07.783167 ignition[721]: Ignition 2.14.0
Mar 17 18:34:07.783177 ignition[721]: Stage: kargs
Mar 17 18:34:07.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:34:07.783280 ignition[721]: no configs at "/usr/lib/ignition/base.d"
Mar 17 18:34:07.783289 ignition[721]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 18:34:07.784662 ignition[721]: kargs: kargs passed
Mar 17 18:34:07.784696 ignition[721]: Ignition finished successfully
Mar 17 18:34:07.787301 systemd[1]: Reached target remote-fs-pre.target.
Mar 17 18:34:07.789474 systemd[1]: Reached target remote-cryptsetup.target.
Mar 17 18:34:07.789900 systemd[1]: Reached target remote-fs.target.
Mar 17 18:34:07.792628 systemd[1]: Starting dracut-pre-mount.service...
Mar 17 18:34:07.793044 systemd[1]: Finished ignition-kargs.service.
Mar 17 18:34:07.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:34:07.795247 systemd[1]: Starting ignition-disks.service...
Mar 17 18:34:07.800097 systemd[1]: Finished dracut-pre-mount.service.
Mar 17 18:34:07.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:34:07.808897 ignition[738]: Ignition 2.14.0
Mar 17 18:34:07.808907 ignition[738]: Stage: disks
Mar 17 18:34:07.808991 ignition[738]: no configs at "/usr/lib/ignition/base.d"
Mar 17 18:34:07.809011 ignition[738]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 18:34:07.809939 ignition[738]: disks: disks passed
Mar 17 18:34:07.809974 ignition[738]: Ignition finished successfully
Mar 17 18:34:07.814006 systemd[1]: Finished ignition-disks.service.
Mar 17 18:34:07.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:34:07.814875 systemd[1]: Reached target initrd-root-device.target.
Mar 17 18:34:07.816488 systemd[1]: Reached target local-fs-pre.target.
Mar 17 18:34:07.817318 systemd[1]: Reached target local-fs.target.
Mar 17 18:34:07.818091 systemd[1]: Reached target sysinit.target.
Mar 17 18:34:07.819695 systemd[1]: Reached target basic.target.
Mar 17 18:34:07.820737 systemd[1]: Starting systemd-fsck-root.service...
Mar 17 18:34:07.832850 systemd-fsck[750]: ROOT: clean, 623/553520 files, 56022/553472 blocks
Mar 17 18:34:07.839766 systemd[1]: Finished systemd-fsck-root.service.
Mar 17 18:34:07.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:34:07.841050 systemd[1]: Mounting sysroot.mount...
Mar 17 18:34:07.847413 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Mar 17 18:34:07.847445 systemd[1]: Mounted sysroot.mount.
Mar 17 18:34:07.847732 systemd[1]: Reached target initrd-root-fs.target.
Mar 17 18:34:07.849874 systemd[1]: Mounting sysroot-usr.mount...
Mar 17 18:34:07.850981 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Mar 17 18:34:07.851017 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 17 18:34:07.851037 systemd[1]: Reached target ignition-diskful.target.
Mar 17 18:34:07.852609 systemd[1]: Mounted sysroot-usr.mount.
Mar 17 18:34:07.855992 systemd[1]: Starting initrd-setup-root.service...
Mar 17 18:34:07.863138 initrd-setup-root[760]: cut: /sysroot/etc/passwd: No such file or directory
Mar 17 18:34:07.866156 initrd-setup-root[768]: cut: /sysroot/etc/group: No such file or directory
Mar 17 18:34:07.870124 initrd-setup-root[776]: cut: /sysroot/etc/shadow: No such file or directory
Mar 17 18:34:07.873926 initrd-setup-root[784]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 17 18:34:07.901596 systemd[1]: Finished initrd-setup-root.service.
Mar 17 18:34:07.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:34:07.903296 systemd[1]: Starting ignition-mount.service...
Mar 17 18:34:07.904617 systemd[1]: Starting sysroot-boot.service...
Mar 17 18:34:07.908421 bash[801]: umount: /sysroot/usr/share/oem: not mounted.
Mar 17 18:34:07.922897 ignition[802]: INFO : Ignition 2.14.0
Mar 17 18:34:07.922897 ignition[802]: INFO : Stage: mount
Mar 17 18:34:07.924762 ignition[802]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 18:34:07.924762 ignition[802]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 18:34:07.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:34:07.925049 systemd[1]: Finished sysroot-boot.service.
Mar 17 18:34:07.929253 ignition[802]: INFO : mount: mount passed
Mar 17 18:34:07.929253 ignition[802]: INFO : Ignition finished successfully
Mar 17 18:34:07.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:34:07.928141 systemd[1]: Finished ignition-mount.service.
Mar 17 18:34:08.297878 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Mar 17 18:34:08.303404 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (812)
Mar 17 18:34:08.303426 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 17 18:34:08.306217 kernel: BTRFS info (device vda6): using free space tree
Mar 17 18:34:08.306229 kernel: BTRFS info (device vda6): has skinny extents
Mar 17 18:34:08.310018 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Mar 17 18:34:08.311426 systemd[1]: Starting ignition-files.service...
Mar 17 18:34:08.324882 ignition[832]: INFO : Ignition 2.14.0
Mar 17 18:34:08.324882 ignition[832]: INFO : Stage: files
Mar 17 18:34:08.326548 ignition[832]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 18:34:08.326548 ignition[832]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 18:34:08.328964 ignition[832]: DEBUG : files: compiled without relabeling support, skipping
Mar 17 18:34:08.328964 ignition[832]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 17 18:34:08.328964 ignition[832]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 17 18:34:08.332835 ignition[832]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 17 18:34:08.334291 ignition[832]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 17 18:34:08.334291 ignition[832]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 17 18:34:08.333744 unknown[832]: wrote ssh authorized keys file for user: core
Mar 17 18:34:08.338360 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Mar 17 18:34:08.338360 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Mar 17 18:34:08.379840 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 17 18:34:08.560193 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Mar 17 18:34:08.562361 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 17 18:34:08.562361 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Mar 17 18:34:08.956625 systemd-networkd[715]: eth0: Gained IPv6LL
Mar 17 18:34:12.880963 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: Internal Server Error
Mar 17 18:34:13.081454 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #2
Mar 17 18:34:13.153190 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: Internal Server Error
Mar 17 18:34:13.553805 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #3
Mar 17 18:34:13.632608 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: Internal Server Error
Mar 17 18:34:14.433700 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #4
Mar 17 18:34:14.514311 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: Internal Server Error
Mar 17 18:34:16.114522 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #5
Mar 17 18:34:16.188015 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: Internal Server Error
Mar 17 18:34:19.388861 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #6
Mar 17 18:34:19.462557 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: Internal Server Error
Mar 17 18:34:24.466764 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #7
Mar 17 18:34:24.545340 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: Internal Server Error
Mar 17 18:34:29.546066 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #8
Mar 17 18:34:29.615123 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: Internal Server Error
Mar 17 18:34:34.615352 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #9
Mar 17 18:34:34.686123 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: Internal Server Error
Mar 17 18:34:39.687521 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #10
Mar 17 18:34:39.764611 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: Internal Server Error
Mar 17 18:34:44.768827 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #11
Mar 17 18:34:44.844632 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: Internal Server Error
Mar 17 18:34:49.845424 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #12
Mar 17 18:34:49.918272 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: Internal Server Error
Mar 17 18:34:54.922511 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #13
Mar 17 18:34:55.009202 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: Internal Server Error
Mar 17 18:35:00.010470 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #14
Mar 17 18:35:00.096911 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: Internal Server Error
Mar 17 18:35:05.101129 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #15
Mar 17 18:35:05.193187 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: Internal Server Error
Mar 17 18:35:10.194062 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #16
Mar 17 18:35:10.269296 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: Internal Server Error
Mar 17 18:35:15.269601 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #17
Mar 17 18:35:15.349716 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: Internal Server Error
Mar 17 18:35:20.351406 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #18
Mar 17 18:35:20.728741 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 17 18:35:21.082443 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 17 18:35:21.082443 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 17 18:35:21.087735 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 17 18:35:21.087735 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 18:35:21.087735 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 18:35:21.087735 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 18:35:21.087735 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 18:35:21.087735 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 18:35:21.087735 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 18:35:21.087735 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 18:35:21.087735 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 18:35:21.087735 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Mar 17 18:35:21.087735 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Mar 17 18:35:21.087735 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Mar 17 18:35:21.087735 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Mar 17 18:35:21.389524 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 17 18:35:22.068897 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Mar 17 18:35:22.068897 ignition[832]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 17 18:35:22.072693 ignition[832]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 18:35:22.074880 ignition[832]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 18:35:22.074880 ignition[832]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 17 18:35:22.078295 ignition[832]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Mar 17 18:35:22.078295 ignition[832]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 17 18:35:22.081755 ignition[832]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 17 18:35:22.081755 ignition[832]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Mar 17 18:35:22.081755 ignition[832]: INFO : files: op(10): [started] setting preset
to enabled for "prepare-helm.service" Mar 17 18:35:22.086772 ignition[832]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Mar 17 18:35:22.086772 ignition[832]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service" Mar 17 18:35:22.086772 ignition[832]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service" Mar 17 18:35:22.108379 ignition[832]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service" Mar 17 18:35:22.110131 ignition[832]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service" Mar 17 18:35:22.110131 ignition[832]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 17 18:35:22.110131 ignition[832]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 17 18:35:22.110131 ignition[832]: INFO : files: files passed Mar 17 18:35:22.110131 ignition[832]: INFO : Ignition finished successfully Mar 17 18:35:22.117024 systemd[1]: Finished ignition-files.service. Mar 17 18:35:22.123218 kernel: kauditd_printk_skb: 24 callbacks suppressed Mar 17 18:35:22.123256 kernel: audit: type=1130 audit(1742236522.117:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:22.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:22.118503 systemd[1]: Starting initrd-setup-root-after-ignition.service... Mar 17 18:35:22.123617 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). 
Mar 17 18:35:22.124167 systemd[1]: Starting ignition-quench.service...
Mar 17 18:35:22.127749 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 17 18:35:22.127875 systemd[1]: Finished ignition-quench.service.
Mar 17 18:35:22.136432 kernel: audit: type=1130 audit(1742236522.129:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:22.136447 kernel: audit: type=1131 audit(1742236522.129:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:22.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:22.129000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:22.137995 initrd-setup-root-after-ignition[858]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
Mar 17 18:35:22.140578 initrd-setup-root-after-ignition[860]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 18:35:22.142608 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Mar 17 18:35:22.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:22.144611 systemd[1]: Reached target ignition-complete.target.
Mar 17 18:35:22.149218 kernel: audit: type=1130 audit(1742236522.144:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:22.149008 systemd[1]: Starting initrd-parse-etc.service...
Mar 17 18:35:22.161150 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 17 18:35:22.161270 systemd[1]: Finished initrd-parse-etc.service.
Mar 17 18:35:22.170046 kernel: audit: type=1130 audit(1742236522.161:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:22.170065 kernel: audit: type=1131 audit(1742236522.161:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:22.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:22.161000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:22.161911 systemd[1]: Reached target initrd-fs.target.
Mar 17 18:35:22.171745 systemd[1]: Reached target initrd.target.
Mar 17 18:35:22.173235 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Mar 17 18:35:22.174652 systemd[1]: Starting dracut-pre-pivot.service...
Mar 17 18:35:22.184989 systemd[1]: Finished dracut-pre-pivot.service.
Mar 17 18:35:22.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:22.187669 systemd[1]: Starting initrd-cleanup.service...
Mar 17 18:35:22.190407 kernel: audit: type=1130 audit(1742236522.186:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:22.198056 systemd[1]: Stopped target nss-lookup.target.
Mar 17 18:35:22.199743 systemd[1]: Stopped target remote-cryptsetup.target.
Mar 17 18:35:22.200305 systemd[1]: Stopped target timers.target.
Mar 17 18:35:22.202915 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 17 18:35:22.203079 systemd[1]: Stopped dracut-pre-pivot.service.
Mar 17 18:35:22.208077 kernel: audit: type=1131 audit(1742236522.203:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:22.203000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:22.203910 systemd[1]: Stopped target initrd.target.
Mar 17 18:35:22.209788 systemd[1]: Stopped target basic.target.
Mar 17 18:35:22.211373 systemd[1]: Stopped target ignition-complete.target.
Mar 17 18:35:22.213185 systemd[1]: Stopped target ignition-diskful.target.
Mar 17 18:35:22.213837 systemd[1]: Stopped target initrd-root-device.target.
Mar 17 18:35:22.215594 systemd[1]: Stopped target remote-fs.target.
Mar 17 18:35:22.217263 systemd[1]: Stopped target remote-fs-pre.target.
Mar 17 18:35:22.218751 systemd[1]: Stopped target sysinit.target.
Mar 17 18:35:22.220311 systemd[1]: Stopped target local-fs.target.
Mar 17 18:35:22.221720 systemd[1]: Stopped target local-fs-pre.target.
Mar 17 18:35:22.223134 systemd[1]: Stopped target swap.target.
Mar 17 18:35:22.224656 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 17 18:35:22.230080 kernel: audit: type=1131 audit(1742236522.225:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:22.225000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:22.224806 systemd[1]: Stopped dracut-pre-mount.service.
Mar 17 18:35:22.226215 systemd[1]: Stopped target cryptsetup.target.
Mar 17 18:35:22.235912 kernel: audit: type=1131 audit(1742236522.231:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:22.231000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:22.230757 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 17 18:35:22.230936 systemd[1]: Stopped dracut-initqueue.service.
Mar 17 18:35:22.237000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:22.232097 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 17 18:35:22.232234 systemd[1]: Stopped ignition-fetch-offline.service.
Mar 17 18:35:22.238030 systemd[1]: Stopped target paths.target.
Mar 17 18:35:22.238336 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 17 18:35:22.242759 systemd[1]: Stopped systemd-ask-password-console.path.
Mar 17 18:35:22.244681 systemd[1]: Stopped target slices.target.
Mar 17 18:35:22.246147 systemd[1]: Stopped target sockets.target.
Mar 17 18:35:22.247654 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 17 18:35:22.248878 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Mar 17 18:35:22.250000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:22.250806 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 17 18:35:22.251737 systemd[1]: Stopped ignition-files.service.
Mar 17 18:35:22.252000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:22.254026 systemd[1]: Stopping ignition-mount.service...
Mar 17 18:35:22.255555 systemd[1]: Stopping iscsid.service...
Mar 17 18:35:22.256822 iscsid[728]: iscsid shutting down.
Mar 17 18:35:22.257933 systemd[1]: Stopping sysroot-boot.service...
Mar 17 18:35:22.259361 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 17 18:35:22.260482 ignition[873]: INFO : Ignition 2.14.0
Mar 17 18:35:22.260482 ignition[873]: INFO : Stage: umount
Mar 17 18:35:22.260482 ignition[873]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 18:35:22.260482 ignition[873]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 18:35:22.263000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:22.260478 systemd[1]: Stopped systemd-udev-trigger.service.
Mar 17 18:35:22.265681 ignition[873]: INFO : umount: umount passed
Mar 17 18:35:22.265681 ignition[873]: INFO : Ignition finished successfully
Mar 17 18:35:22.264832 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 17 18:35:22.268000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:22.265660 systemd[1]: Stopped dracut-pre-trigger.service.
Mar 17 18:35:22.271133 systemd[1]: iscsid.service: Deactivated successfully.
Mar 17 18:35:22.271981 systemd[1]: Stopped iscsid.service.
Mar 17 18:35:22.273000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:22.273660 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 17 18:35:22.274584 systemd[1]: Stopped ignition-mount.service.
Mar 17 18:35:22.276000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:22.277396 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 17 18:35:22.278714 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 17 18:35:22.279563 systemd[1]: Closed iscsid.socket.
Mar 17 18:35:22.280938 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 17 18:35:22.280971 systemd[1]: Stopped ignition-disks.service.
Mar 17 18:35:22.282000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:22.283357 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 17 18:35:22.285000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:22.283402 systemd[1]: Stopped ignition-kargs.service.
Mar 17 18:35:22.285059 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 17 18:35:22.285089 systemd[1]: Stopped ignition-setup.service.
Mar 17 18:35:22.287000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:22.288172 systemd[1]: Stopping iscsiuio.service...
Mar 17 18:35:22.289677 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 17 18:35:22.290615 systemd[1]: Finished initrd-cleanup.service.
Mar 17 18:35:22.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:22.291000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:22.292265 systemd[1]: iscsiuio.service: Deactivated successfully.
Mar 17 18:35:22.293166 systemd[1]: Stopped iscsiuio.service.
Mar 17 18:35:22.294000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:22.294698 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 17 18:35:22.295588 systemd[1]: Stopped sysroot-boot.service.
Mar 17 18:35:22.296000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:22.297638 systemd[1]: Stopped target network.target.
Mar 17 18:35:22.299150 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 17 18:35:22.299177 systemd[1]: Closed iscsiuio.socket.
Mar 17 18:35:22.301262 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 17 18:35:22.301296 systemd[1]: Stopped initrd-setup-root.service.
Mar 17 18:35:22.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:22.303827 systemd[1]: Stopping systemd-networkd.service...
Mar 17 18:35:22.305421 systemd[1]: Stopping systemd-resolved.service...
Mar 17 18:35:22.310460 systemd-networkd[715]: eth0: DHCPv6 lease lost
Mar 17 18:35:22.311599 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 17 18:35:22.312000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:22.311675 systemd[1]: Stopped systemd-networkd.service.
Mar 17 18:35:22.314752 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 17 18:35:22.314785 systemd[1]: Closed systemd-networkd.socket.
Mar 17 18:35:22.317000 audit: BPF prog-id=9 op=UNLOAD
Mar 17 18:35:22.318206 systemd[1]: Stopping network-cleanup.service...
Mar 17 18:35:22.319803 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 17 18:35:22.320000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:22.319843 systemd[1]: Stopped parse-ip-for-networkd.service.
Mar 17 18:35:22.323000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:22.321815 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 17 18:35:22.324000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:22.321853 systemd[1]: Stopped systemd-sysctl.service.
Mar 17 18:35:22.323665 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 17 18:35:22.323695 systemd[1]: Stopped systemd-modules-load.service.
Mar 17 18:35:22.325685 systemd[1]: Stopping systemd-udevd.service...
Mar 17 18:35:22.330601 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 17 18:35:22.332196 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 17 18:35:22.332270 systemd[1]: Stopped systemd-resolved.service.
Mar 17 18:35:22.334000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:22.335425 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 17 18:35:22.336437 systemd[1]: Stopped systemd-udevd.service.
Mar 17 18:35:22.337000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:22.338772 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 17 18:35:22.339000 audit: BPF prog-id=6 op=UNLOAD
Mar 17 18:35:22.338812 systemd[1]: Closed systemd-udevd-control.socket.
Mar 17 18:35:22.340787 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 17 18:35:22.340813 systemd[1]: Closed systemd-udevd-kernel.socket.
Mar 17 18:35:22.344000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:22.342461 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 17 18:35:22.342491 systemd[1]: Stopped dracut-pre-udev.service.
Mar 17 18:35:22.344329 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 17 18:35:22.344359 systemd[1]: Stopped dracut-cmdline.service.
Mar 17 18:35:22.348000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:22.348855 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 18:35:22.349000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:22.348888 systemd[1]: Stopped dracut-cmdline-ask.service.
Mar 17 18:35:22.352007 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Mar 17 18:35:22.353732 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 17 18:35:22.355000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:22.353770 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Mar 17 18:35:22.356000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:22.355848 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 17 18:35:22.358000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:22.355884 systemd[1]: Stopped kmod-static-nodes.service.
Mar 17 18:35:22.357685 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 18:35:22.357716 systemd[1]: Stopped systemd-vconsole-setup.service.
Mar 17 18:35:22.362799 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 17 18:35:22.364525 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 17 18:35:22.364595 systemd[1]: Stopped network-cleanup.service.
Mar 17 18:35:22.366000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:22.367152 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 17 18:35:22.368233 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Mar 17 18:35:22.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:22.369000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:22.370071 systemd[1]: Reached target initrd-switch-root.target.
Mar 17 18:35:22.372285 systemd[1]: Starting initrd-switch-root.service...
Mar 17 18:35:22.387621 systemd[1]: Switching root.
Mar 17 18:35:22.408233 systemd-journald[198]: Journal stopped
Mar 17 18:35:29.083666 systemd-journald[198]: Received SIGTERM from PID 1 (n/a).
Mar 17 18:35:29.083730 kernel: SELinux: Class mctp_socket not defined in policy.
Mar 17 18:35:29.083750 kernel: SELinux: Class anon_inode not defined in policy.
Mar 17 18:35:29.083764 kernel: SELinux: the above unknown classes and permissions will be allowed
Mar 17 18:35:29.083777 kernel: SELinux: policy capability network_peer_controls=1
Mar 17 18:35:29.083794 kernel: SELinux: policy capability open_perms=1
Mar 17 18:35:29.083807 kernel: SELinux: policy capability extended_socket_class=1
Mar 17 18:35:29.083823 kernel: SELinux: policy capability always_check_network=0
Mar 17 18:35:29.083836 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 17 18:35:29.083853 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 17 18:35:29.083866 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 17 18:35:29.083879 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 17 18:35:29.083893 systemd[1]: Successfully loaded SELinux policy in 51.379ms.
Mar 17 18:35:29.083915 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.401ms.
Mar 17 18:35:29.083944 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Mar 17 18:35:29.083960 systemd[1]: Detected virtualization kvm.
Mar 17 18:35:29.083977 systemd[1]: Detected architecture x86-64.
Mar 17 18:35:29.083991 systemd[1]: Detected first boot.
Mar 17 18:35:29.084005 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 18:35:29.084019 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Mar 17 18:35:29.084041 systemd[1]: Populated /etc with preset unit settings.
Mar 17 18:35:29.084061 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Mar 17 18:35:29.084077 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Mar 17 18:35:29.084098 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 18:35:29.084115 kernel: kauditd_printk_skb: 48 callbacks suppressed
Mar 17 18:35:29.084128 kernel: audit: type=1334 audit(1742236528.689:86): prog-id=12 op=LOAD
Mar 17 18:35:29.084141 kernel: audit: type=1334 audit(1742236528.690:87): prog-id=3 op=UNLOAD
Mar 17 18:35:29.084155 kernel: audit: type=1334 audit(1742236528.692:88): prog-id=13 op=LOAD
Mar 17 18:35:29.084169 kernel: audit: type=1334 audit(1742236528.694:89): prog-id=14 op=LOAD
Mar 17 18:35:29.084182 kernel: audit: type=1334 audit(1742236528.694:90): prog-id=4 op=UNLOAD
Mar 17 18:35:29.084195 kernel: audit: type=1334 audit(1742236528.694:91): prog-id=5 op=UNLOAD
Mar 17 18:35:29.084212 kernel: audit: type=1334 audit(1742236528.698:92): prog-id=15 op=LOAD
Mar 17 18:35:29.084225 kernel: audit: type=1334 audit(1742236528.698:93): prog-id=12 op=UNLOAD
Mar 17 18:35:29.084238 kernel: audit: type=1334 audit(1742236528.711:94): prog-id=16 op=LOAD
Mar 17 18:35:29.084251 kernel: audit: type=1334 audit(1742236528.713:95): prog-id=17 op=LOAD
Mar 17 18:35:29.084266 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 17 18:35:29.084282 systemd[1]: Stopped initrd-switch-root.service.
Mar 17 18:35:29.084297 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 17 18:35:29.084311 systemd[1]: Created slice system-addon\x2dconfig.slice.
Mar 17 18:35:29.084326 systemd[1]: Created slice system-addon\x2drun.slice.
Mar 17 18:35:29.084343 systemd[1]: Created slice system-getty.slice.
Mar 17 18:35:29.084366 systemd[1]: Created slice system-modprobe.slice.
Mar 17 18:35:29.084382 systemd[1]: Created slice system-serial\x2dgetty.slice.
Mar 17 18:35:29.084418 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Mar 17 18:35:29.084433 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Mar 17 18:35:29.084448 systemd[1]: Created slice user.slice.
Mar 17 18:35:29.084462 systemd[1]: Started systemd-ask-password-console.path.
Mar 17 18:35:29.084476 systemd[1]: Started systemd-ask-password-wall.path.
Mar 17 18:35:29.084494 systemd[1]: Set up automount boot.automount.
Mar 17 18:35:29.084511 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Mar 17 18:35:29.084529 systemd[1]: Stopped target initrd-switch-root.target.
Mar 17 18:35:29.084547 systemd[1]: Stopped target initrd-fs.target.
Mar 17 18:35:29.084564 systemd[1]: Stopped target initrd-root-fs.target.
Mar 17 18:35:29.084581 systemd[1]: Reached target integritysetup.target.
Mar 17 18:35:29.084599 systemd[1]: Reached target remote-cryptsetup.target.
Mar 17 18:35:29.084617 systemd[1]: Reached target remote-fs.target.
Mar 17 18:35:29.084635 systemd[1]: Reached target slices.target.
Mar 17 18:35:29.084656 systemd[1]: Reached target swap.target.
Mar 17 18:35:29.084674 systemd[1]: Reached target torcx.target.
Mar 17 18:35:29.084692 systemd[1]: Reached target veritysetup.target.
Mar 17 18:35:29.084710 systemd[1]: Listening on systemd-coredump.socket.
Mar 17 18:35:29.084726 systemd[1]: Listening on systemd-initctl.socket.
Mar 17 18:35:29.084744 systemd[1]: Listening on systemd-networkd.socket.
Mar 17 18:35:29.084758 systemd[1]: Listening on systemd-udevd-control.socket.
Mar 17 18:35:29.084774 systemd[1]: Listening on systemd-udevd-kernel.socket.
Mar 17 18:35:29.084789 systemd[1]: Listening on systemd-userdbd.socket.
Mar 17 18:35:29.084804 systemd[1]: Mounting dev-hugepages.mount... Mar 17 18:35:29.084821 systemd[1]: Mounting dev-mqueue.mount... Mar 17 18:35:29.084835 systemd[1]: Mounting media.mount... Mar 17 18:35:29.084850 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 18:35:29.084865 systemd[1]: Mounting sys-kernel-debug.mount... Mar 17 18:35:29.084880 systemd[1]: Mounting sys-kernel-tracing.mount... Mar 17 18:35:29.084896 systemd[1]: Mounting tmp.mount... Mar 17 18:35:29.084910 systemd[1]: Starting flatcar-tmpfiles.service... Mar 17 18:35:29.084924 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:35:29.084957 systemd[1]: Starting kmod-static-nodes.service... Mar 17 18:35:29.084975 systemd[1]: Starting modprobe@configfs.service... Mar 17 18:35:29.084989 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:35:29.085003 systemd[1]: Starting modprobe@drm.service... Mar 17 18:35:29.085018 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:35:29.085032 systemd[1]: Starting modprobe@fuse.service... Mar 17 18:35:29.085046 systemd[1]: Starting modprobe@loop.service... Mar 17 18:35:29.085060 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 17 18:35:29.085082 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 17 18:35:29.085098 systemd[1]: Stopped systemd-fsck-root.service. Mar 17 18:35:29.085112 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 17 18:35:29.085126 systemd[1]: Stopped systemd-fsck-usr.service. Mar 17 18:35:29.085139 kernel: fuse: init (API version 7.34) Mar 17 18:35:29.085153 systemd[1]: Stopped systemd-journald.service. Mar 17 18:35:29.085167 systemd[1]: Starting systemd-journald.service... 
Mar 17 18:35:29.085181 kernel: loop: module loaded Mar 17 18:35:29.085195 systemd[1]: Starting systemd-modules-load.service... Mar 17 18:35:29.085210 systemd[1]: Starting systemd-network-generator.service... Mar 17 18:35:29.085224 systemd[1]: Starting systemd-remount-fs.service... Mar 17 18:35:29.085240 systemd[1]: Starting systemd-udev-trigger.service... Mar 17 18:35:29.085255 systemd[1]: verity-setup.service: Deactivated successfully. Mar 17 18:35:29.085268 systemd[1]: Stopped verity-setup.service. Mar 17 18:35:29.085282 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 18:35:29.085306 systemd-journald[979]: Journal started Mar 17 18:35:29.085366 systemd-journald[979]: Runtime Journal (/run/log/journal/e086294918f94f8a8f622bab8f8886b8) is 6.0M, max 48.5M, 42.5M free. Mar 17 18:35:22.479000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 17 18:35:22.522000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Mar 17 18:35:22.523000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Mar 17 18:35:22.523000 audit: BPF prog-id=10 op=LOAD Mar 17 18:35:22.523000 audit: BPF prog-id=10 op=UNLOAD Mar 17 18:35:22.523000 audit: BPF prog-id=11 op=LOAD Mar 17 18:35:22.523000 audit: BPF prog-id=11 op=UNLOAD Mar 17 18:35:22.554000 audit[906]: AVC avc: denied { associate } for pid=906 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Mar 17 18:35:22.554000 audit[906]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001178dc 
a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=889 pid=906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:35:22.554000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Mar 17 18:35:22.556000 audit[906]: AVC avc: denied { associate } for pid=906 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Mar 17 18:35:22.556000 audit[906]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001179b5 a2=1ed a3=0 items=2 ppid=889 pid=906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:35:22.556000 audit: CWD cwd="/" Mar 17 18:35:22.556000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:22.556000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:22.556000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Mar 17 18:35:28.689000 
audit: BPF prog-id=12 op=LOAD Mar 17 18:35:28.690000 audit: BPF prog-id=3 op=UNLOAD Mar 17 18:35:28.692000 audit: BPF prog-id=13 op=LOAD Mar 17 18:35:28.694000 audit: BPF prog-id=14 op=LOAD Mar 17 18:35:28.694000 audit: BPF prog-id=4 op=UNLOAD Mar 17 18:35:28.694000 audit: BPF prog-id=5 op=UNLOAD Mar 17 18:35:28.698000 audit: BPF prog-id=15 op=LOAD Mar 17 18:35:28.698000 audit: BPF prog-id=12 op=UNLOAD Mar 17 18:35:28.711000 audit: BPF prog-id=16 op=LOAD Mar 17 18:35:28.713000 audit: BPF prog-id=17 op=LOAD Mar 17 18:35:28.713000 audit: BPF prog-id=13 op=UNLOAD Mar 17 18:35:28.713000 audit: BPF prog-id=14 op=UNLOAD Mar 17 18:35:28.715000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:28.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:28.729000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:28.762000 audit: BPF prog-id=15 op=UNLOAD Mar 17 18:35:29.020000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:29.028000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:35:29.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:29.031000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:29.032000 audit: BPF prog-id=18 op=LOAD Mar 17 18:35:29.033000 audit: BPF prog-id=19 op=LOAD Mar 17 18:35:29.034000 audit: BPF prog-id=20 op=LOAD Mar 17 18:35:29.034000 audit: BPF prog-id=16 op=UNLOAD Mar 17 18:35:29.034000 audit: BPF prog-id=17 op=UNLOAD Mar 17 18:35:29.081000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Mar 17 18:35:29.081000 audit[979]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffc3791cf20 a2=4000 a3=7ffc3791cfbc items=0 ppid=1 pid=979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:35:29.081000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Mar 17 18:35:29.083000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:28.685405 systemd[1]: Queued start job for default target multi-user.target. 
Mar 17 18:35:22.553179 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2025-03-17T18:35:22Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 18:35:28.685423 systemd[1]: Unnecessary job was removed for dev-vda6.device. Mar 17 18:35:22.553470 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2025-03-17T18:35:22Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Mar 17 18:35:28.714332 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 17 18:35:22.553555 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2025-03-17T18:35:22Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Mar 17 18:35:22.553589 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2025-03-17T18:35:22Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Mar 17 18:35:22.553600 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2025-03-17T18:35:22Z" level=debug msg="skipped missing lower profile" missing profile=oem Mar 17 18:35:22.553632 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2025-03-17T18:35:22Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Mar 17 18:35:22.553645 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2025-03-17T18:35:22Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Mar 17 18:35:22.554014 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2025-03-17T18:35:22Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Mar 17 18:35:22.554063 /usr/lib/systemd/system-generators/torcx-generator[906]: 
time="2025-03-17T18:35:22Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Mar 17 18:35:22.554076 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2025-03-17T18:35:22Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Mar 17 18:35:22.554502 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2025-03-17T18:35:22Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Mar 17 18:35:22.554541 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2025-03-17T18:35:22Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Mar 17 18:35:22.554562 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2025-03-17T18:35:22Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7 Mar 17 18:35:22.554579 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2025-03-17T18:35:22Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Mar 17 18:35:22.554596 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2025-03-17T18:35:22Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7 Mar 17 18:35:22.554612 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2025-03-17T18:35:22Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Mar 17 18:35:27.993102 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2025-03-17T18:35:27Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker 
reference=com.coreos.cl Mar 17 18:35:27.993571 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2025-03-17T18:35:27Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 18:35:27.993770 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2025-03-17T18:35:27Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 18:35:27.994036 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2025-03-17T18:35:27Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 18:35:27.994109 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2025-03-17T18:35:27Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Mar 17 18:35:27.994208 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2025-03-17T18:35:27Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Mar 17 18:35:29.091215 systemd[1]: Started systemd-journald.service. 
Mar 17 18:35:29.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:29.091460 systemd[1]: Mounted dev-hugepages.mount. Mar 17 18:35:29.092413 systemd[1]: Mounted dev-mqueue.mount. Mar 17 18:35:29.093329 systemd[1]: Mounted media.mount. Mar 17 18:35:29.094426 systemd[1]: Mounted sys-kernel-debug.mount. Mar 17 18:35:29.095592 systemd[1]: Mounted sys-kernel-tracing.mount. Mar 17 18:35:29.096820 systemd[1]: Mounted tmp.mount. Mar 17 18:35:29.098146 systemd[1]: Finished flatcar-tmpfiles.service. Mar 17 18:35:29.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:29.099881 systemd[1]: Finished kmod-static-nodes.service. Mar 17 18:35:29.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:29.101574 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 17 18:35:29.101759 systemd[1]: Finished modprobe@configfs.service. Mar 17 18:35:29.103232 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:35:29.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:29.102000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:35:29.103608 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:35:29.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:29.103000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:29.105013 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 18:35:29.105207 systemd[1]: Finished modprobe@drm.service. Mar 17 18:35:29.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:29.105000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:29.106763 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:35:29.107989 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:35:29.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:29.123000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:29.124627 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 17 18:35:29.124989 systemd[1]: Finished modprobe@fuse.service. 
Mar 17 18:35:29.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:29.125000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:29.126228 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:35:29.126425 systemd[1]: Finished modprobe@loop.service. Mar 17 18:35:29.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:29.126000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:29.127804 systemd[1]: Finished systemd-modules-load.service. Mar 17 18:35:29.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:29.129427 systemd[1]: Finished systemd-network-generator.service. Mar 17 18:35:29.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:29.130830 systemd[1]: Finished systemd-remount-fs.service. 
Mar 17 18:35:29.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:29.132599 systemd[1]: Reached target network-pre.target. Mar 17 18:35:29.135083 systemd[1]: Mounting sys-fs-fuse-connections.mount... Mar 17 18:35:29.137493 systemd[1]: Mounting sys-kernel-config.mount... Mar 17 18:35:29.138667 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 17 18:35:29.142675 systemd[1]: Starting systemd-hwdb-update.service... Mar 17 18:35:29.147008 systemd[1]: Starting systemd-journal-flush.service... Mar 17 18:35:29.148073 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:35:29.149302 systemd[1]: Starting systemd-random-seed.service... Mar 17 18:35:29.150474 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 18:35:29.151659 systemd[1]: Starting systemd-sysctl.service... Mar 17 18:35:29.153990 systemd[1]: Starting systemd-sysusers.service... Mar 17 18:35:29.155718 systemd-journald[979]: Time spent on flushing to /var/log/journal/e086294918f94f8a8f622bab8f8886b8 is 75.668ms for 1151 entries. Mar 17 18:35:29.155718 systemd-journald[979]: System Journal (/var/log/journal/e086294918f94f8a8f622bab8f8886b8) is 8.0M, max 195.6M, 187.6M free. Mar 17 18:35:29.256603 systemd-journald[979]: Received client request to flush runtime journal. Mar 17 18:35:29.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:35:29.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:29.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:29.161332 systemd[1]: Mounted sys-fs-fuse-connections.mount. Mar 17 18:35:29.162881 systemd[1]: Mounted sys-kernel-config.mount. Mar 17 18:35:29.174135 systemd[1]: Finished systemd-random-seed.service. Mar 17 18:35:29.242373 systemd[1]: Finished systemd-udev-trigger.service. Mar 17 18:35:29.244009 systemd[1]: Finished systemd-sysctl.service. Mar 17 18:35:29.245646 systemd[1]: Reached target first-boot-complete.target. Mar 17 18:35:29.249031 systemd[1]: Starting systemd-udev-settle.service... Mar 17 18:35:29.257824 systemd[1]: Finished systemd-journal-flush.service. Mar 17 18:35:29.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:29.259439 systemd[1]: Finished systemd-sysusers.service. Mar 17 18:35:29.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:29.262223 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Mar 17 18:35:29.270325 udevadm[1010]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Mar 17 18:35:29.305994 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. 
Mar 17 18:35:29.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:29.988775 systemd[1]: Finished systemd-hwdb-update.service. Mar 17 18:35:29.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:29.990000 audit: BPF prog-id=21 op=LOAD Mar 17 18:35:29.990000 audit: BPF prog-id=22 op=LOAD Mar 17 18:35:29.990000 audit: BPF prog-id=7 op=UNLOAD Mar 17 18:35:29.990000 audit: BPF prog-id=8 op=UNLOAD Mar 17 18:35:29.991983 systemd[1]: Starting systemd-udevd.service... Mar 17 18:35:30.014230 systemd-udevd[1014]: Using default interface naming scheme 'v252'. Mar 17 18:35:30.029383 systemd[1]: Started systemd-udevd.service. Mar 17 18:35:30.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:30.031000 audit: BPF prog-id=23 op=LOAD Mar 17 18:35:30.032594 systemd[1]: Starting systemd-networkd.service... Mar 17 18:35:30.040224 systemd[1]: Starting systemd-userdbd.service... Mar 17 18:35:30.038000 audit: BPF prog-id=24 op=LOAD Mar 17 18:35:30.039000 audit: BPF prog-id=25 op=LOAD Mar 17 18:35:30.039000 audit: BPF prog-id=26 op=LOAD Mar 17 18:35:30.067608 systemd[1]: Started systemd-userdbd.service. Mar 17 18:35:30.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:30.070407 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. 
Mar 17 18:35:30.089237 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Mar 17 18:35:30.123481 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Mar 17 18:35:30.124112 systemd-networkd[1021]: lo: Link UP Mar 17 18:35:30.124323 systemd-networkd[1021]: lo: Gained carrier Mar 17 18:35:30.124873 systemd-networkd[1021]: Enumeration completed Mar 17 18:35:30.125046 systemd[1]: Started systemd-networkd.service. Mar 17 18:35:30.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:30.125060 systemd-networkd[1021]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 18:35:30.127366 systemd-networkd[1021]: eth0: Link UP Mar 17 18:35:30.127377 systemd-networkd[1021]: eth0: Gained carrier Mar 17 18:35:30.128434 kernel: ACPI: button: Power Button [PWRF] Mar 17 18:35:30.138539 systemd-networkd[1021]: eth0: DHCPv4 address 10.0.0.22/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 17 18:35:30.140000 audit[1027]: AVC avc: denied { confidentiality } for pid=1027 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Mar 17 18:35:30.140000 audit[1027]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55f671356980 a1=338ac a2=7fdbcfab4bc5 a3=5 items=110 ppid=1014 pid=1027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:35:30.140000 audit: CWD cwd="/" Mar 17 18:35:30.140000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: 
PATH item=1 name=(null) inode=15458 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=2 name=(null) inode=15458 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=3 name=(null) inode=15459 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=4 name=(null) inode=15458 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=5 name=(null) inode=15460 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=6 name=(null) inode=15458 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=7 name=(null) inode=15461 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=8 name=(null) inode=15461 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=9 name=(null) inode=15462 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=10 name=(null) inode=15461 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=11 name=(null) inode=15463 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=12 name=(null) inode=15461 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=13 name=(null) inode=15464 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=14 name=(null) inode=15461 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=15 name=(null) inode=15465 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=16 name=(null) inode=15461 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=17 name=(null) inode=15466 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=18 name=(null) inode=15458 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=19 name=(null) inode=15467 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=20 name=(null) inode=15467 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=21 name=(null) inode=15468 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=22 name=(null) inode=15467 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=23 name=(null) inode=15469 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=24 name=(null) inode=15467 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=25 name=(null) inode=15470 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=26 name=(null) inode=15467 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=27 name=(null) inode=15471 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=28 name=(null) inode=15467 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=29 name=(null) inode=15472 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=30 name=(null) inode=15458 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=31 name=(null) inode=15473 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=32 name=(null) inode=15473 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=33 name=(null) inode=15474 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=34 name=(null) inode=15473 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=35 name=(null) inode=15475 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=36 name=(null) inode=15473 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=37 name=(null) inode=15476 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=38 name=(null) inode=15473 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=39 name=(null) inode=15477 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=40 name=(null) inode=15473 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=41 name=(null) inode=15478 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=42 name=(null) inode=15458 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=43 name=(null) inode=15479 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=44 name=(null) inode=15479 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=45 name=(null) inode=15480 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=46 name=(null) inode=15479 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 
18:35:30.140000 audit: PATH item=47 name=(null) inode=15481 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=48 name=(null) inode=15479 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=49 name=(null) inode=15482 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=50 name=(null) inode=15479 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=51 name=(null) inode=15483 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=52 name=(null) inode=15479 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=53 name=(null) inode=15484 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=55 name=(null) inode=15485 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=56 
name=(null) inode=15485 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=57 name=(null) inode=15486 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=58 name=(null) inode=15485 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=59 name=(null) inode=15487 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=60 name=(null) inode=15485 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=61 name=(null) inode=15488 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=62 name=(null) inode=15488 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=63 name=(null) inode=15489 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=64 name=(null) inode=15488 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=65 name=(null) inode=15490 dev=00:0b 
mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=66 name=(null) inode=15488 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=67 name=(null) inode=15491 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=68 name=(null) inode=15488 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=69 name=(null) inode=15492 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=70 name=(null) inode=15488 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=71 name=(null) inode=15493 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=72 name=(null) inode=15485 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=73 name=(null) inode=15494 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=74 name=(null) inode=15494 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=75 name=(null) inode=15495 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=76 name=(null) inode=15494 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=77 name=(null) inode=15496 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=78 name=(null) inode=15494 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=79 name=(null) inode=15497 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=80 name=(null) inode=15494 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=81 name=(null) inode=15498 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=82 name=(null) inode=15494 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=83 name=(null) inode=15499 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=84 name=(null) inode=15485 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=85 name=(null) inode=15500 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=86 name=(null) inode=15500 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=87 name=(null) inode=15501 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=88 name=(null) inode=15500 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=89 name=(null) inode=15502 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=90 name=(null) inode=15500 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=91 name=(null) inode=15503 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=92 name=(null) inode=15500 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=93 name=(null) inode=15504 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=94 name=(null) inode=15500 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=95 name=(null) inode=15505 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=96 name=(null) inode=15485 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=97 name=(null) inode=15506 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=98 name=(null) inode=15506 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=99 name=(null) inode=15507 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=100 name=(null) inode=15506 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=101 name=(null) inode=15508 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 
18:35:30.140000 audit: PATH item=102 name=(null) inode=15506 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=103 name=(null) inode=15509 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=104 name=(null) inode=15506 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=105 name=(null) inode=15510 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=106 name=(null) inode=15506 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=107 name=(null) inode=15511 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PATH item=109 name=(null) inode=15512 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:35:30.140000 audit: PROCTITLE proctitle="(udev-worker)" Mar 17 18:35:30.164417 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Mar 17 18:35:30.173595 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI 
interrupt Mar 17 18:35:30.176086 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Mar 17 18:35:30.176278 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 17 18:35:30.176447 kernel: mousedev: PS/2 mouse device common for all mice Mar 17 18:35:30.236858 kernel: kvm: Nested Virtualization enabled Mar 17 18:35:30.237027 kernel: SVM: kvm: Nested Paging enabled Mar 17 18:35:30.237054 kernel: SVM: Virtual VMLOAD VMSAVE supported Mar 17 18:35:30.237075 kernel: SVM: Virtual GIF supported Mar 17 18:35:30.253435 kernel: EDAC MC: Ver: 3.0.0 Mar 17 18:35:30.281005 systemd[1]: Finished systemd-udev-settle.service. Mar 17 18:35:30.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:30.283636 systemd[1]: Starting lvm2-activation-early.service... Mar 17 18:35:30.293967 lvm[1049]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 18:35:30.326669 systemd[1]: Finished lvm2-activation-early.service. Mar 17 18:35:30.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:30.327811 systemd[1]: Reached target cryptsetup.target. Mar 17 18:35:30.329786 systemd[1]: Starting lvm2-activation.service... Mar 17 18:35:30.334777 lvm[1050]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 18:35:30.364760 systemd[1]: Finished lvm2-activation.service. Mar 17 18:35:30.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:35:30.366066 systemd[1]: Reached target local-fs-pre.target. Mar 17 18:35:30.367173 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 17 18:35:30.367205 systemd[1]: Reached target local-fs.target. Mar 17 18:35:30.368239 systemd[1]: Reached target machines.target. Mar 17 18:35:30.370578 systemd[1]: Starting ldconfig.service... Mar 17 18:35:30.371742 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:35:30.371807 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:35:30.372724 systemd[1]: Starting systemd-boot-update.service... Mar 17 18:35:30.374346 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Mar 17 18:35:30.376725 systemd[1]: Starting systemd-machine-id-commit.service... Mar 17 18:35:30.378696 systemd[1]: Starting systemd-sysext.service... Mar 17 18:35:30.379907 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1052 (bootctl) Mar 17 18:35:30.380841 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Mar 17 18:35:30.388025 systemd[1]: Unmounting usr-share-oem.mount... Mar 17 18:35:30.392854 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Mar 17 18:35:30.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:30.393429 systemd[1]: usr-share-oem.mount: Deactivated successfully. Mar 17 18:35:30.393599 systemd[1]: Unmounted usr-share-oem.mount. 
Mar 17 18:35:30.403420 kernel: loop0: detected capacity change from 0 to 205544 Mar 17 18:35:30.427725 systemd-fsck[1061]: fsck.fat 4.2 (2021-01-31) Mar 17 18:35:30.427725 systemd-fsck[1061]: /dev/vda1: 789 files, 119299/258078 clusters Mar 17 18:35:30.429803 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Mar 17 18:35:30.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:30.432426 systemd[1]: Mounting boot.mount... Mar 17 18:35:30.448653 systemd[1]: Mounted boot.mount. Mar 17 18:35:30.463598 systemd[1]: Finished systemd-boot-update.service. Mar 17 18:35:30.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:30.496719 ldconfig[1051]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 17 18:35:31.168429 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 17 18:35:31.200071 systemd-networkd[1021]: eth0: Gained IPv6LL Mar 17 18:35:31.215175 systemd[1]: Finished ldconfig.service. Mar 17 18:35:31.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:31.246945 kernel: loop1: detected capacity change from 0 to 205544 Mar 17 18:35:31.251970 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 17 18:35:31.253510 systemd[1]: Finished systemd-machine-id-commit.service. 
Mar 17 18:35:31.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:31.264034 (sd-sysext)[1065]: Using extensions 'kubernetes'. Mar 17 18:35:31.264584 (sd-sysext)[1065]: Merged extensions into '/usr'. Mar 17 18:35:31.287463 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 18:35:31.289660 systemd[1]: Mounting usr-share-oem.mount... Mar 17 18:35:31.290972 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:35:31.293102 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:35:31.296001 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:35:31.298861 systemd[1]: Starting modprobe@loop.service... Mar 17 18:35:31.302772 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:35:31.302990 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:35:31.303154 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 18:35:31.306836 systemd[1]: Mounted usr-share-oem.mount. Mar 17 18:35:31.308698 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:35:31.308908 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:35:31.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:35:31.309000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:31.310667 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:35:31.310832 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:35:31.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:31.311000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:31.312587 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:35:31.312759 systemd[1]: Finished modprobe@loop.service. Mar 17 18:35:31.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:31.313000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:31.314595 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:35:31.314726 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 18:35:31.315956 systemd[1]: Finished systemd-sysext.service. 
Mar 17 18:35:31.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:31.319271 systemd[1]: Starting ensure-sysext.service... Mar 17 18:35:31.321985 systemd[1]: Starting systemd-tmpfiles-setup.service... Mar 17 18:35:31.326846 systemd[1]: Reloading. Mar 17 18:35:31.338147 systemd-tmpfiles[1072]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Mar 17 18:35:31.339547 systemd-tmpfiles[1072]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 17 18:35:31.342918 systemd-tmpfiles[1072]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 17 18:35:31.385851 /usr/lib/systemd/system-generators/torcx-generator[1091]: time="2025-03-17T18:35:31Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 18:35:31.385909 /usr/lib/systemd/system-generators/torcx-generator[1091]: time="2025-03-17T18:35:31Z" level=info msg="torcx already run" Mar 17 18:35:31.489888 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 18:35:31.489911 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 18:35:31.515430 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Mar 17 18:35:31.590000 audit: BPF prog-id=27 op=LOAD Mar 17 18:35:31.591000 audit: BPF prog-id=28 op=LOAD Mar 17 18:35:31.591000 audit: BPF prog-id=21 op=UNLOAD Mar 17 18:35:31.591000 audit: BPF prog-id=22 op=UNLOAD Mar 17 18:35:31.592000 audit: BPF prog-id=29 op=LOAD Mar 17 18:35:31.592000 audit: BPF prog-id=18 op=UNLOAD Mar 17 18:35:31.592000 audit: BPF prog-id=30 op=LOAD Mar 17 18:35:31.592000 audit: BPF prog-id=31 op=LOAD Mar 17 18:35:31.592000 audit: BPF prog-id=19 op=UNLOAD Mar 17 18:35:31.592000 audit: BPF prog-id=20 op=UNLOAD Mar 17 18:35:31.593000 audit: BPF prog-id=32 op=LOAD Mar 17 18:35:31.593000 audit: BPF prog-id=24 op=UNLOAD Mar 17 18:35:31.594000 audit: BPF prog-id=33 op=LOAD Mar 17 18:35:31.594000 audit: BPF prog-id=34 op=LOAD Mar 17 18:35:31.594000 audit: BPF prog-id=25 op=UNLOAD Mar 17 18:35:31.594000 audit: BPF prog-id=26 op=UNLOAD Mar 17 18:35:31.597000 audit: BPF prog-id=35 op=LOAD Mar 17 18:35:31.597000 audit: BPF prog-id=23 op=UNLOAD Mar 17 18:35:31.603373 systemd[1]: Finished systemd-tmpfiles-setup.service. Mar 17 18:35:31.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:31.609814 systemd[1]: Starting audit-rules.service... Mar 17 18:35:31.612458 systemd[1]: Starting clean-ca-certificates.service... Mar 17 18:35:31.615371 systemd[1]: Starting systemd-journal-catalog-update.service... Mar 17 18:35:31.618000 audit: BPF prog-id=36 op=LOAD Mar 17 18:35:31.619723 systemd[1]: Starting systemd-resolved.service... Mar 17 18:35:31.621000 audit: BPF prog-id=37 op=LOAD Mar 17 18:35:31.623194 systemd[1]: Starting systemd-timesyncd.service... Mar 17 18:35:31.625687 systemd[1]: Starting systemd-update-utmp.service... Mar 17 18:35:31.627431 systemd[1]: Finished clean-ca-certificates.service. 
Mar 17 18:35:31.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:31.631693 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 18:35:31.634149 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 18:35:31.634597 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:35:31.636549 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:35:31.636000 audit[1145]: SYSTEM_BOOT pid=1145 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Mar 17 18:35:31.639459 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:35:31.643507 systemd[1]: Starting modprobe@loop.service... Mar 17 18:35:31.644738 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:35:31.644913 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:35:31.645066 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 18:35:31.645170 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 18:35:31.646626 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:35:31.646835 systemd[1]: Finished modprobe@dm_mod.service. 
Mar 17 18:35:31.647000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:31.647000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:31.648757 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:35:31.648940 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:35:31.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:31.649000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:31.650722 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:35:31.650888 systemd[1]: Finished modprobe@loop.service. Mar 17 18:35:31.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:31.652000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:31.658153 systemd[1]: Finished systemd-journal-catalog-update.service. 
Mar 17 18:35:31.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:31.660816 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 18:35:31.661146 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:35:31.664096 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:35:31.666934 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:35:31.669620 systemd[1]: Starting modprobe@loop.service... Mar 17 18:35:31.670755 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:35:31.670919 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:35:31.672670 systemd[1]: Starting systemd-update-done.service... Mar 17 18:35:31.673875 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 18:35:31.674015 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Mar 17 18:35:31.677000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Mar 17 18:35:31.677000 audit[1161]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe8be56140 a2=420 a3=0 items=0 ppid=1134 pid=1161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:35:31.677000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Mar 17 18:35:31.678586 augenrules[1161]: No rules Mar 17 18:35:31.679626 systemd[1]: Finished systemd-update-utmp.service. Mar 17 18:35:31.681588 systemd[1]: Finished audit-rules.service. Mar 17 18:35:31.683186 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:35:31.683353 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:35:31.685398 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:35:31.685612 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:35:31.687381 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:35:31.687582 systemd[1]: Finished modprobe@loop.service. Mar 17 18:35:31.689285 systemd[1]: Finished systemd-update-done.service. Mar 17 18:35:31.693972 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 18:35:31.694260 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:35:31.696019 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:35:31.699685 systemd[1]: Starting modprobe@drm.service... Mar 17 18:35:31.701959 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:35:31.704483 systemd[1]: Starting modprobe@loop.service... Mar 17 18:35:31.705499 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Mar 17 18:35:31.705611 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:35:31.707187 systemd[1]: Starting systemd-networkd-wait-online.service... Mar 17 18:35:31.709735 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 18:35:31.709976 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 18:35:31.711648 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:35:31.711849 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:35:31.713685 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 18:35:31.713859 systemd[1]: Finished modprobe@drm.service. Mar 17 18:35:31.715560 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:35:31.715755 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:35:31.717731 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:35:31.717900 systemd[1]: Finished modprobe@loop.service. Mar 17 18:35:31.718734 systemd-timesyncd[1141]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 17 18:35:31.718792 systemd-timesyncd[1141]: Initial clock synchronization to Mon 2025-03-17 18:35:32.086154 UTC. Mar 17 18:35:31.719661 systemd[1]: Started systemd-timesyncd.service. Mar 17 18:35:31.722539 systemd[1]: Finished systemd-networkd-wait-online.service. Mar 17 18:35:31.724767 systemd[1]: Reached target time-set.target. Mar 17 18:35:31.726004 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:35:31.726057 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. 
Mar 17 18:35:31.726685 systemd[1]: Finished ensure-sysext.service. Mar 17 18:35:31.732716 systemd-resolved[1138]: Positive Trust Anchors: Mar 17 18:35:31.732741 systemd-resolved[1138]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 18:35:31.732782 systemd-resolved[1138]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Mar 17 18:35:31.744096 systemd-resolved[1138]: Defaulting to hostname 'linux'. Mar 17 18:35:31.746208 systemd[1]: Started systemd-resolved.service. Mar 17 18:35:31.747543 systemd[1]: Reached target network.target. Mar 17 18:35:31.748537 systemd[1]: Reached target network-online.target. Mar 17 18:35:31.749683 systemd[1]: Reached target nss-lookup.target. Mar 17 18:35:31.750757 systemd[1]: Reached target sysinit.target. Mar 17 18:35:31.752046 systemd[1]: Started motdgen.path. Mar 17 18:35:31.753019 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Mar 17 18:35:31.754645 systemd[1]: Started logrotate.timer. Mar 17 18:35:31.755763 systemd[1]: Started mdadm.timer. Mar 17 18:35:31.756852 systemd[1]: Started systemd-tmpfiles-clean.timer. Mar 17 18:35:31.757972 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 17 18:35:31.758008 systemd[1]: Reached target paths.target. Mar 17 18:35:31.759050 systemd[1]: Reached target timers.target. Mar 17 18:35:31.763182 systemd[1]: Listening on dbus.socket. Mar 17 18:35:31.765803 systemd[1]: Starting docker.socket... Mar 17 18:35:31.772471 systemd[1]: Listening on sshd.socket. 
Mar 17 18:35:31.774027 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:35:31.774682 systemd[1]: Listening on docker.socket. Mar 17 18:35:31.775795 systemd[1]: Reached target sockets.target. Mar 17 18:35:31.776924 systemd[1]: Reached target basic.target. Mar 17 18:35:31.777997 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Mar 17 18:35:31.778039 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Mar 17 18:35:31.779660 systemd[1]: Starting containerd.service... Mar 17 18:35:31.781992 systemd[1]: Starting dbus.service... Mar 17 18:35:31.784324 systemd[1]: Starting enable-oem-cloudinit.service... Mar 17 18:35:31.786768 systemd[1]: Starting extend-filesystems.service... Mar 17 18:35:31.788146 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Mar 17 18:35:31.789979 systemd[1]: Starting kubelet.service... Mar 17 18:35:31.791665 jq[1176]: false Mar 17 18:35:31.792425 systemd[1]: Starting motdgen.service... Mar 17 18:35:31.794489 systemd[1]: Starting prepare-helm.service... Mar 17 18:35:31.797085 systemd[1]: Starting ssh-key-proc-cmdline.service... Mar 17 18:35:31.799446 systemd[1]: Starting sshd-keygen.service... Mar 17 18:35:31.803367 systemd[1]: Starting systemd-logind.service... Mar 17 18:35:31.804425 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:35:31.804485 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Mar 17 18:35:31.805029 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 17 18:35:31.807634 systemd[1]: Starting update-engine.service... Mar 17 18:35:31.811308 systemd[1]: Starting update-ssh-keys-after-ignition.service... Mar 17 18:35:31.815003 extend-filesystems[1177]: Found loop1 Mar 17 18:35:31.815003 extend-filesystems[1177]: Found sr0 Mar 17 18:35:31.815003 extend-filesystems[1177]: Found vda Mar 17 18:35:31.815003 extend-filesystems[1177]: Found vda1 Mar 17 18:35:31.815003 extend-filesystems[1177]: Found vda2 Mar 17 18:35:31.815003 extend-filesystems[1177]: Found vda3 Mar 17 18:35:31.815003 extend-filesystems[1177]: Found usr Mar 17 18:35:31.815003 extend-filesystems[1177]: Found vda4 Mar 17 18:35:31.815003 extend-filesystems[1177]: Found vda6 Mar 17 18:35:31.815003 extend-filesystems[1177]: Found vda7 Mar 17 18:35:31.815003 extend-filesystems[1177]: Found vda9 Mar 17 18:35:31.815003 extend-filesystems[1177]: Checking size of /dev/vda9 Mar 17 18:35:31.896219 jq[1195]: true Mar 17 18:35:31.896314 update_engine[1192]: I0317 18:35:31.889305 1192 main.cc:92] Flatcar Update Engine starting Mar 17 18:35:31.872575 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 17 18:35:31.896880 extend-filesystems[1177]: Resized partition /dev/vda9 Mar 17 18:35:31.873665 dbus-daemon[1175]: [system] SELinux support is enabled Mar 17 18:35:31.872848 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Mar 17 18:35:31.899207 extend-filesystems[1202]: resize2fs 1.46.5 (30-Dec-2021) Mar 17 18:35:31.874645 systemd[1]: Started dbus.service. Mar 17 18:35:31.883130 systemd[1]: motdgen.service: Deactivated successfully. Mar 17 18:35:31.901511 jq[1206]: true Mar 17 18:35:31.883358 systemd[1]: Finished motdgen.service. Mar 17 18:35:31.889069 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Mar 17 18:35:31.889337 systemd[1]: Finished ssh-key-proc-cmdline.service. Mar 17 18:35:31.904504 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 17 18:35:31.911848 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 17 18:35:31.911910 systemd[1]: Reached target system-config.target. Mar 17 18:35:31.915716 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 17 18:35:31.915764 systemd[1]: Reached target user-config.target. Mar 17 18:35:31.919738 tar[1205]: linux-amd64/helm Mar 17 18:35:31.921130 systemd[1]: Started update-engine.service. Mar 17 18:35:31.928659 systemd[1]: Started locksmithd.service. Mar 17 18:35:31.931143 update_engine[1192]: I0317 18:35:31.931097 1192 update_check_scheduler.cc:74] Next update check in 9m41s Mar 17 18:35:32.003427 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 17 18:35:32.055020 extend-filesystems[1202]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 17 18:35:32.055020 extend-filesystems[1202]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 17 18:35:32.055020 extend-filesystems[1202]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 17 18:35:32.077335 extend-filesystems[1177]: Resized filesystem in /dev/vda9 Mar 17 18:35:32.058674 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 17 18:35:32.092012 env[1207]: time="2025-03-17T18:35:32.055521973Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Mar 17 18:35:32.092371 bash[1228]: Updated "/home/core/.ssh/authorized_keys" Mar 17 18:35:32.058925 systemd[1]: Finished extend-filesystems.service. 
Mar 17 18:35:32.095572 env[1207]: time="2025-03-17T18:35:32.092812075Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 17 18:35:32.095572 env[1207]: time="2025-03-17T18:35:32.093114239Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 17 18:35:32.059904 systemd-logind[1189]: Watching system buttons on /dev/input/event1 (Power Button) Mar 17 18:35:32.059936 systemd-logind[1189]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 17 18:35:32.063506 systemd-logind[1189]: New seat seat0. Mar 17 18:35:32.079497 systemd[1]: Started systemd-logind.service. Mar 17 18:35:32.093632 systemd[1]: Finished update-ssh-keys-after-ignition.service. Mar 17 18:35:32.100293 env[1207]: time="2025-03-17T18:35:32.099828029Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.179-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 17 18:35:32.100293 env[1207]: time="2025-03-17T18:35:32.099885381Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 17 18:35:32.100293 env[1207]: time="2025-03-17T18:35:32.100235371Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 18:35:32.100293 env[1207]: time="2025-03-17T18:35:32.100261442Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Mar 17 18:35:32.100293 env[1207]: time="2025-03-17T18:35:32.100280294Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Mar 17 18:35:32.100293 env[1207]: time="2025-03-17T18:35:32.100294944Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 17 18:35:32.101182 env[1207]: time="2025-03-17T18:35:32.100390723Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 17 18:35:32.101182 env[1207]: time="2025-03-17T18:35:32.100763840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 17 18:35:32.101182 env[1207]: time="2025-03-17T18:35:32.100938913Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 18:35:32.101182 env[1207]: time="2025-03-17T18:35:32.100961695Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 17 18:35:32.101182 env[1207]: time="2025-03-17T18:35:32.101023468Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Mar 17 18:35:32.101182 env[1207]: time="2025-03-17T18:35:32.101041650Z" level=info msg="metadata content store policy set" policy=shared Mar 17 18:35:32.165380 env[1207]: time="2025-03-17T18:35:32.164635180Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 17 18:35:32.165380 env[1207]: time="2025-03-17T18:35:32.164755552Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Mar 17 18:35:32.165380 env[1207]: time="2025-03-17T18:35:32.164787681Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 17 18:35:32.165380 env[1207]: time="2025-03-17T18:35:32.164870822Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 17 18:35:32.165380 env[1207]: time="2025-03-17T18:35:32.164893971Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 17 18:35:32.165380 env[1207]: time="2025-03-17T18:35:32.164935467Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 17 18:35:32.165380 env[1207]: time="2025-03-17T18:35:32.164962011Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 17 18:35:32.165380 env[1207]: time="2025-03-17T18:35:32.164983073Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 17 18:35:32.165380 env[1207]: time="2025-03-17T18:35:32.165020987Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Mar 17 18:35:32.165380 env[1207]: time="2025-03-17T18:35:32.165040688Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 17 18:35:32.165380 env[1207]: time="2025-03-17T18:35:32.165105113Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 17 18:35:32.165380 env[1207]: time="2025-03-17T18:35:32.165127957Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 17 18:35:32.165380 env[1207]: time="2025-03-17T18:35:32.165411991Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Mar 17 18:35:32.165968 env[1207]: time="2025-03-17T18:35:32.165586457Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 17 18:35:32.166165 env[1207]: time="2025-03-17T18:35:32.166124043Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 17 18:35:32.166222 env[1207]: time="2025-03-17T18:35:32.166190595Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 17 18:35:32.166222 env[1207]: time="2025-03-17T18:35:32.166210988Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 17 18:35:32.166347 env[1207]: time="2025-03-17T18:35:32.166303047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 17 18:35:32.166428 env[1207]: time="2025-03-17T18:35:32.166346272Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 17 18:35:32.166428 env[1207]: time="2025-03-17T18:35:32.166366738Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 17 18:35:32.166428 env[1207]: time="2025-03-17T18:35:32.166383242Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 17 18:35:32.166538 env[1207]: time="2025-03-17T18:35:32.166425421Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 17 18:35:32.166538 env[1207]: time="2025-03-17T18:35:32.166444409Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 17 18:35:32.166538 env[1207]: time="2025-03-17T18:35:32.166461353Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Mar 17 18:35:32.166538 env[1207]: time="2025-03-17T18:35:32.166496584Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 17 18:35:32.166538 env[1207]: time="2025-03-17T18:35:32.166517059Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 17 18:35:32.171180 env[1207]: time="2025-03-17T18:35:32.166718740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 17 18:35:32.171448 env[1207]: time="2025-03-17T18:35:32.171395162Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 17 18:35:32.171594 env[1207]: time="2025-03-17T18:35:32.171544037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 17 18:35:32.171697 env[1207]: time="2025-03-17T18:35:32.171671944Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 17 18:35:32.171835 env[1207]: time="2025-03-17T18:35:32.171799569Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Mar 17 18:35:32.171954 env[1207]: time="2025-03-17T18:35:32.171923861Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 17 18:35:32.172073 env[1207]: time="2025-03-17T18:35:32.172046885Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Mar 17 18:35:32.172208 env[1207]: time="2025-03-17T18:35:32.172183291Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 17 18:35:32.172675 env[1207]: time="2025-03-17T18:35:32.172598093Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 17 18:35:32.173815 env[1207]: time="2025-03-17T18:35:32.172852923Z" level=info msg="Connect containerd service" Mar 17 18:35:32.173815 env[1207]: time="2025-03-17T18:35:32.172919716Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 17 18:35:32.174628 env[1207]: time="2025-03-17T18:35:32.174596248Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 18:35:32.174871 env[1207]: time="2025-03-17T18:35:32.174825132Z" level=info msg="Start subscribing containerd event" Mar 17 18:35:32.175054 env[1207]: time="2025-03-17T18:35:32.175032104Z" level=info msg="Start recovering state" Mar 17 18:35:32.175378 env[1207]: time="2025-03-17T18:35:32.175357677Z" level=info msg="Start event monitor" Mar 17 18:35:32.175527 env[1207]: time="2025-03-17T18:35:32.175495006Z" level=info msg="Start snapshots syncer" Mar 17 18:35:32.175737 env[1207]: time="2025-03-17T18:35:32.175715872Z" level=info msg="Start cni network conf syncer for default" Mar 17 18:35:32.175910 env[1207]: time="2025-03-17T18:35:32.175889154Z" level=info msg="Start streaming server" Mar 17 18:35:32.176555 env[1207]: time="2025-03-17T18:35:32.176533856Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 17 18:35:32.176783 env[1207]: time="2025-03-17T18:35:32.176763232Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 17 18:35:32.177111 systemd[1]: Started containerd.service. 
Mar 17 18:35:32.182584 env[1207]: time="2025-03-17T18:35:32.182513160Z" level=info msg="containerd successfully booted in 0.136150s" Mar 17 18:35:32.205791 locksmithd[1212]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 17 18:35:32.715452 tar[1205]: linux-amd64/LICENSE Mar 17 18:35:32.715847 tar[1205]: linux-amd64/README.md Mar 17 18:35:32.722032 systemd[1]: Finished prepare-helm.service. Mar 17 18:35:32.935159 sshd_keygen[1199]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 17 18:35:32.965164 systemd[1]: Finished sshd-keygen.service. Mar 17 18:35:32.979196 systemd[1]: Starting issuegen.service... Mar 17 18:35:32.990505 systemd[1]: issuegen.service: Deactivated successfully. Mar 17 18:35:32.990690 systemd[1]: Finished issuegen.service. Mar 17 18:35:32.994023 systemd[1]: Starting systemd-user-sessions.service... Mar 17 18:35:33.002469 systemd[1]: Finished systemd-user-sessions.service. Mar 17 18:35:33.006053 systemd[1]: Started getty@tty1.service. Mar 17 18:35:33.009115 systemd[1]: Started serial-getty@ttyS0.service. Mar 17 18:35:33.010843 systemd[1]: Reached target getty.target. Mar 17 18:35:33.402910 systemd[1]: Started kubelet.service. Mar 17 18:35:33.404505 systemd[1]: Reached target multi-user.target. Mar 17 18:35:33.407206 systemd[1]: Starting systemd-update-utmp-runlevel.service... Mar 17 18:35:33.415860 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Mar 17 18:35:33.416024 systemd[1]: Finished systemd-update-utmp-runlevel.service. Mar 17 18:35:33.417560 systemd[1]: Startup finished in 705ms (kernel) + 1min 17.635s (initrd) + 10.991s (userspace) = 1min 29.332s. 
Mar 17 18:35:33.984124 kubelet[1256]: E0317 18:35:33.984022 1256 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:35:33.986597 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:35:33.986775 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:35:33.987108 systemd[1]: kubelet.service: Consumed 1.654s CPU time. Mar 17 18:35:41.141573 systemd[1]: Created slice system-sshd.slice. Mar 17 18:35:41.143041 systemd[1]: Started sshd@0-10.0.0.22:22-10.0.0.1:56500.service. Mar 17 18:35:41.185943 sshd[1266]: Accepted publickey for core from 10.0.0.1 port 56500 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo Mar 17 18:35:41.188010 sshd[1266]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:35:41.201333 systemd-logind[1189]: New session 1 of user core. Mar 17 18:35:41.202720 systemd[1]: Created slice user-500.slice. Mar 17 18:35:41.204292 systemd[1]: Starting user-runtime-dir@500.service... Mar 17 18:35:41.218473 systemd[1]: Finished user-runtime-dir@500.service. Mar 17 18:35:41.220206 systemd[1]: Starting user@500.service... Mar 17 18:35:41.224078 (systemd)[1269]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:35:41.333960 systemd[1269]: Queued start job for default target default.target. Mar 17 18:35:41.334720 systemd[1269]: Reached target paths.target. Mar 17 18:35:41.334747 systemd[1269]: Reached target sockets.target. Mar 17 18:35:41.334764 systemd[1269]: Reached target timers.target. Mar 17 18:35:41.334779 systemd[1269]: Reached target basic.target. Mar 17 18:35:41.334840 systemd[1269]: Reached target default.target. 
Mar 17 18:35:41.334872 systemd[1269]: Startup finished in 102ms.
Mar 17 18:35:41.335046 systemd[1]: Started user@500.service.
Mar 17 18:35:41.336557 systemd[1]: Started session-1.scope.
Mar 17 18:35:41.393652 systemd[1]: Started sshd@1-10.0.0.22:22-10.0.0.1:56516.service.
Mar 17 18:35:41.430713 sshd[1278]: Accepted publickey for core from 10.0.0.1 port 56516 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo
Mar 17 18:35:41.432318 sshd[1278]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:35:41.437603 systemd-logind[1189]: New session 2 of user core.
Mar 17 18:35:41.438846 systemd[1]: Started session-2.scope.
Mar 17 18:35:41.500151 sshd[1278]: pam_unix(sshd:session): session closed for user core
Mar 17 18:35:41.504111 systemd[1]: sshd@1-10.0.0.22:22-10.0.0.1:56516.service: Deactivated successfully.
Mar 17 18:35:41.504933 systemd[1]: session-2.scope: Deactivated successfully.
Mar 17 18:35:41.505690 systemd-logind[1189]: Session 2 logged out. Waiting for processes to exit.
Mar 17 18:35:41.507252 systemd[1]: Started sshd@2-10.0.0.22:22-10.0.0.1:56518.service.
Mar 17 18:35:41.508586 systemd-logind[1189]: Removed session 2.
Mar 17 18:35:41.542935 sshd[1284]: Accepted publickey for core from 10.0.0.1 port 56518 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo
Mar 17 18:35:41.544470 sshd[1284]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:35:41.549914 systemd-logind[1189]: New session 3 of user core.
Mar 17 18:35:41.550988 systemd[1]: Started session-3.scope.
Mar 17 18:35:41.607082 sshd[1284]: pam_unix(sshd:session): session closed for user core
Mar 17 18:35:41.611083 systemd[1]: sshd@2-10.0.0.22:22-10.0.0.1:56518.service: Deactivated successfully.
Mar 17 18:35:41.611834 systemd[1]: session-3.scope: Deactivated successfully.
Mar 17 18:35:41.612447 systemd-logind[1189]: Session 3 logged out. Waiting for processes to exit.
Mar 17 18:35:41.614160 systemd[1]: Started sshd@3-10.0.0.22:22-10.0.0.1:56530.service.
Mar 17 18:35:41.615224 systemd-logind[1189]: Removed session 3.
Mar 17 18:35:41.647527 sshd[1290]: Accepted publickey for core from 10.0.0.1 port 56530 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo
Mar 17 18:35:41.648726 sshd[1290]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:35:41.653223 systemd-logind[1189]: New session 4 of user core.
Mar 17 18:35:41.654609 systemd[1]: Started session-4.scope.
Mar 17 18:35:41.719195 sshd[1290]: pam_unix(sshd:session): session closed for user core
Mar 17 18:35:41.723264 systemd[1]: sshd@3-10.0.0.22:22-10.0.0.1:56530.service: Deactivated successfully.
Mar 17 18:35:41.724010 systemd[1]: session-4.scope: Deactivated successfully.
Mar 17 18:35:41.724723 systemd-logind[1189]: Session 4 logged out. Waiting for processes to exit.
Mar 17 18:35:41.726084 systemd[1]: Started sshd@4-10.0.0.22:22-10.0.0.1:56546.service.
Mar 17 18:35:41.727649 systemd-logind[1189]: Removed session 4.
Mar 17 18:35:41.762835 sshd[1296]: Accepted publickey for core from 10.0.0.1 port 56546 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo
Mar 17 18:35:41.764509 sshd[1296]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:35:41.769473 systemd-logind[1189]: New session 5 of user core.
Mar 17 18:35:41.770600 systemd[1]: Started session-5.scope.
Mar 17 18:35:41.832695 sudo[1300]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 17 18:35:41.832973 sudo[1300]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Mar 17 18:35:41.872692 systemd[1]: Starting docker.service...
Mar 17 18:35:42.162333 env[1311]: time="2025-03-17T18:35:42.162251227Z" level=info msg="Starting up"
Mar 17 18:35:42.164296 env[1311]: time="2025-03-17T18:35:42.164250141Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Mar 17 18:35:42.164296 env[1311]: time="2025-03-17T18:35:42.164282385Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Mar 17 18:35:42.164464 env[1311]: time="2025-03-17T18:35:42.164308718Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Mar 17 18:35:42.164464 env[1311]: time="2025-03-17T18:35:42.164325094Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Mar 17 18:35:42.166708 env[1311]: time="2025-03-17T18:35:42.166671622Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Mar 17 18:35:42.166708 env[1311]: time="2025-03-17T18:35:42.166699112Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Mar 17 18:35:42.166883 env[1311]: time="2025-03-17T18:35:42.166719371Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Mar 17 18:35:42.166883 env[1311]: time="2025-03-17T18:35:42.166731995Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Mar 17 18:35:42.670628 env[1311]: time="2025-03-17T18:35:42.670556401Z" level=info msg="Loading containers: start."
Mar 17 18:35:42.810427 kernel: Initializing XFRM netlink socket
Mar 17 18:35:42.846938 env[1311]: time="2025-03-17T18:35:42.846860456Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Mar 17 18:35:42.906078 systemd-networkd[1021]: docker0: Link UP
Mar 17 18:35:42.930245 env[1311]: time="2025-03-17T18:35:42.930086485Z" level=info msg="Loading containers: done."
Mar 17 18:35:42.945086 env[1311]: time="2025-03-17T18:35:42.945004281Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 17 18:35:42.945357 env[1311]: time="2025-03-17T18:35:42.945259582Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Mar 17 18:35:42.945453 env[1311]: time="2025-03-17T18:35:42.945384131Z" level=info msg="Daemon has completed initialization"
Mar 17 18:35:42.971612 systemd[1]: Started docker.service.
Mar 17 18:35:42.982000 env[1311]: time="2025-03-17T18:35:42.981898023Z" level=info msg="API listen on /run/docker.sock"
Mar 17 18:35:43.836517 env[1207]: time="2025-03-17T18:35:43.836451532Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.7\""
Mar 17 18:35:44.017933 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 17 18:35:44.018169 systemd[1]: Stopped kubelet.service.
Mar 17 18:35:44.018213 systemd[1]: kubelet.service: Consumed 1.654s CPU time.
Mar 17 18:35:44.019595 systemd[1]: Starting kubelet.service...
Mar 17 18:35:44.123457 systemd[1]: Started kubelet.service.
Mar 17 18:35:44.427015 kubelet[1445]: E0317 18:35:44.426873 1445 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 18:35:44.429949 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 18:35:44.430066 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 18:35:44.748156 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1611816744.mount: Deactivated successfully.
Mar 17 18:35:46.467670 env[1207]: time="2025-03-17T18:35:46.467593582Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:35:46.469663 env[1207]: time="2025-03-17T18:35:46.469613492Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f084bc047a8cf7c8484d47c51e70e646dde3977d916f282feb99207b7b9241af,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:35:46.471473 env[1207]: time="2025-03-17T18:35:46.471417612Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:35:46.473047 env[1207]: time="2025-03-17T18:35:46.473022136Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:22c19cc70fe5806d0a2cb28a6b6b33fd34e6f9e50616bdf6d53649bcfafbc277,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:35:46.473779 env[1207]: time="2025-03-17T18:35:46.473737631Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.7\" returns image reference \"sha256:f084bc047a8cf7c8484d47c51e70e646dde3977d916f282feb99207b7b9241af\""
Mar 17 18:35:46.475256 env[1207]: time="2025-03-17T18:35:46.475231350Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.7\""
Mar 17 18:35:48.452104 env[1207]: time="2025-03-17T18:35:48.452034083Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:35:48.454123 env[1207]: time="2025-03-17T18:35:48.454006160Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:652dcad615a9a0c252c253860d5b5b7bfebd3efe159dc033a8555bc15a6d1985,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:35:48.455726 env[1207]: time="2025-03-17T18:35:48.455686236Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:35:48.457435 env[1207]: time="2025-03-17T18:35:48.457383044Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:6abe7a0accecf29db6ebab18a10f844678ffed693d79e2e51a18a6f2b4530cbb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:35:48.458220 env[1207]: time="2025-03-17T18:35:48.458179124Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.7\" returns image reference \"sha256:652dcad615a9a0c252c253860d5b5b7bfebd3efe159dc033a8555bc15a6d1985\""
Mar 17 18:35:48.458841 env[1207]: time="2025-03-17T18:35:48.458775976Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.7\""
Mar 17 18:35:50.815606 env[1207]: time="2025-03-17T18:35:50.815531617Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:35:50.817279 env[1207]: time="2025-03-17T18:35:50.817229247Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7f1f6a63d8aa14cf61d0045e912ad312b4ade24637cecccc933b163582eae68c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:35:50.819201 env[1207]: time="2025-03-17T18:35:50.819167653Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:35:50.821080 env[1207]: time="2025-03-17T18:35:50.821039147Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:fb80249bcb77ee72b1c9fa5b70bc28a83ed107c9ca71957841ad91db379963bf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:35:50.821674 env[1207]: time="2025-03-17T18:35:50.821644581Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.7\" returns image reference \"sha256:7f1f6a63d8aa14cf61d0045e912ad312b4ade24637cecccc933b163582eae68c\""
Mar 17 18:35:50.822166 env[1207]: time="2025-03-17T18:35:50.822133235Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\""
Mar 17 18:35:52.375910 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2698600644.mount: Deactivated successfully.
Mar 17 18:35:54.468960 env[1207]: time="2025-03-17T18:35:54.468889581Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:35:54.497914 env[1207]: time="2025-03-17T18:35:54.497849916Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:dcfc039c372ea285997a302d60e58a75b80905b4c4dba969993b9b22e8ac66d1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:35:54.517834 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 17 18:35:54.518004 systemd[1]: Stopped kubelet.service.
Mar 17 18:35:54.518436 env[1207]: time="2025-03-17T18:35:54.518362947Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:35:54.519240 systemd[1]: Starting kubelet.service...
Mar 17 18:35:54.523508 env[1207]: time="2025-03-17T18:35:54.523468907Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:35:54.523872 env[1207]: time="2025-03-17T18:35:54.523833554Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\" returns image reference \"sha256:dcfc039c372ea285997a302d60e58a75b80905b4c4dba969993b9b22e8ac66d1\""
Mar 17 18:35:54.524454 env[1207]: time="2025-03-17T18:35:54.524419433Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Mar 17 18:35:54.596702 systemd[1]: Started kubelet.service.
Mar 17 18:35:54.773350 kubelet[1458]: E0317 18:35:54.773292 1458 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 18:35:54.775092 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 18:35:54.775219 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 18:35:55.174613 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount579378950.mount: Deactivated successfully.
Mar 17 18:35:56.311895 env[1207]: time="2025-03-17T18:35:56.311842378Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:35:56.313797 env[1207]: time="2025-03-17T18:35:56.313764541Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:35:56.315535 env[1207]: time="2025-03-17T18:35:56.315510396Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:35:56.317514 env[1207]: time="2025-03-17T18:35:56.317466045Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:35:56.318115 env[1207]: time="2025-03-17T18:35:56.318083338Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Mar 17 18:35:56.318628 env[1207]: time="2025-03-17T18:35:56.318609592Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Mar 17 18:35:57.002585 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount925257647.mount: Deactivated successfully.
Mar 17 18:35:57.140637 env[1207]: time="2025-03-17T18:35:57.140588200Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:35:57.184477 env[1207]: time="2025-03-17T18:35:57.184435507Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:35:57.190421 env[1207]: time="2025-03-17T18:35:57.190379110Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:35:57.191874 env[1207]: time="2025-03-17T18:35:57.191816391Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:35:57.192226 env[1207]: time="2025-03-17T18:35:57.192192658Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Mar 17 18:35:57.192863 env[1207]: time="2025-03-17T18:35:57.192814534Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Mar 17 18:35:57.841101 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount461789603.mount: Deactivated successfully.
Mar 17 18:36:01.321030 env[1207]: time="2025-03-17T18:36:01.320978460Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:36:01.323075 env[1207]: time="2025-03-17T18:36:01.323036179Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:36:01.324811 env[1207]: time="2025-03-17T18:36:01.324778035Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:36:01.326841 env[1207]: time="2025-03-17T18:36:01.326809110Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:36:01.327688 env[1207]: time="2025-03-17T18:36:01.327637834Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Mar 17 18:36:03.376526 systemd[1]: Stopped kubelet.service.
Mar 17 18:36:03.378361 systemd[1]: Starting kubelet.service...
Mar 17 18:36:03.401664 systemd[1]: Reloading.
Mar 17 18:36:03.464622 /usr/lib/systemd/system-generators/torcx-generator[1512]: time="2025-03-17T18:36:03Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
Mar 17 18:36:03.465155 /usr/lib/systemd/system-generators/torcx-generator[1512]: time="2025-03-17T18:36:03Z" level=info msg="torcx already run"
Mar 17 18:36:03.746809 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Mar 17 18:36:03.746826 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Mar 17 18:36:03.763330 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 18:36:03.835979 systemd[1]: Started kubelet.service.
Mar 17 18:36:03.837420 systemd[1]: Stopping kubelet.service...
Mar 17 18:36:03.837663 systemd[1]: kubelet.service: Deactivated successfully.
Mar 17 18:36:03.837846 systemd[1]: Stopped kubelet.service.
Mar 17 18:36:03.839063 systemd[1]: Starting kubelet.service...
Mar 17 18:36:03.911601 systemd[1]: Started kubelet.service.
Mar 17 18:36:03.945840 kubelet[1559]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 18:36:03.945840 kubelet[1559]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 17 18:36:03.945840 kubelet[1559]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 18:36:03.946140 kubelet[1559]: I0317 18:36:03.945877 1559 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 17 18:36:04.429278 kubelet[1559]: I0317 18:36:04.429239 1559 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Mar 17 18:36:04.429278 kubelet[1559]: I0317 18:36:04.429268 1559 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 17 18:36:04.429676 kubelet[1559]: I0317 18:36:04.429657 1559 server.go:929] "Client rotation is on, will bootstrap in background"
Mar 17 18:36:04.449567 kubelet[1559]: I0317 18:36:04.449526 1559 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 17 18:36:04.450049 kubelet[1559]: E0317 18:36:04.450015 1559 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.22:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError"
Mar 17 18:36:04.455318 kubelet[1559]: E0317 18:36:04.455289 1559 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 17 18:36:04.455318 kubelet[1559]: I0317 18:36:04.455315 1559 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 17 18:36:04.459746 kubelet[1559]: I0317 18:36:04.459723 1559 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 17 18:36:04.460788 kubelet[1559]: I0317 18:36:04.460757 1559 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Mar 17 18:36:04.460924 kubelet[1559]: I0317 18:36:04.460893 1559 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 17 18:36:04.461073 kubelet[1559]: I0317 18:36:04.460920 1559 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 17 18:36:04.461073 kubelet[1559]: I0317 18:36:04.461069 1559 topology_manager.go:138] "Creating topology manager with none policy"
Mar 17 18:36:04.461183 kubelet[1559]: I0317 18:36:04.461077 1559 container_manager_linux.go:300] "Creating device plugin manager"
Mar 17 18:36:04.461209 kubelet[1559]: I0317 18:36:04.461184 1559 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 18:36:04.463645 kubelet[1559]: I0317 18:36:04.463620 1559 kubelet.go:408] "Attempting to sync node with API server"
Mar 17 18:36:04.463645 kubelet[1559]: I0317 18:36:04.463640 1559 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 17 18:36:04.463720 kubelet[1559]: I0317 18:36:04.463671 1559 kubelet.go:314] "Adding apiserver pod source"
Mar 17 18:36:04.463720 kubelet[1559]: I0317 18:36:04.463686 1559 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 17 18:36:04.480983 kubelet[1559]: W0317 18:36:04.480936 1559 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused
Mar 17 18:36:04.481046 kubelet[1559]: E0317 18:36:04.480993 1559 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError"
Mar 17 18:36:04.482996 kubelet[1559]: W0317 18:36:04.482965 1559 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.22:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused
Mar 17 18:36:04.483047 kubelet[1559]: E0317 18:36:04.483001 1559 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.22:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError"
Mar 17 18:36:04.487049 kubelet[1559]: I0317 18:36:04.487028 1559 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Mar 17 18:36:04.490930 kubelet[1559]: I0317 18:36:04.490905 1559 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 17 18:36:04.491409 kubelet[1559]: W0317 18:36:04.491366 1559 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 17 18:36:04.491926 kubelet[1559]: I0317 18:36:04.491905 1559 server.go:1269] "Started kubelet"
Mar 17 18:36:04.494849 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Mar 17 18:36:04.494907 kubelet[1559]: I0317 18:36:04.494870 1559 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 17 18:36:04.494942 kubelet[1559]: I0317 18:36:04.494931 1559 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 17 18:36:04.495140 kubelet[1559]: I0317 18:36:04.495090 1559 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 17 18:36:04.495710 kubelet[1559]: I0317 18:36:04.495687 1559 volume_manager.go:289] "Starting Kubelet Volume Manager"
Mar 17 18:36:04.495862 kubelet[1559]: E0317 18:36:04.495773 1559 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 17 18:36:04.495958 kubelet[1559]: I0317 18:36:04.495933 1559 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 17 18:36:04.495988 kubelet[1559]: I0317 18:36:04.495419 1559 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 17 18:36:04.496027 kubelet[1559]: I0317 18:36:04.496016 1559 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 17 18:36:04.496091 kubelet[1559]: I0317 18:36:04.496080 1559 reconciler.go:26] "Reconciler: start to sync state"
Mar 17 18:36:04.496734 kubelet[1559]: W0317 18:36:04.496648 1559 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused
Mar 17 18:36:04.496734 kubelet[1559]: E0317 18:36:04.496705 1559 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError"
Mar 17 18:36:04.496821 kubelet[1559]: E0317 18:36:04.496781 1559 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.22:6443: connect: connection refused" interval="200ms"
Mar 17 18:36:04.496986 kubelet[1559]: I0317 18:36:04.496962 1559 factory.go:221] Registration of the systemd container factory successfully
Mar 17 18:36:04.497050 kubelet[1559]: I0317 18:36:04.497035 1559 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 17 18:36:04.500645 kubelet[1559]: I0317 18:36:04.500631 1559 server.go:460] "Adding debug handlers to kubelet server"
Mar 17 18:36:04.500865 kubelet[1559]: E0317 18:36:04.499532 1559 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.22:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.22:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.182daaecb53c2a93 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-17 18:36:04.491872915 +0000 UTC m=+0.576760998,LastTimestamp:2025-03-17 18:36:04.491872915 +0000 UTC m=+0.576760998,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 17 18:36:04.500865 kubelet[1559]: I0317 18:36:04.500690 1559 factory.go:221] Registration of the containerd container factory successfully
Mar 17 18:36:04.501630 kubelet[1559]: E0317 18:36:04.501605 1559 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 17 18:36:04.510187 kubelet[1559]: I0317 18:36:04.510159 1559 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 17 18:36:04.511078 kubelet[1559]: I0317 18:36:04.511048 1559 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 17 18:36:04.511078 kubelet[1559]: I0317 18:36:04.511079 1559 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 17 18:36:04.511153 kubelet[1559]: I0317 18:36:04.511097 1559 kubelet.go:2321] "Starting kubelet main sync loop"
Mar 17 18:36:04.511153 kubelet[1559]: E0317 18:36:04.511130 1559 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 17 18:36:04.512936 kubelet[1559]: W0317 18:36:04.512895 1559 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused
Mar 17 18:36:04.512992 kubelet[1559]: E0317 18:36:04.512942 1559 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError"
Mar 17 18:36:04.513047 kubelet[1559]: I0317 18:36:04.513035 1559 cpu_manager.go:214] "Starting CPU manager" policy="none"
Mar 17 18:36:04.513047 kubelet[1559]: I0317 18:36:04.513045 1559 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Mar 17 18:36:04.513087 kubelet[1559]: I0317 18:36:04.513058 1559 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 18:36:04.595882 kubelet[1559]: E0317 18:36:04.595858 1559 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 17 18:36:04.612225 kubelet[1559]: E0317 18:36:04.612182 1559 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 17 18:36:04.696613 kubelet[1559]: E0317 18:36:04.696513 1559 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 17 18:36:04.697961 kubelet[1559]: E0317 18:36:04.697912 1559 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.22:6443: connect: connection refused" interval="400ms"
Mar 17 18:36:04.797190 kubelet[1559]: E0317 18:36:04.797117 1559 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 17 18:36:04.812414 kubelet[1559]: E0317 18:36:04.812371 1559 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 17 18:36:04.897786 kubelet[1559]: E0317 18:36:04.897724 1559 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 17 18:36:04.955689 kubelet[1559]: I0317 18:36:04.955579 1559 policy_none.go:49] "None policy: Start"
Mar 17 18:36:04.956226 kubelet[1559]: I0317 18:36:04.956196 1559 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 17 18:36:04.956226 kubelet[1559]: I0317 18:36:04.956214 1559 state_mem.go:35] "Initializing new in-memory state store"
Mar 17 18:36:04.997953 kubelet[1559]: E0317 18:36:04.997920 1559 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 17 18:36:05.015375 systemd[1]: Created slice kubepods.slice.
Mar 17 18:36:05.018908 systemd[1]: Created slice kubepods-burstable.slice.
Mar 17 18:36:05.021343 systemd[1]: Created slice kubepods-besteffort.slice.
Mar 17 18:36:05.027043 kubelet[1559]: I0317 18:36:05.027007 1559 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 17 18:36:05.027144 kubelet[1559]: I0317 18:36:05.027134 1559 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 17 18:36:05.027173 kubelet[1559]: I0317 18:36:05.027145 1559 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 17 18:36:05.027509 kubelet[1559]: I0317 18:36:05.027492 1559 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 17 18:36:05.029021 kubelet[1559]: E0317 18:36:05.028970 1559 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Mar 17 18:36:05.098947 kubelet[1559]: E0317 18:36:05.098904 1559 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.22:6443: connect: connection refused" interval="800ms"
Mar 17 18:36:05.128914 kubelet[1559]: I0317 18:36:05.128880 1559 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Mar 17 18:36:05.129131 kubelet[1559]: E0317 18:36:05.129104 1559 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.22:6443/api/v1/nodes\": dial tcp 10.0.0.22:6443: connect: connection refused" node="localhost"
Mar 17 18:36:05.218617 systemd[1]: Created slice kubepods-burstable-pod5d70effa396ed5873d5a86d7ffce92e9.slice.
Mar 17 18:36:05.226267 systemd[1]: Created slice kubepods-burstable-pod60762308083b5ef6c837b1be48ec53d6.slice.
Mar 17 18:36:05.228985 systemd[1]: Created slice kubepods-burstable-pod6f32907a07e55aea05abdc5cd284a8d5.slice.
Mar 17 18:36:05.294887 kubelet[1559]: W0317 18:36:05.294796 1559 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused
Mar 17 18:36:05.295045 kubelet[1559]: E0317 18:36:05.294895 1559 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError"
Mar 17 18:36:05.301333 kubelet[1559]: I0317 18:36:05.301297 1559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5d70effa396ed5873d5a86d7ffce92e9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5d70effa396ed5873d5a86d7ffce92e9\") " pod="kube-system/kube-apiserver-localhost"
Mar 17 18:36:05.301333 kubelet[1559]: I0317 18:36:05.301330 1559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 18:36:05.301530 kubelet[1559]: I0317 18:36:05.301356 1559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 18:36:05.301530 kubelet[1559]: I0317 18:36:05.301378 1559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5d70effa396ed5873d5a86d7ffce92e9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5d70effa396ed5873d5a86d7ffce92e9\") " pod="kube-system/kube-apiserver-localhost"
Mar 17 18:36:05.301530 kubelet[1559]: I0317 18:36:05.301416 1559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5d70effa396ed5873d5a86d7ffce92e9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5d70effa396ed5873d5a86d7ffce92e9\") " pod="kube-system/kube-apiserver-localhost"
Mar 17 18:36:05.301530 kubelet[1559]: I0317 18:36:05.301433 1559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 18:36:05.301530 kubelet[1559]: I0317 18:36:05.301452 1559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 18:36:05.301654 kubelet[1559]: I0317 18:36:05.301468 1559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 18:36:05.301654 kubelet[1559]: I0317 18:36:05.301487 1559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6f32907a07e55aea05abdc5cd284a8d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6f32907a07e55aea05abdc5cd284a8d5\") " pod="kube-system/kube-scheduler-localhost"
Mar 17 18:36:05.330421 kubelet[1559]: I0317 18:36:05.330378 1559 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Mar 17 18:36:05.330751 kubelet[1559]: E0317 18:36:05.330730 1559 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.22:6443/api/v1/nodes\": dial tcp 10.0.0.22:6443: connect: connection refused" node="localhost"
Mar 17 18:36:05.336063 kubelet[1559]: W0317 18:36:05.336024 1559 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.22:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused
Mar 17 18:36:05.336115 kubelet[1559]: E0317 18:36:05.336069 1559 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.22:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError"
Mar 17 18:36:05.524895 kubelet[1559]: E0317 18:36:05.524861 1559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:36:05.525519 env[1207]: time="2025-03-17T18:36:05.525476835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5d70effa396ed5873d5a86d7ffce92e9,Namespace:kube-system,Attempt:0,}"
Mar 17 18:36:05.528536 kubelet[1559]: E0317 18:36:05.528504 1559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:36:05.528842 env[1207]: time="2025-03-17T18:36:05.528809249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:60762308083b5ef6c837b1be48ec53d6,Namespace:kube-system,Attempt:0,}"
Mar 17 18:36:05.531000 kubelet[1559]: E0317 18:36:05.530979 1559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:36:05.531234 env[1207]: time="2025-03-17T18:36:05.531208271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6f32907a07e55aea05abdc5cd284a8d5,Namespace:kube-system,Attempt:0,}"
Mar 17 18:36:05.668338 kubelet[1559]: W0317 18:36:05.668290 1559 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused
Mar 17 18:36:05.668523 kubelet[1559]: E0317 18:36:05.668346 1559 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError"
Mar 17 18:36:05.732155 kubelet[1559]: I0317 18:36:05.732107 1559 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Mar 17 18:36:05.732453 kubelet[1559]: E0317 18:36:05.732430 1559 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.22:6443/api/v1/nodes\": dial tcp 10.0.0.22:6443: connect: connection refused" node="localhost"
Mar 17 18:36:05.899647 kubelet[1559]: E0317 18:36:05.899543 1559 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.22:6443: connect: connection refused" interval="1.6s"
Mar 17 18:36:05.965430 kubelet[1559]: W0317 18:36:05.965382 1559 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused
Mar 17 18:36:05.965738 kubelet[1559]: E0317 18:36:05.965436 1559 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError"
Mar 17 18:36:06.506562 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2462402206.mount: Deactivated successfully.
Mar 17 18:36:06.512537 env[1207]: time="2025-03-17T18:36:06.512469672Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:36:06.513466 env[1207]: time="2025-03-17T18:36:06.513439998Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:36:06.515997 env[1207]: time="2025-03-17T18:36:06.515946271Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:36:06.517983 env[1207]: time="2025-03-17T18:36:06.517939218Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:36:06.519407 env[1207]: time="2025-03-17T18:36:06.519342738Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:36:06.520617 env[1207]: time="2025-03-17T18:36:06.520579022Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:36:06.521859 env[1207]: time="2025-03-17T18:36:06.521816590Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:36:06.523529 env[1207]: time="2025-03-17T18:36:06.523502685Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:36:06.525710 env[1207]: time="2025-03-17T18:36:06.525671110Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:36:06.527622 env[1207]: time="2025-03-17T18:36:06.527581257Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:36:06.528244 env[1207]: time="2025-03-17T18:36:06.528210766Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:36:06.528870 env[1207]: time="2025-03-17T18:36:06.528832684Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:36:06.533852 kubelet[1559]: I0317 18:36:06.533820 1559 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Mar 17 18:36:06.534173 kubelet[1559]: E0317 18:36:06.534148 1559 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.22:6443/api/v1/nodes\": dial tcp 10.0.0.22:6443: connect: connection refused" node="localhost"
Mar 17 18:36:06.546943 env[1207]: time="2025-03-17T18:36:06.546845299Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:36:06.547063 env[1207]: time="2025-03-17T18:36:06.546950992Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:36:06.547063 env[1207]: time="2025-03-17T18:36:06.547002114Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:36:06.547337 env[1207]: time="2025-03-17T18:36:06.547259650Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2c66b20c317e98aa9582e91a8cbee4d336f048da0afc035c4a3667bc81b4689c pid=1603 runtime=io.containerd.runc.v2
Mar 17 18:36:06.551499 env[1207]: time="2025-03-17T18:36:06.551295162Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:36:06.551499 env[1207]: time="2025-03-17T18:36:06.551329778Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:36:06.551499 env[1207]: time="2025-03-17T18:36:06.551339074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:36:06.551499 env[1207]: time="2025-03-17T18:36:06.551465605Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c78f943c50bfa813d4248c566b1e5e1f1f6c21165292298c0ac6a6334b9d3d4e pid=1625 runtime=io.containerd.runc.v2
Mar 17 18:36:06.553627 env[1207]: time="2025-03-17T18:36:06.553486781Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:36:06.553627 env[1207]: time="2025-03-17T18:36:06.553514398Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:36:06.553627 env[1207]: time="2025-03-17T18:36:06.553526963Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:36:06.554314 env[1207]: time="2025-03-17T18:36:06.553800041Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a360d4eb8ef298d736e281d9e12933033f78ddbeaf4c4765155c6ae9a9b488c7 pid=1626 runtime=io.containerd.runc.v2
Mar 17 18:36:06.565541 systemd[1]: Started cri-containerd-c78f943c50bfa813d4248c566b1e5e1f1f6c21165292298c0ac6a6334b9d3d4e.scope.
Mar 17 18:36:06.572191 systemd[1]: Started cri-containerd-2c66b20c317e98aa9582e91a8cbee4d336f048da0afc035c4a3667bc81b4689c.scope.
Mar 17 18:36:06.574484 systemd[1]: Started cri-containerd-a360d4eb8ef298d736e281d9e12933033f78ddbeaf4c4765155c6ae9a9b488c7.scope.
Mar 17 18:36:06.611571 env[1207]: time="2025-03-17T18:36:06.611516717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5d70effa396ed5873d5a86d7ffce92e9,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c66b20c317e98aa9582e91a8cbee4d336f048da0afc035c4a3667bc81b4689c\""
Mar 17 18:36:06.612540 kubelet[1559]: E0317 18:36:06.612504 1559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:36:06.614625 env[1207]: time="2025-03-17T18:36:06.614590386Z" level=info msg="CreateContainer within sandbox \"2c66b20c317e98aa9582e91a8cbee4d336f048da0afc035c4a3667bc81b4689c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 17 18:36:06.615844 env[1207]: time="2025-03-17T18:36:06.615812943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6f32907a07e55aea05abdc5cd284a8d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"c78f943c50bfa813d4248c566b1e5e1f1f6c21165292298c0ac6a6334b9d3d4e\""
Mar 17 18:36:06.616907 kubelet[1559]: E0317 18:36:06.616881 1559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:36:06.617047 env[1207]: time="2025-03-17T18:36:06.617024649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:60762308083b5ef6c837b1be48ec53d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"a360d4eb8ef298d736e281d9e12933033f78ddbeaf4c4765155c6ae9a9b488c7\""
Mar 17 18:36:06.617629 kubelet[1559]: E0317 18:36:06.617611 1559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:36:06.619229 env[1207]: time="2025-03-17T18:36:06.619201196Z" level=info msg="CreateContainer within sandbox \"c78f943c50bfa813d4248c566b1e5e1f1f6c21165292298c0ac6a6334b9d3d4e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 17 18:36:06.619897 env[1207]: time="2025-03-17T18:36:06.619864108Z" level=info msg="CreateContainer within sandbox \"a360d4eb8ef298d736e281d9e12933033f78ddbeaf4c4765155c6ae9a9b488c7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 17 18:36:06.637687 env[1207]: time="2025-03-17T18:36:06.637633808Z" level=info msg="CreateContainer within sandbox \"2c66b20c317e98aa9582e91a8cbee4d336f048da0afc035c4a3667bc81b4689c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"92459dfd3e40bb966070e1d89eb9df39b5e7311c765deaf63d8e7ee75e3403b4\""
Mar 17 18:36:06.639138 env[1207]: time="2025-03-17T18:36:06.639115013Z" level=info msg="StartContainer for \"92459dfd3e40bb966070e1d89eb9df39b5e7311c765deaf63d8e7ee75e3403b4\""
Mar 17 18:36:06.643332 env[1207]: time="2025-03-17T18:36:06.643305326Z" level=info msg="CreateContainer within sandbox \"c78f943c50bfa813d4248c566b1e5e1f1f6c21165292298c0ac6a6334b9d3d4e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9f538f0fadd1ec377ef3ec8f0885514ca8f80c76ad3ec7f5f2a76fce3408bc6a\""
Mar 17 18:36:06.643834 env[1207]: time="2025-03-17T18:36:06.643702991Z" level=info msg="StartContainer for \"9f538f0fadd1ec377ef3ec8f0885514ca8f80c76ad3ec7f5f2a76fce3408bc6a\""
Mar 17 18:36:06.647085 env[1207]: time="2025-03-17T18:36:06.647057449Z" level=info msg="CreateContainer within sandbox \"a360d4eb8ef298d736e281d9e12933033f78ddbeaf4c4765155c6ae9a9b488c7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"16b6bcdc16a1a33a323359ec018b186a50e8ef3aca0fd2f19ead557e0d4a82f8\""
Mar 17 18:36:06.648431 kubelet[1559]: E0317 18:36:06.648401 1559 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.22:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError"
Mar 17 18:36:06.648713 env[1207]: time="2025-03-17T18:36:06.648632857Z" level=info msg="StartContainer for \"16b6bcdc16a1a33a323359ec018b186a50e8ef3aca0fd2f19ead557e0d4a82f8\""
Mar 17 18:36:06.654815 systemd[1]: Started cri-containerd-92459dfd3e40bb966070e1d89eb9df39b5e7311c765deaf63d8e7ee75e3403b4.scope.
Mar 17 18:36:06.660417 systemd[1]: Started cri-containerd-9f538f0fadd1ec377ef3ec8f0885514ca8f80c76ad3ec7f5f2a76fce3408bc6a.scope.
Mar 17 18:36:06.668904 systemd[1]: Started cri-containerd-16b6bcdc16a1a33a323359ec018b186a50e8ef3aca0fd2f19ead557e0d4a82f8.scope.
Mar 17 18:36:06.712433 env[1207]: time="2025-03-17T18:36:06.709676746Z" level=info msg="StartContainer for \"9f538f0fadd1ec377ef3ec8f0885514ca8f80c76ad3ec7f5f2a76fce3408bc6a\" returns successfully"
Mar 17 18:36:06.712433 env[1207]: time="2025-03-17T18:36:06.709990658Z" level=info msg="StartContainer for \"92459dfd3e40bb966070e1d89eb9df39b5e7311c765deaf63d8e7ee75e3403b4\" returns successfully"
Mar 17 18:36:06.715345 env[1207]: time="2025-03-17T18:36:06.715284697Z" level=info msg="StartContainer for \"16b6bcdc16a1a33a323359ec018b186a50e8ef3aca0fd2f19ead557e0d4a82f8\" returns successfully"
Mar 17 18:36:07.521199 kubelet[1559]: E0317 18:36:07.521171 1559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:36:07.523465 kubelet[1559]: E0317 18:36:07.523305 1559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:36:07.524576 kubelet[1559]: E0317 18:36:07.524559 1559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:36:07.722880 kubelet[1559]: E0317 18:36:07.722833 1559 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Mar 17 18:36:08.055653 kubelet[1559]: E0317 18:36:08.055597 1559 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found
Mar 17 18:36:08.135470 kubelet[1559]: I0317 18:36:08.135438 1559 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Mar 17 18:36:08.174528 kubelet[1559]: I0317 18:36:08.174478 1559 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Mar 17 18:36:08.465805 kubelet[1559]: I0317 18:36:08.465679 1559 apiserver.go:52] "Watching apiserver"
Mar 17 18:36:08.496753 kubelet[1559]: I0317 18:36:08.496688 1559 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 17 18:36:08.531305 kubelet[1559]: E0317 18:36:08.531255 1559 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Mar 17 18:36:08.531749 kubelet[1559]: E0317 18:36:08.531478 1559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:36:09.845028 systemd[1]: Reloading.
Mar 17 18:36:09.913170 /usr/lib/systemd/system-generators/torcx-generator[1852]: time="2025-03-17T18:36:09Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
Mar 17 18:36:09.913195 /usr/lib/systemd/system-generators/torcx-generator[1852]: time="2025-03-17T18:36:09Z" level=info msg="torcx already run"
Mar 17 18:36:09.971488 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Mar 17 18:36:09.971503 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Mar 17 18:36:09.988445 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 18:36:10.077790 systemd[1]: Stopping kubelet.service...
Mar 17 18:36:10.100904 systemd[1]: kubelet.service: Deactivated successfully.
Mar 17 18:36:10.101126 systemd[1]: Stopped kubelet.service.
Mar 17 18:36:10.102791 systemd[1]: Starting kubelet.service...
Mar 17 18:36:10.184282 systemd[1]: Started kubelet.service.
Mar 17 18:36:10.227069 kubelet[1897]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 18:36:10.227069 kubelet[1897]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 17 18:36:10.227069 kubelet[1897]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 18:36:10.227446 kubelet[1897]: I0317 18:36:10.227117 1897 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 17 18:36:10.232776 kubelet[1897]: I0317 18:36:10.232738 1897 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Mar 17 18:36:10.232776 kubelet[1897]: I0317 18:36:10.232766 1897 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 17 18:36:10.233032 kubelet[1897]: I0317 18:36:10.233018 1897 server.go:929] "Client rotation is on, will bootstrap in background"
Mar 17 18:36:10.234192 kubelet[1897]: I0317 18:36:10.234157 1897 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Mar 17 18:36:10.235810 kubelet[1897]: I0317 18:36:10.235794 1897 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 17 18:36:10.239348 kubelet[1897]: E0317 18:36:10.239314 1897 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 17 18:36:10.242680 kubelet[1897]: I0317 18:36:10.242657 1897 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 17 18:36:10.246808 kubelet[1897]: I0317 18:36:10.246783 1897 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 17 18:36:10.246911 kubelet[1897]: I0317 18:36:10.246894 1897 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Mar 17 18:36:10.247017 kubelet[1897]: I0317 18:36:10.246991 1897 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 17 18:36:10.249183 kubelet[1897]: I0317 18:36:10.247016 1897 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 17 18:36:10.249183 kubelet[1897]: I0317 18:36:10.249169 1897 topology_manager.go:138] "Creating topology manager with none policy"
Mar 17 18:36:10.249183 kubelet[1897]: I0317 18:36:10.249180 1897 container_manager_linux.go:300] "Creating device plugin manager"
Mar 17 18:36:10.249490 kubelet[1897]: I0317 18:36:10.249210 1897 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 18:36:10.249490 kubelet[1897]: I0317 18:36:10.249305 1897 kubelet.go:408] "Attempting to sync node with API server"
Mar 17 18:36:10.249490 kubelet[1897]: I0317 18:36:10.249316 1897 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 17 18:36:10.249490 kubelet[1897]: I0317 18:36:10.249338 1897 kubelet.go:314] "Adding apiserver pod source"
Mar 17 18:36:10.249490 kubelet[1897]: I0317 18:36:10.249349 1897 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 17 18:36:10.250697 kubelet[1897]: I0317 18:36:10.250670 1897 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Mar 17 18:36:10.251229 kubelet[1897]: I0317 18:36:10.251210 1897 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 17 18:36:10.251608 kubelet[1897]: I0317 18:36:10.251588 1897 server.go:1269] "Started kubelet"
Mar 17 18:36:10.255313 kubelet[1897]: I0317 18:36:10.252560 1897 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 17 18:36:10.257509 kubelet[1897]: I0317 18:36:10.257482 1897 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 17 18:36:10.258960 kubelet[1897]: I0317 18:36:10.258947 1897 server.go:460] "Adding debug handlers to kubelet server"
Mar 17 18:36:10.262923 kubelet[1897]: I0317 18:36:10.262864 1897 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 17 18:36:10.263072 kubelet[1897]: I0317 18:36:10.263051 1897 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 17 18:36:10.263300 kubelet[1897]: I0317 18:36:10.263249 1897 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 17 18:36:10.263785 kubelet[1897]: I0317 18:36:10.263769 1897 volume_manager.go:289] "Starting Kubelet Volume Manager"
Mar 17 18:36:10.264550 kubelet[1897]: I0317 18:36:10.264536 1897 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 17 18:36:10.264735 kubelet[1897]: I0317 18:36:10.264724 1897 reconciler.go:26] "Reconciler: start to sync state"
Mar 17 18:36:10.265326 kubelet[1897]: I0317 18:36:10.265120 1897 factory.go:221] Registration of the systemd container factory successfully
Mar 17 18:36:10.266097 kubelet[1897]: E0317 18:36:10.265465 1897 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 17 18:36:10.266097 kubelet[1897]: I0317 18:36:10.265631 1897 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 17 18:36:10.267884 kubelet[1897]: I0317 18:36:10.267856 1897 factory.go:221] Registration of the containerd container factory successfully
Mar 17 18:36:10.268410 kubelet[1897]: I0317 18:36:10.268331 1897 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 17 18:36:10.269996 kubelet[1897]: I0317 18:36:10.269964 1897 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 17 18:36:10.270052 kubelet[1897]: I0317 18:36:10.270002 1897 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 17 18:36:10.270052 kubelet[1897]: I0317 18:36:10.270026 1897 kubelet.go:2321] "Starting kubelet main sync loop"
Mar 17 18:36:10.271222 kubelet[1897]: E0317 18:36:10.271196 1897 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 17 18:36:10.291789 kubelet[1897]: I0317 18:36:10.291754 1897 cpu_manager.go:214] "Starting CPU manager" policy="none"
Mar 17 18:36:10.291789 kubelet[1897]: I0317 18:36:10.291770 1897 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Mar 17 18:36:10.291789 kubelet[1897]: I0317 18:36:10.291785 1897 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 18:36:10.291957 kubelet[1897]: I0317 18:36:10.291920 1897 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 17 18:36:10.291957 kubelet[1897]: I0317 18:36:10.291928 1897 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 17 18:36:10.291957 kubelet[1897]: I0317 18:36:10.291944 1897 policy_none.go:49] "None policy: Start"
Mar 17 18:36:10.292402 kubelet[1897]: I0317 18:36:10.292373 1897 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 17 18:36:10.292402 kubelet[1897]: I0317 18:36:10.292401 1897 state_mem.go:35] "Initializing new in-memory state store"
Mar 17 18:36:10.292523 kubelet[1897]: I0317 18:36:10.292511 1897 state_mem.go:75] "Updated machine memory state"
Mar 17 18:36:10.295552 kubelet[1897]: I0317 18:36:10.295527 1897 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 17 18:36:10.295701 kubelet[1897]: I0317 18:36:10.295677 1897 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 17 18:36:10.295737 kubelet[1897]: I0317 18:36:10.295694 1897 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 17 18:36:10.296099 kubelet[1897]: I0317 18:36:10.296080 1897 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 17 18:36:10.402637 kubelet[1897]: I0317 18:36:10.401149 1897 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Mar 17 18:36:10.407955 kubelet[1897]: I0317 18:36:10.407921 1897 kubelet_node_status.go:111] "Node was previously registered" node="localhost"
Mar 17 18:36:10.408061 kubelet[1897]: I0317 18:36:10.407999 1897 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Mar 17 18:36:10.468216 kubelet[1897]: I0317 18:36:10.468136 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 18:36:10.468428 kubelet[1897]: I0317 18:36:10.468232 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6f32907a07e55aea05abdc5cd284a8d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6f32907a07e55aea05abdc5cd284a8d5\") " pod="kube-system/kube-scheduler-localhost"
Mar 17 18:36:10.468428 kubelet[1897]: I0317 18:36:10.468263 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5d70effa396ed5873d5a86d7ffce92e9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5d70effa396ed5873d5a86d7ffce92e9\") " pod="kube-system/kube-apiserver-localhost"
Mar 17 18:36:10.468428 kubelet[1897]: I0317 18:36:10.468290 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 18:36:10.468428 kubelet[1897]: I0317 18:36:10.468315 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 18:36:10.468428 kubelet[1897]: I0317 18:36:10.468337 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 18:36:10.468594 kubelet[1897]: I0317 18:36:10.468358 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 18:36:10.468594 kubelet[1897]: I0317 18:36:10.468381 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5d70effa396ed5873d5a86d7ffce92e9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5d70effa396ed5873d5a86d7ffce92e9\") " pod="kube-system/kube-apiserver-localhost"
Mar 17 18:36:10.468594 kubelet[1897]: I0317 18:36:10.468435 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5d70effa396ed5873d5a86d7ffce92e9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5d70effa396ed5873d5a86d7ffce92e9\") " pod="kube-system/kube-apiserver-localhost"
Mar 17 18:36:10.677642 kubelet[1897]: E0317 18:36:10.677486 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:36:10.677642 kubelet[1897]: E0317 18:36:10.677558 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:36:10.677642 kubelet[1897]: E0317 18:36:10.677497 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:36:10.861497 sudo[1930]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Mar 17 18:36:10.861785 sudo[1930]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Mar 17 18:36:11.250033 kubelet[1897]: I0317 18:36:11.249980 1897 apiserver.go:52] "Watching apiserver"
Mar 17 18:36:11.265243 kubelet[1897]: I0317 18:36:11.265203 1897 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 17 18:36:11.284279 kubelet[1897]: E0317 18:36:11.284255 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:36:11.284523 kubelet[1897]: E0317 18:36:11.284309 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:36:11.290903 kubelet[1897]: E0317 18:36:11.290863 1897 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Mar 17 18:36:11.291051 kubelet[1897]: E0317 18:36:11.291024 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:36:11.317504 kubelet[1897]: I0317 18:36:11.317445 1897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.317425101 podStartE2EDuration="1.317425101s" podCreationTimestamp="2025-03-17 18:36:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:36:11.307293701 +0000 UTC m=+1.108795006" watchObservedRunningTime="2025-03-17 18:36:11.317425101 +0000 UTC m=+1.118926406"
Mar 17 18:36:11.326691 kubelet[1897]: I0317 18:36:11.326635 1897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.326615822 podStartE2EDuration="1.326615822s" podCreationTimestamp="2025-03-17 18:36:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:36:11.318266826 +0000 UTC m=+1.119768132" watchObservedRunningTime="2025-03-17 18:36:11.326615822 +0000 UTC m=+1.128117117"
Mar 17 18:36:11.337262 kubelet[1897]: I0317 18:36:11.337200 1897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.337182777 podStartE2EDuration="1.337182777s" podCreationTimestamp="2025-03-17 18:36:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:36:11.328118109 +0000 UTC m=+1.129619414" watchObservedRunningTime="2025-03-17 18:36:11.337182777 +0000 UTC m=+1.138684093"
Mar 17 18:36:11.381788 sudo[1930]: pam_unix(sudo:session): session closed for user root
Mar 17 18:36:12.285298 kubelet[1897]: E0317 18:36:12.285260 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:36:12.285786 kubelet[1897]: E0317 18:36:12.285366 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:36:13.387634 sudo[1300]: pam_unix(sudo:session): session closed for user root
Mar 17 18:36:13.388859 sshd[1296]: pam_unix(sshd:session): session closed for user core
Mar 17 18:36:13.390960 systemd[1]: sshd@4-10.0.0.22:22-10.0.0.1:56546.service: Deactivated successfully.
Mar 17 18:36:13.391704 systemd[1]: session-5.scope: Deactivated successfully.
Mar 17 18:36:13.391819 systemd[1]: session-5.scope: Consumed 4.421s CPU time.
Mar 17 18:36:13.392303 systemd-logind[1189]: Session 5 logged out. Waiting for processes to exit.
Mar 17 18:36:13.393181 systemd-logind[1189]: Removed session 5.
Mar 17 18:36:13.781442 kubelet[1897]: E0317 18:36:13.781381 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:36:14.170099 kubelet[1897]: E0317 18:36:14.169971 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:36:15.012454 kubelet[1897]: I0317 18:36:15.012401 1897 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 17 18:36:15.012871 env[1207]: time="2025-03-17T18:36:15.012741181Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 17 18:36:15.013091 kubelet[1897]: I0317 18:36:15.012907 1897 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 17 18:36:15.464450 systemd[1]: Created slice kubepods-besteffort-pod796a0592_c438_4aec_9b40_f50d1e8a5065.slice.
Mar 17 18:36:15.480095 systemd[1]: Created slice kubepods-burstable-poddc4f98bc_4197_48ff_a30a_9b2e5a659a23.slice.
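The recurring dns.go:153 errors in this log come from the resolver limit: the kubelet, mirroring glibc's three-nameserver maximum, applies at most three `nameserver` entries and drops the rest. A sketch of a host resolv.conf that stays within the limit, using the three addresses the log itself reports as applied (any fourth upstream on this host is not shown in the log, so it is left out rather than guessed):

```
# Sketch of /etc/resolv.conf: keeping at most three nameserver lines
# avoids the kubelet's "Nameserver limits exceeded" error.
nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8
```

On systemd-resolved hosts the practical fix is usually to point resolv.conf at the local stub resolver (a single nameserver entry) and configure the upstreams there instead.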
Mar 17 18:36:15.504552 kubelet[1897]: I0317 18:36:15.504508 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dc4f98bc-4197-48ff-a30a-9b2e5a659a23-hubble-tls\") pod \"cilium-s9jgp\" (UID: \"dc4f98bc-4197-48ff-a30a-9b2e5a659a23\") " pod="kube-system/cilium-s9jgp"
Mar 17 18:36:15.504552 kubelet[1897]: I0317 18:36:15.504548 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/796a0592-c438-4aec-9b40-f50d1e8a5065-lib-modules\") pod \"kube-proxy-brt7l\" (UID: \"796a0592-c438-4aec-9b40-f50d1e8a5065\") " pod="kube-system/kube-proxy-brt7l"
Mar 17 18:36:15.504552 kubelet[1897]: I0317 18:36:15.504564 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dc4f98bc-4197-48ff-a30a-9b2e5a659a23-host-proc-sys-net\") pod \"cilium-s9jgp\" (UID: \"dc4f98bc-4197-48ff-a30a-9b2e5a659a23\") " pod="kube-system/cilium-s9jgp"
Mar 17 18:36:15.504792 kubelet[1897]: I0317 18:36:15.504580 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dc4f98bc-4197-48ff-a30a-9b2e5a659a23-hostproc\") pod \"cilium-s9jgp\" (UID: \"dc4f98bc-4197-48ff-a30a-9b2e5a659a23\") " pod="kube-system/cilium-s9jgp"
Mar 17 18:36:15.504792 kubelet[1897]: I0317 18:36:15.504593 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dc4f98bc-4197-48ff-a30a-9b2e5a659a23-cilium-run\") pod \"cilium-s9jgp\" (UID: \"dc4f98bc-4197-48ff-a30a-9b2e5a659a23\") " pod="kube-system/cilium-s9jgp"
Mar 17 18:36:15.504792 kubelet[1897]: I0317 18:36:15.504608 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvwvc\" (UniqueName: \"kubernetes.io/projected/dc4f98bc-4197-48ff-a30a-9b2e5a659a23-kube-api-access-cvwvc\") pod \"cilium-s9jgp\" (UID: \"dc4f98bc-4197-48ff-a30a-9b2e5a659a23\") " pod="kube-system/cilium-s9jgp"
Mar 17 18:36:15.504792 kubelet[1897]: I0317 18:36:15.504665 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/796a0592-c438-4aec-9b40-f50d1e8a5065-kube-proxy\") pod \"kube-proxy-brt7l\" (UID: \"796a0592-c438-4aec-9b40-f50d1e8a5065\") " pod="kube-system/kube-proxy-brt7l"
Mar 17 18:36:15.504792 kubelet[1897]: I0317 18:36:15.504711 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2frq\" (UniqueName: \"kubernetes.io/projected/796a0592-c438-4aec-9b40-f50d1e8a5065-kube-api-access-g2frq\") pod \"kube-proxy-brt7l\" (UID: \"796a0592-c438-4aec-9b40-f50d1e8a5065\") " pod="kube-system/kube-proxy-brt7l"
Mar 17 18:36:15.504792 kubelet[1897]: I0317 18:36:15.504728 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dc4f98bc-4197-48ff-a30a-9b2e5a659a23-lib-modules\") pod \"cilium-s9jgp\" (UID: \"dc4f98bc-4197-48ff-a30a-9b2e5a659a23\") " pod="kube-system/cilium-s9jgp"
Mar 17 18:36:15.504947 kubelet[1897]: I0317 18:36:15.504760 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dc4f98bc-4197-48ff-a30a-9b2e5a659a23-xtables-lock\") pod \"cilium-s9jgp\" (UID: \"dc4f98bc-4197-48ff-a30a-9b2e5a659a23\") " pod="kube-system/cilium-s9jgp"
Mar 17 18:36:15.504947 kubelet[1897]: I0317 18:36:15.504808 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dc4f98bc-4197-48ff-a30a-9b2e5a659a23-etc-cni-netd\") pod \"cilium-s9jgp\" (UID: \"dc4f98bc-4197-48ff-a30a-9b2e5a659a23\") " pod="kube-system/cilium-s9jgp"
Mar 17 18:36:15.504947 kubelet[1897]: I0317 18:36:15.504829 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dc4f98bc-4197-48ff-a30a-9b2e5a659a23-clustermesh-secrets\") pod \"cilium-s9jgp\" (UID: \"dc4f98bc-4197-48ff-a30a-9b2e5a659a23\") " pod="kube-system/cilium-s9jgp"
Mar 17 18:36:15.504947 kubelet[1897]: I0317 18:36:15.504848 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dc4f98bc-4197-48ff-a30a-9b2e5a659a23-cni-path\") pod \"cilium-s9jgp\" (UID: \"dc4f98bc-4197-48ff-a30a-9b2e5a659a23\") " pod="kube-system/cilium-s9jgp"
Mar 17 18:36:15.504947 kubelet[1897]: I0317 18:36:15.504865 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dc4f98bc-4197-48ff-a30a-9b2e5a659a23-host-proc-sys-kernel\") pod \"cilium-s9jgp\" (UID: \"dc4f98bc-4197-48ff-a30a-9b2e5a659a23\") " pod="kube-system/cilium-s9jgp"
Mar 17 18:36:15.504947 kubelet[1897]: I0317 18:36:15.504897 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dc4f98bc-4197-48ff-a30a-9b2e5a659a23-cilium-cgroup\") pod \"cilium-s9jgp\" (UID: \"dc4f98bc-4197-48ff-a30a-9b2e5a659a23\") " pod="kube-system/cilium-s9jgp"
Mar 17 18:36:15.505077 kubelet[1897]: I0317 18:36:15.504912 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dc4f98bc-4197-48ff-a30a-9b2e5a659a23-cilium-config-path\") pod \"cilium-s9jgp\" (UID: \"dc4f98bc-4197-48ff-a30a-9b2e5a659a23\") " pod="kube-system/cilium-s9jgp"
Mar 17 18:36:15.505077 kubelet[1897]: I0317 18:36:15.504928 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dc4f98bc-4197-48ff-a30a-9b2e5a659a23-bpf-maps\") pod \"cilium-s9jgp\" (UID: \"dc4f98bc-4197-48ff-a30a-9b2e5a659a23\") " pod="kube-system/cilium-s9jgp"
Mar 17 18:36:15.505077 kubelet[1897]: I0317 18:36:15.504944 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/796a0592-c438-4aec-9b40-f50d1e8a5065-xtables-lock\") pod \"kube-proxy-brt7l\" (UID: \"796a0592-c438-4aec-9b40-f50d1e8a5065\") " pod="kube-system/kube-proxy-brt7l"
Mar 17 18:36:15.605801 kubelet[1897]: I0317 18:36:15.605761 1897 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Mar 17 18:36:15.776592 kubelet[1897]: E0317 18:36:15.776539 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:36:15.777343 env[1207]: time="2025-03-17T18:36:15.777287713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-brt7l,Uid:796a0592-c438-4aec-9b40-f50d1e8a5065,Namespace:kube-system,Attempt:0,}"
Mar 17 18:36:15.783161 kubelet[1897]: E0317 18:36:15.783113 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:36:15.783738 env[1207]: time="2025-03-17T18:36:15.783682013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-s9jgp,Uid:dc4f98bc-4197-48ff-a30a-9b2e5a659a23,Namespace:kube-system,Attempt:0,}"
Mar 17 18:36:15.998590 systemd[1]: Created slice kubepods-besteffort-pod000bc3f1_fa83_4612_a007_5b62bc87360b.slice.
Mar 17 18:36:16.012988 kubelet[1897]: I0317 18:36:16.012401 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ph4s\" (UniqueName: \"kubernetes.io/projected/000bc3f1-fa83-4612-a007-5b62bc87360b-kube-api-access-4ph4s\") pod \"cilium-operator-5d85765b45-hx9z4\" (UID: \"000bc3f1-fa83-4612-a007-5b62bc87360b\") " pod="kube-system/cilium-operator-5d85765b45-hx9z4"
Mar 17 18:36:16.012988 kubelet[1897]: I0317 18:36:16.012433 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/000bc3f1-fa83-4612-a007-5b62bc87360b-cilium-config-path\") pod \"cilium-operator-5d85765b45-hx9z4\" (UID: \"000bc3f1-fa83-4612-a007-5b62bc87360b\") " pod="kube-system/cilium-operator-5d85765b45-hx9z4"
Mar 17 18:36:16.029291 env[1207]: time="2025-03-17T18:36:16.028882734Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:36:16.029291 env[1207]: time="2025-03-17T18:36:16.028988558Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:36:16.029291 env[1207]: time="2025-03-17T18:36:16.029011824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:36:16.029291 env[1207]: time="2025-03-17T18:36:16.029152602Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/11012e15680db8ff8f96fb821cffcb13d14ac3cdc6aace3988a79061811b7200 pid=1997 runtime=io.containerd.runc.v2
Mar 17 18:36:16.035471 env[1207]: time="2025-03-17T18:36:16.034763171Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:36:16.035471 env[1207]: time="2025-03-17T18:36:16.034808901Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:36:16.035471 env[1207]: time="2025-03-17T18:36:16.034829270Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:36:16.035471 env[1207]: time="2025-03-17T18:36:16.034980874Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3d589d9f91b699ea2a980a280e60e70f55581811c1d601d13bd13759b1d958b5 pid=1999 runtime=io.containerd.runc.v2
Mar 17 18:36:16.056086 systemd[1]: Started cri-containerd-11012e15680db8ff8f96fb821cffcb13d14ac3cdc6aace3988a79061811b7200.scope.
Mar 17 18:36:16.062158 systemd[1]: Started cri-containerd-3d589d9f91b699ea2a980a280e60e70f55581811c1d601d13bd13759b1d958b5.scope.
Mar 17 18:36:16.085694 env[1207]: time="2025-03-17T18:36:16.085647768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-brt7l,Uid:796a0592-c438-4aec-9b40-f50d1e8a5065,Namespace:kube-system,Attempt:0,} returns sandbox id \"11012e15680db8ff8f96fb821cffcb13d14ac3cdc6aace3988a79061811b7200\""
Mar 17 18:36:16.087420 kubelet[1897]: E0317 18:36:16.087395 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:36:16.089842 env[1207]: time="2025-03-17T18:36:16.089813998Z" level=info msg="CreateContainer within sandbox \"11012e15680db8ff8f96fb821cffcb13d14ac3cdc6aace3988a79061811b7200\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 17 18:36:16.093373 env[1207]: time="2025-03-17T18:36:16.093331087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-s9jgp,Uid:dc4f98bc-4197-48ff-a30a-9b2e5a659a23,Namespace:kube-system,Attempt:0,} returns sandbox id \"3d589d9f91b699ea2a980a280e60e70f55581811c1d601d13bd13759b1d958b5\""
Mar 17 18:36:16.094529 kubelet[1897]: E0317 18:36:16.094405 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:36:16.097916 env[1207]: time="2025-03-17T18:36:16.097866513Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Mar 17 18:36:16.114620 env[1207]: time="2025-03-17T18:36:16.114575685Z" level=info msg="CreateContainer within sandbox \"11012e15680db8ff8f96fb821cffcb13d14ac3cdc6aace3988a79061811b7200\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2b9ca97a23861867610d491bc629b29c3929dd2b868250319f830989aea6bbce\""
Mar 17 18:36:16.115485 env[1207]: time="2025-03-17T18:36:16.115461072Z" level=info msg="StartContainer for \"2b9ca97a23861867610d491bc629b29c3929dd2b868250319f830989aea6bbce\""
Mar 17 18:36:16.131079 systemd[1]: Started cri-containerd-2b9ca97a23861867610d491bc629b29c3929dd2b868250319f830989aea6bbce.scope.
Mar 17 18:36:16.160516 env[1207]: time="2025-03-17T18:36:16.160458426Z" level=info msg="StartContainer for \"2b9ca97a23861867610d491bc629b29c3929dd2b868250319f830989aea6bbce\" returns successfully"
Mar 17 18:36:16.293692 kubelet[1897]: E0317 18:36:16.293575 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:36:16.301886 kubelet[1897]: I0317 18:36:16.301820 1897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-brt7l" podStartSLOduration=1.301801314 podStartE2EDuration="1.301801314s" podCreationTimestamp="2025-03-17 18:36:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:36:16.301676794 +0000 UTC m=+6.103178099" watchObservedRunningTime="2025-03-17 18:36:16.301801314 +0000 UTC m=+6.103302619"
Mar 17 18:36:16.302312 kubelet[1897]: E0317 18:36:16.302277 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:36:16.302912 env[1207]: time="2025-03-17T18:36:16.302864638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-hx9z4,Uid:000bc3f1-fa83-4612-a007-5b62bc87360b,Namespace:kube-system,Attempt:0,}"
Mar 17 18:36:16.324956 env[1207]: time="2025-03-17T18:36:16.324875698Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:36:16.324956 env[1207]: time="2025-03-17T18:36:16.324922090Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:36:16.325217 env[1207]: time="2025-03-17T18:36:16.324935823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:36:16.325582 env[1207]: time="2025-03-17T18:36:16.325504540Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec0d73ffaae6b574cc375735e1aac3e8625213f96732cbc8e452a1cdcffe68ed pid=2147 runtime=io.containerd.runc.v2
Mar 17 18:36:16.338173 systemd[1]: Started cri-containerd-ec0d73ffaae6b574cc375735e1aac3e8625213f96732cbc8e452a1cdcffe68ed.scope.
Mar 17 18:36:16.371310 env[1207]: time="2025-03-17T18:36:16.371263122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-hx9z4,Uid:000bc3f1-fa83-4612-a007-5b62bc87360b,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec0d73ffaae6b574cc375735e1aac3e8625213f96732cbc8e452a1cdcffe68ed\""
Mar 17 18:36:16.373259 kubelet[1897]: E0317 18:36:16.372213 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:36:17.086151 update_engine[1192]: I0317 18:36:17.086061 1192 update_attempter.cc:509] Updating boot flags...
Mar 17 18:36:20.870560 kubelet[1897]: E0317 18:36:20.870511 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:36:23.327988 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1751868167.mount: Deactivated successfully.
Mar 17 18:36:23.786238 kubelet[1897]: E0317 18:36:23.786200 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:36:24.174892 kubelet[1897]: E0317 18:36:24.174778 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:36:24.496547 kubelet[1897]: E0317 18:36:24.496418 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:36:29.572895 env[1207]: time="2025-03-17T18:36:29.572811305Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:36:29.731258 env[1207]: time="2025-03-17T18:36:29.731186959Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:36:29.778164 env[1207]: time="2025-03-17T18:36:29.778130460Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:36:29.778683 env[1207]: time="2025-03-17T18:36:29.778647610Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Mar 17 18:36:29.779663 env[1207]: time="2025-03-17T18:36:29.779627745Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Mar 17 18:36:29.780597 env[1207]: time="2025-03-17T18:36:29.780566268Z" level=info msg="CreateContainer within sandbox \"3d589d9f91b699ea2a980a280e60e70f55581811c1d601d13bd13759b1d958b5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 17 18:36:30.802897 env[1207]: time="2025-03-17T18:36:30.802820341Z" level=info msg="CreateContainer within sandbox \"3d589d9f91b699ea2a980a280e60e70f55581811c1d601d13bd13759b1d958b5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6ba6efd1f4d023be38a54cecb9e4b4a3c6b1e93a167c2634d5ec0be92ff2f63e\""
Mar 17 18:36:30.803557 env[1207]: time="2025-03-17T18:36:30.803508916Z" level=info msg="StartContainer for \"6ba6efd1f4d023be38a54cecb9e4b4a3c6b1e93a167c2634d5ec0be92ff2f63e\""
Mar 17 18:36:30.819554 systemd[1]: run-containerd-runc-k8s.io-6ba6efd1f4d023be38a54cecb9e4b4a3c6b1e93a167c2634d5ec0be92ff2f63e-runc.o3L6ya.mount: Deactivated successfully.
Mar 17 18:36:30.822047 systemd[1]: Started cri-containerd-6ba6efd1f4d023be38a54cecb9e4b4a3c6b1e93a167c2634d5ec0be92ff2f63e.scope.
Mar 17 18:36:30.858926 systemd[1]: cri-containerd-6ba6efd1f4d023be38a54cecb9e4b4a3c6b1e93a167c2634d5ec0be92ff2f63e.scope: Deactivated successfully.
Mar 17 18:36:31.582075 env[1207]: time="2025-03-17T18:36:31.581967467Z" level=info msg="StartContainer for \"6ba6efd1f4d023be38a54cecb9e4b4a3c6b1e93a167c2634d5ec0be92ff2f63e\" returns successfully"
Mar 17 18:36:31.598855 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6ba6efd1f4d023be38a54cecb9e4b4a3c6b1e93a167c2634d5ec0be92ff2f63e-rootfs.mount: Deactivated successfully.
Mar 17 18:36:31.654115 kubelet[1897]: E0317 18:36:31.653799 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:36:31.661136 env[1207]: time="2025-03-17T18:36:31.661066870Z" level=info msg="shim disconnected" id=6ba6efd1f4d023be38a54cecb9e4b4a3c6b1e93a167c2634d5ec0be92ff2f63e
Mar 17 18:36:31.661136 env[1207]: time="2025-03-17T18:36:31.661111956Z" level=warning msg="cleaning up after shim disconnected" id=6ba6efd1f4d023be38a54cecb9e4b4a3c6b1e93a167c2634d5ec0be92ff2f63e namespace=k8s.io
Mar 17 18:36:31.661136 env[1207]: time="2025-03-17T18:36:31.661120414Z" level=info msg="cleaning up dead shim"
Mar 17 18:36:31.670457 env[1207]: time="2025-03-17T18:36:31.670408559Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:36:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2337 runtime=io.containerd.runc.v2\n"
Mar 17 18:36:32.656715 kubelet[1897]: E0317 18:36:32.656680 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:36:32.658426 env[1207]: time="2025-03-17T18:36:32.658377097Z" level=info msg="CreateContainer within sandbox \"3d589d9f91b699ea2a980a280e60e70f55581811c1d601d13bd13759b1d958b5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 17 18:36:33.079971 env[1207]: time="2025-03-17T18:36:33.079892081Z" level=info msg="CreateContainer within sandbox \"3d589d9f91b699ea2a980a280e60e70f55581811c1d601d13bd13759b1d958b5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2d3b50c5af020383ba63509e96e066e90455b9e97be9e76c567f5204dbf032f1\""
Mar 17 18:36:33.080633 env[1207]: time="2025-03-17T18:36:33.080584222Z" level=info msg="StartContainer for \"2d3b50c5af020383ba63509e96e066e90455b9e97be9e76c567f5204dbf032f1\""
Mar 17 18:36:33.100085 systemd[1]: Started cri-containerd-2d3b50c5af020383ba63509e96e066e90455b9e97be9e76c567f5204dbf032f1.scope.
Mar 17 18:36:33.138433 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 17 18:36:33.138703 systemd[1]: Stopped systemd-sysctl.service.
Mar 17 18:36:33.138881 systemd[1]: Stopping systemd-sysctl.service...
Mar 17 18:36:33.140368 systemd[1]: Starting systemd-sysctl.service...
Mar 17 18:36:33.141636 env[1207]: time="2025-03-17T18:36:33.141587321Z" level=info msg="StartContainer for \"2d3b50c5af020383ba63509e96e066e90455b9e97be9e76c567f5204dbf032f1\" returns successfully"
Mar 17 18:36:33.146666 systemd[1]: cri-containerd-2d3b50c5af020383ba63509e96e066e90455b9e97be9e76c567f5204dbf032f1.scope: Deactivated successfully.
Mar 17 18:36:33.150222 systemd[1]: Finished systemd-sysctl.service.
Mar 17 18:36:33.180514 env[1207]: time="2025-03-17T18:36:33.180456991Z" level=info msg="shim disconnected" id=2d3b50c5af020383ba63509e96e066e90455b9e97be9e76c567f5204dbf032f1
Mar 17 18:36:33.180768 env[1207]: time="2025-03-17T18:36:33.180742208Z" level=warning msg="cleaning up after shim disconnected" id=2d3b50c5af020383ba63509e96e066e90455b9e97be9e76c567f5204dbf032f1 namespace=k8s.io
Mar 17 18:36:33.180833 env[1207]: time="2025-03-17T18:36:33.180762481Z" level=info msg="cleaning up dead shim"
Mar 17 18:36:33.187230 env[1207]: time="2025-03-17T18:36:33.187179253Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:36:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2400 runtime=io.containerd.runc.v2\n"
Mar 17 18:36:33.422097 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d3b50c5af020383ba63509e96e066e90455b9e97be9e76c567f5204dbf032f1-rootfs.mount: Deactivated successfully.
Mar 17 18:36:33.659486 kubelet[1897]: E0317 18:36:33.659450 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:36:33.661858 env[1207]: time="2025-03-17T18:36:33.661247188Z" level=info msg="CreateContainer within sandbox \"3d589d9f91b699ea2a980a280e60e70f55581811c1d601d13bd13759b1d958b5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 17 18:36:34.076197 env[1207]: time="2025-03-17T18:36:34.076129893Z" level=info msg="CreateContainer within sandbox \"3d589d9f91b699ea2a980a280e60e70f55581811c1d601d13bd13759b1d958b5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"45c34afc61f4a8ff5e9a0f6d15787062e370117a666f08c86c08e35e19b9cb02\""
Mar 17 18:36:34.076863 env[1207]: time="2025-03-17T18:36:34.076822620Z" level=info msg="StartContainer for \"45c34afc61f4a8ff5e9a0f6d15787062e370117a666f08c86c08e35e19b9cb02\""
Mar 17 18:36:34.095496 systemd[1]: Started cri-containerd-45c34afc61f4a8ff5e9a0f6d15787062e370117a666f08c86c08e35e19b9cb02.scope.
Mar 17 18:36:34.095854 env[1207]: time="2025-03-17T18:36:34.095528498Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:36:34.103413 env[1207]: time="2025-03-17T18:36:34.102292060Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:36:34.155693 systemd[1]: cri-containerd-45c34afc61f4a8ff5e9a0f6d15787062e370117a666f08c86c08e35e19b9cb02.scope: Deactivated successfully.
Mar 17 18:36:34.341051 env[1207]: time="2025-03-17T18:36:34.340660281Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:36:34.341249 env[1207]: time="2025-03-17T18:36:34.341178677Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Mar 17 18:36:34.341998 env[1207]: time="2025-03-17T18:36:34.341862697Z" level=info msg="StartContainer for \"45c34afc61f4a8ff5e9a0f6d15787062e370117a666f08c86c08e35e19b9cb02\" returns successfully"
Mar 17 18:36:34.343929 env[1207]: time="2025-03-17T18:36:34.343880890Z" level=info msg="CreateContainer within sandbox \"ec0d73ffaae6b574cc375735e1aac3e8625213f96732cbc8e452a1cdcffe68ed\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Mar 17 18:36:34.421842 systemd[1]: run-containerd-runc-k8s.io-45c34afc61f4a8ff5e9a0f6d15787062e370117a666f08c86c08e35e19b9cb02-runc.hfD1Ej.mount: Deactivated successfully.
Mar 17 18:36:34.421933 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-45c34afc61f4a8ff5e9a0f6d15787062e370117a666f08c86c08e35e19b9cb02-rootfs.mount: Deactivated successfully.
Mar 17 18:36:34.432805 env[1207]: time="2025-03-17T18:36:34.432750335Z" level=info msg="shim disconnected" id=45c34afc61f4a8ff5e9a0f6d15787062e370117a666f08c86c08e35e19b9cb02
Mar 17 18:36:34.432805 env[1207]: time="2025-03-17T18:36:34.432804059Z" level=warning msg="cleaning up after shim disconnected" id=45c34afc61f4a8ff5e9a0f6d15787062e370117a666f08c86c08e35e19b9cb02 namespace=k8s.io
Mar 17 18:36:34.432907 env[1207]: time="2025-03-17T18:36:34.432812858Z" level=info msg="cleaning up dead shim"
Mar 17 18:36:34.438773 env[1207]: time="2025-03-17T18:36:34.438736741Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:36:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2455 runtime=io.containerd.runc.v2\n"
Mar 17 18:36:34.626236 env[1207]: time="2025-03-17T18:36:34.626139796Z" level=info msg="CreateContainer within sandbox \"ec0d73ffaae6b574cc375735e1aac3e8625213f96732cbc8e452a1cdcffe68ed\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"30a14930bdc38e3a96121d899a1c31d5bc410dca6b4e719602f631df8de1ab83\""
Mar 17 18:36:34.626677 env[1207]: time="2025-03-17T18:36:34.626625223Z" level=info msg="StartContainer for \"30a14930bdc38e3a96121d899a1c31d5bc410dca6b4e719602f631df8de1ab83\""
Mar 17 18:36:34.640660 systemd[1]: Started cri-containerd-30a14930bdc38e3a96121d899a1c31d5bc410dca6b4e719602f631df8de1ab83.scope.
Mar 17 18:36:34.764213 env[1207]: time="2025-03-17T18:36:34.764155977Z" level=info msg="StartContainer for \"30a14930bdc38e3a96121d899a1c31d5bc410dca6b4e719602f631df8de1ab83\" returns successfully"
Mar 17 18:36:34.768419 kubelet[1897]: E0317 18:36:34.768127 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:36:34.770325 env[1207]: time="2025-03-17T18:36:34.770287239Z" level=info msg="CreateContainer within sandbox \"3d589d9f91b699ea2a980a280e60e70f55581811c1d601d13bd13759b1d958b5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 17 18:36:34.771968 kubelet[1897]: E0317 18:36:34.771926 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:36:34.922491 env[1207]: time="2025-03-17T18:36:34.922337817Z" level=info msg="CreateContainer within sandbox \"3d589d9f91b699ea2a980a280e60e70f55581811c1d601d13bd13759b1d958b5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"eeacf9d83d4b7e2ba540ec2cb8a65894f0c8f0bc4e1e5c68f66a174187554711\""
Mar 17 18:36:34.922839 env[1207]: time="2025-03-17T18:36:34.922750530Z" level=info msg="StartContainer for \"eeacf9d83d4b7e2ba540ec2cb8a65894f0c8f0bc4e1e5c68f66a174187554711\""
Mar 17 18:36:34.971514 systemd[1]: Started cri-containerd-eeacf9d83d4b7e2ba540ec2cb8a65894f0c8f0bc4e1e5c68f66a174187554711.scope.
Mar 17 18:36:35.003023 systemd[1]: cri-containerd-eeacf9d83d4b7e2ba540ec2cb8a65894f0c8f0bc4e1e5c68f66a174187554711.scope: Deactivated successfully.
Mar 17 18:36:35.007944 env[1207]: time="2025-03-17T18:36:35.007895779Z" level=info msg="StartContainer for \"eeacf9d83d4b7e2ba540ec2cb8a65894f0c8f0bc4e1e5c68f66a174187554711\" returns successfully"
Mar 17 18:36:35.029799 env[1207]: time="2025-03-17T18:36:35.029749566Z" level=info msg="shim disconnected" id=eeacf9d83d4b7e2ba540ec2cb8a65894f0c8f0bc4e1e5c68f66a174187554711
Mar 17 18:36:35.029799 env[1207]: time="2025-03-17T18:36:35.029795874Z" level=warning msg="cleaning up after shim disconnected" id=eeacf9d83d4b7e2ba540ec2cb8a65894f0c8f0bc4e1e5c68f66a174187554711 namespace=k8s.io
Mar 17 18:36:35.029799 env[1207]: time="2025-03-17T18:36:35.029804212Z" level=info msg="cleaning up dead shim"
Mar 17 18:36:35.036224 env[1207]: time="2025-03-17T18:36:35.036184008Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:36:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2549 runtime=io.containerd.runc.v2\n"
Mar 17 18:36:35.421744 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3614276264.mount: Deactivated successfully.
Mar 17 18:36:35.776089 kubelet[1897]: E0317 18:36:35.776059 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:36:35.776525 kubelet[1897]: E0317 18:36:35.776106 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:36:35.777635 env[1207]: time="2025-03-17T18:36:35.777601527Z" level=info msg="CreateContainer within sandbox \"3d589d9f91b699ea2a980a280e60e70f55581811c1d601d13bd13759b1d958b5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 17 18:36:36.014331 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2944942285.mount: Deactivated successfully.
Mar 17 18:36:36.240829 kubelet[1897]: I0317 18:36:36.240661 1897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-hx9z4" podStartSLOduration=3.270930431 podStartE2EDuration="21.240644643s" podCreationTimestamp="2025-03-17 18:36:15 +0000 UTC" firstStartedPulling="2025-03-17 18:36:16.372834286 +0000 UTC m=+6.174335581" lastFinishedPulling="2025-03-17 18:36:34.342548488 +0000 UTC m=+24.144049793" observedRunningTime="2025-03-17 18:36:34.830977307 +0000 UTC m=+24.632478612" watchObservedRunningTime="2025-03-17 18:36:36.240644643 +0000 UTC m=+26.042145948"
Mar 17 18:36:36.469447 env[1207]: time="2025-03-17T18:36:36.469364705Z" level=info msg="CreateContainer within sandbox \"3d589d9f91b699ea2a980a280e60e70f55581811c1d601d13bd13759b1d958b5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b0192bd4ca7e2ffa165eaf290ab8dac3f8715ef8c5ebf8340a9881f4ffba8769\""
Mar 17 18:36:36.469791 env[1207]: time="2025-03-17T18:36:36.469767012Z" level=info msg="StartContainer for \"b0192bd4ca7e2ffa165eaf290ab8dac3f8715ef8c5ebf8340a9881f4ffba8769\""
Mar 17 18:36:36.488128 systemd[1]: Started cri-containerd-b0192bd4ca7e2ffa165eaf290ab8dac3f8715ef8c5ebf8340a9881f4ffba8769.scope.
Mar 17 18:36:36.534323 env[1207]: time="2025-03-17T18:36:36.533582334Z" level=info msg="StartContainer for \"b0192bd4ca7e2ffa165eaf290ab8dac3f8715ef8c5ebf8340a9881f4ffba8769\" returns successfully"
Mar 17 18:36:36.564364 systemd[1]: run-containerd-runc-k8s.io-b0192bd4ca7e2ffa165eaf290ab8dac3f8715ef8c5ebf8340a9881f4ffba8769-runc.c0fLhJ.mount: Deactivated successfully.
Mar 17 18:36:36.650365 kubelet[1897]: I0317 18:36:36.650322 1897 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Mar 17 18:36:36.684618 systemd[1]: Created slice kubepods-burstable-pod83f33465_660e_497f_9d04_a088ad0a4771.slice.
Mar 17 18:36:36.690026 systemd[1]: Created slice kubepods-burstable-podb3a73e71_0212_4061_93e8_aafb3c90e375.slice.
Mar 17 18:36:36.697341 kubelet[1897]: I0317 18:36:36.697288 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83f33465-660e-497f-9d04-a088ad0a4771-config-volume\") pod \"coredns-6f6b679f8f-cq2mm\" (UID: \"83f33465-660e-497f-9d04-a088ad0a4771\") " pod="kube-system/coredns-6f6b679f8f-cq2mm"
Mar 17 18:36:36.697341 kubelet[1897]: I0317 18:36:36.697335 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3a73e71-0212-4061-93e8-aafb3c90e375-config-volume\") pod \"coredns-6f6b679f8f-9d44j\" (UID: \"b3a73e71-0212-4061-93e8-aafb3c90e375\") " pod="kube-system/coredns-6f6b679f8f-9d44j"
Mar 17 18:36:36.697593 kubelet[1897]: I0317 18:36:36.697358 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjr9n\" (UniqueName: \"kubernetes.io/projected/b3a73e71-0212-4061-93e8-aafb3c90e375-kube-api-access-cjr9n\") pod \"coredns-6f6b679f8f-9d44j\" (UID: \"b3a73e71-0212-4061-93e8-aafb3c90e375\") " pod="kube-system/coredns-6f6b679f8f-9d44j"
Mar 17 18:36:36.697593 kubelet[1897]: I0317 18:36:36.697381 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z7cd\" (UniqueName: \"kubernetes.io/projected/83f33465-660e-497f-9d04-a088ad0a4771-kube-api-access-4z7cd\") pod \"coredns-6f6b679f8f-cq2mm\" (UID: \"83f33465-660e-497f-9d04-a088ad0a4771\") " pod="kube-system/coredns-6f6b679f8f-cq2mm"
Mar 17 18:36:36.779677 kubelet[1897]: E0317 18:36:36.779647 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:36:36.795981 kubelet[1897]: I0317 18:36:36.795805 1897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-s9jgp" podStartSLOduration=8.112596083 podStartE2EDuration="21.795785891s" podCreationTimestamp="2025-03-17 18:36:15 +0000 UTC" firstStartedPulling="2025-03-17 18:36:16.096288462 +0000 UTC m=+5.897789767" lastFinishedPulling="2025-03-17 18:36:29.77947827 +0000 UTC m=+19.580979575" observedRunningTime="2025-03-17 18:36:36.795588135 +0000 UTC m=+26.597089460" watchObservedRunningTime="2025-03-17 18:36:36.795785891 +0000 UTC m=+26.597287196"
Mar 17 18:36:36.988198 kubelet[1897]: E0317 18:36:36.988152 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:36:36.988926 env[1207]: time="2025-03-17T18:36:36.988862082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-cq2mm,Uid:83f33465-660e-497f-9d04-a088ad0a4771,Namespace:kube-system,Attempt:0,}"
Mar 17 18:36:36.994143 kubelet[1897]: E0317 18:36:36.994113 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:36:36.994510 env[1207]: time="2025-03-17T18:36:36.994467863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-9d44j,Uid:b3a73e71-0212-4061-93e8-aafb3c90e375,Namespace:kube-system,Attempt:0,}"
Mar 17 18:36:37.780808 kubelet[1897]: E0317 18:36:37.780771 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:36:38.683955 systemd-networkd[1021]: cilium_host: Link UP
Mar 17 18:36:38.684470 systemd-networkd[1021]: cilium_net: Link UP
Mar 17 18:36:38.685875 systemd-networkd[1021]: cilium_net: Gained carrier
Mar 17 18:36:38.687017 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Mar 17 18:36:38.687091 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Mar 17 18:36:38.687154 systemd-networkd[1021]: cilium_host: Gained carrier
Mar 17 18:36:38.687290 systemd-networkd[1021]: cilium_host: Gained IPv6LL
Mar 17 18:36:38.770006 systemd-networkd[1021]: cilium_vxlan: Link UP
Mar 17 18:36:38.770015 systemd-networkd[1021]: cilium_vxlan: Gained carrier
Mar 17 18:36:38.780507 systemd-networkd[1021]: cilium_net: Gained IPv6LL
Mar 17 18:36:38.805781 kubelet[1897]: E0317 18:36:38.805736 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:36:38.990421 kernel: NET: Registered PF_ALG protocol family
Mar 17 18:36:39.544954 systemd-networkd[1021]: lxc_health: Link UP
Mar 17 18:36:39.570564 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Mar 17 18:36:39.570258 systemd-networkd[1021]: lxc_health: Gained carrier
Mar 17 18:36:39.796473 kubelet[1897]: E0317 18:36:39.796333 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:36:39.827041 systemd-networkd[1021]: lxcc70d64f43fa6: Link UP
Mar 17 18:36:39.837496 kernel: eth0: renamed from tmpff308
Mar 17 18:36:39.844723 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Mar 17 18:36:39.844797 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcc70d64f43fa6: link becomes ready
Mar 17 18:36:39.845026 systemd-networkd[1021]: lxcc70d64f43fa6: Gained carrier
Mar 17 18:36:39.996530 systemd-networkd[1021]: lxce2817cc9b199: Link UP
Mar 17 18:36:40.008459 kernel: eth0: renamed from tmpf8f91
Mar 17 18:36:40.014125 systemd-networkd[1021]: lxce2817cc9b199: Gained carrier
Mar 17 18:36:40.014670 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxce2817cc9b199: link becomes ready
Mar 17 18:36:40.158533 systemd[1]: Started sshd@5-10.0.0.22:22-10.0.0.1:33688.service.
Mar 17 18:36:40.211275 sshd[3110]: Accepted publickey for core from 10.0.0.1 port 33688 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo
Mar 17 18:36:40.213296 sshd[3110]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:36:40.218266 systemd[1]: Started session-6.scope.
Mar 17 18:36:40.219113 systemd-logind[1189]: New session 6 of user core.
Mar 17 18:36:40.435066 sshd[3110]: pam_unix(sshd:session): session closed for user core
Mar 17 18:36:40.437234 systemd[1]: sshd@5-10.0.0.22:22-10.0.0.1:33688.service: Deactivated successfully.
Mar 17 18:36:40.437953 systemd[1]: session-6.scope: Deactivated successfully.
Mar 17 18:36:40.438443 systemd-logind[1189]: Session 6 logged out. Waiting for processes to exit.
Mar 17 18:36:40.439147 systemd-logind[1189]: Removed session 6.
Mar 17 18:36:40.764606 systemd-networkd[1021]: cilium_vxlan: Gained IPv6LL
Mar 17 18:36:40.797965 kubelet[1897]: E0317 18:36:40.797929 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:36:41.084538 systemd-networkd[1021]: lxc_health: Gained IPv6LL
Mar 17 18:36:41.340566 systemd-networkd[1021]: lxcc70d64f43fa6: Gained IPv6LL
Mar 17 18:36:41.532557 systemd-networkd[1021]: lxce2817cc9b199: Gained IPv6LL
Mar 17 18:36:43.327332 env[1207]: time="2025-03-17T18:36:43.327250646Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:36:43.327332 env[1207]: time="2025-03-17T18:36:43.327303936Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:36:43.327332 env[1207]: time="2025-03-17T18:36:43.327316742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:36:43.327836 env[1207]: time="2025-03-17T18:36:43.327581628Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ff3083568f7cd1be3645dab550f59bdc9306a3bb5404889df3ea1309feee8e8b pid=3156 runtime=io.containerd.runc.v2
Mar 17 18:36:43.327836 env[1207]: time="2025-03-17T18:36:43.327733872Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:36:43.327836 env[1207]: time="2025-03-17T18:36:43.327781610Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:36:43.327836 env[1207]: time="2025-03-17T18:36:43.327791520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:36:43.328063 env[1207]: time="2025-03-17T18:36:43.327998798Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f8f9107a7fd360f6720d67a81b961664f26e2cb36a0ce9b4c60955bc7c33d763 pid=3165 runtime=io.containerd.runc.v2
Mar 17 18:36:43.350151 systemd[1]: Started cri-containerd-f8f9107a7fd360f6720d67a81b961664f26e2cb36a0ce9b4c60955bc7c33d763.scope.
Mar 17 18:36:43.354108 systemd[1]: Started cri-containerd-ff3083568f7cd1be3645dab550f59bdc9306a3bb5404889df3ea1309feee8e8b.scope.
Mar 17 18:36:43.363335 systemd-resolved[1138]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 17 18:36:43.365126 systemd-resolved[1138]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 17 18:36:43.387467 env[1207]: time="2025-03-17T18:36:43.387426001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-cq2mm,Uid:83f33465-660e-497f-9d04-a088ad0a4771,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff3083568f7cd1be3645dab550f59bdc9306a3bb5404889df3ea1309feee8e8b\""
Mar 17 18:36:43.388208 kubelet[1897]: E0317 18:36:43.388170 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:36:43.391831 env[1207]: time="2025-03-17T18:36:43.391796145Z" level=info msg="CreateContainer within sandbox \"ff3083568f7cd1be3645dab550f59bdc9306a3bb5404889df3ea1309feee8e8b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 17 18:36:43.395011 env[1207]: time="2025-03-17T18:36:43.394972729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-9d44j,Uid:b3a73e71-0212-4061-93e8-aafb3c90e375,Namespace:kube-system,Attempt:0,} returns sandbox id \"f8f9107a7fd360f6720d67a81b961664f26e2cb36a0ce9b4c60955bc7c33d763\""
Mar 17 18:36:43.396356 kubelet[1897]: E0317 18:36:43.395961 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:36:43.398096 env[1207]: time="2025-03-17T18:36:43.397677942Z" level=info msg="CreateContainer within sandbox \"f8f9107a7fd360f6720d67a81b961664f26e2cb36a0ce9b4c60955bc7c33d763\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 17 18:36:43.415831 env[1207]: time="2025-03-17T18:36:43.415773062Z" level=info msg="CreateContainer within sandbox \"ff3083568f7cd1be3645dab550f59bdc9306a3bb5404889df3ea1309feee8e8b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"05a23279394b9086e6c604ff89c31acf76effedf90d99e657e42c7bfb53d6d3a\""
Mar 17 18:36:43.416563 env[1207]: time="2025-03-17T18:36:43.416513448Z" level=info msg="StartContainer for \"05a23279394b9086e6c604ff89c31acf76effedf90d99e657e42c7bfb53d6d3a\""
Mar 17 18:36:43.419029 env[1207]: time="2025-03-17T18:36:43.418986824Z" level=info msg="CreateContainer within sandbox \"f8f9107a7fd360f6720d67a81b961664f26e2cb36a0ce9b4c60955bc7c33d763\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"460f1d8ac4e5ee542ff1baddef60e24d3eb0699418a3249f1324d061dca5fac6\""
Mar 17 18:36:43.419396 env[1207]: time="2025-03-17T18:36:43.419362256Z" level=info msg="StartContainer for \"460f1d8ac4e5ee542ff1baddef60e24d3eb0699418a3249f1324d061dca5fac6\""
Mar 17 18:36:43.439768 systemd[1]: Started cri-containerd-05a23279394b9086e6c604ff89c31acf76effedf90d99e657e42c7bfb53d6d3a.scope.
Mar 17 18:36:43.440466 systemd[1]: Started cri-containerd-460f1d8ac4e5ee542ff1baddef60e24d3eb0699418a3249f1324d061dca5fac6.scope.
Mar 17 18:36:43.526696 env[1207]: time="2025-03-17T18:36:43.526633870Z" level=info msg="StartContainer for \"05a23279394b9086e6c604ff89c31acf76effedf90d99e657e42c7bfb53d6d3a\" returns successfully"
Mar 17 18:36:43.577226 env[1207]: time="2025-03-17T18:36:43.577163623Z" level=info msg="StartContainer for \"460f1d8ac4e5ee542ff1baddef60e24d3eb0699418a3249f1324d061dca5fac6\" returns successfully"
Mar 17 18:36:43.803977 kubelet[1897]: E0317 18:36:43.803934 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:36:43.805303 kubelet[1897]: E0317 18:36:43.805275 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:36:43.931046 kubelet[1897]: I0317 18:36:43.928563 1897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-cq2mm" podStartSLOduration=28.92854163 podStartE2EDuration="28.92854163s" podCreationTimestamp="2025-03-17 18:36:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:36:43.927984442 +0000 UTC m=+33.729485747" watchObservedRunningTime="2025-03-17 18:36:43.92854163 +0000 UTC m=+33.730042955"
Mar 17 18:36:44.807281 kubelet[1897]: E0317 18:36:44.807236 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:36:44.807670 kubelet[1897]: E0317 18:36:44.807417 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:36:45.440758 systemd[1]: Started sshd@6-10.0.0.22:22-10.0.0.1:33698.service.
Mar 17 18:36:45.472350 sshd[3312]: Accepted publickey for core from 10.0.0.1 port 33698 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo
Mar 17 18:36:45.473422 sshd[3312]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:36:45.477045 systemd-logind[1189]: New session 7 of user core.
Mar 17 18:36:45.477912 systemd[1]: Started session-7.scope.
Mar 17 18:36:45.611202 sshd[3312]: pam_unix(sshd:session): session closed for user core
Mar 17 18:36:45.613950 systemd[1]: sshd@6-10.0.0.22:22-10.0.0.1:33698.service: Deactivated successfully.
Mar 17 18:36:45.614623 systemd[1]: session-7.scope: Deactivated successfully.
Mar 17 18:36:45.615215 systemd-logind[1189]: Session 7 logged out. Waiting for processes to exit.
Mar 17 18:36:45.615969 systemd-logind[1189]: Removed session 7.
Mar 17 18:36:45.809132 kubelet[1897]: E0317 18:36:45.809101 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:36:45.809132 kubelet[1897]: E0317 18:36:45.809131 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:36:50.615936 systemd[1]: Started sshd@7-10.0.0.22:22-10.0.0.1:55446.service.
Mar 17 18:36:50.644721 sshd[3329]: Accepted publickey for core from 10.0.0.1 port 55446 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo
Mar 17 18:36:50.645730 sshd[3329]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:36:50.648834 systemd-logind[1189]: New session 8 of user core.
Mar 17 18:36:50.649759 systemd[1]: Started session-8.scope.
Mar 17 18:36:50.768515 sshd[3329]: pam_unix(sshd:session): session closed for user core
Mar 17 18:36:50.771178 systemd[1]: sshd@7-10.0.0.22:22-10.0.0.1:55446.service: Deactivated successfully.
Mar 17 18:36:50.772011 systemd[1]: session-8.scope: Deactivated successfully.
Mar 17 18:36:50.773016 systemd-logind[1189]: Session 8 logged out. Waiting for processes to exit.
Mar 17 18:36:50.773815 systemd-logind[1189]: Removed session 8.
Mar 17 18:36:55.774103 systemd[1]: Started sshd@8-10.0.0.22:22-10.0.0.1:48434.service.
Mar 17 18:36:55.851546 sshd[3343]: Accepted publickey for core from 10.0.0.1 port 48434 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo
Mar 17 18:36:55.852583 sshd[3343]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:36:55.855825 systemd-logind[1189]: New session 9 of user core.
Mar 17 18:36:55.856581 systemd[1]: Started session-9.scope.
Mar 17 18:36:55.958382 sshd[3343]: pam_unix(sshd:session): session closed for user core
Mar 17 18:36:55.960315 systemd[1]: sshd@8-10.0.0.22:22-10.0.0.1:48434.service: Deactivated successfully.
Mar 17 18:36:55.960969 systemd[1]: session-9.scope: Deactivated successfully.
Mar 17 18:36:55.961515 systemd-logind[1189]: Session 9 logged out. Waiting for processes to exit.
Mar 17 18:36:55.962099 systemd-logind[1189]: Removed session 9.
Mar 17 18:37:00.962563 systemd[1]: Started sshd@9-10.0.0.22:22-10.0.0.1:48440.service.
Mar 17 18:37:00.990634 sshd[3358]: Accepted publickey for core from 10.0.0.1 port 48440 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo
Mar 17 18:37:00.991660 sshd[3358]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:37:00.995199 systemd-logind[1189]: New session 10 of user core.
Mar 17 18:37:00.996027 systemd[1]: Started session-10.scope.
Mar 17 18:37:01.110165 sshd[3358]: pam_unix(sshd:session): session closed for user core
Mar 17 18:37:01.112660 systemd[1]: sshd@9-10.0.0.22:22-10.0.0.1:48440.service: Deactivated successfully.
Mar 17 18:37:01.113546 systemd[1]: session-10.scope: Deactivated successfully.
Mar 17 18:37:01.114581 systemd-logind[1189]: Session 10 logged out. Waiting for processes to exit.
Mar 17 18:37:01.115334 systemd-logind[1189]: Removed session 10.
Mar 17 18:37:06.115473 systemd[1]: Started sshd@10-10.0.0.22:22-10.0.0.1:50256.service.
Mar 17 18:37:06.170767 sshd[3373]: Accepted publickey for core from 10.0.0.1 port 50256 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo
Mar 17 18:37:06.172324 sshd[3373]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:37:06.180567 systemd-logind[1189]: New session 11 of user core.
Mar 17 18:37:06.181635 systemd[1]: Started session-11.scope.
Mar 17 18:37:06.562376 sshd[3373]: pam_unix(sshd:session): session closed for user core
Mar 17 18:37:06.565002 systemd[1]: sshd@10-10.0.0.22:22-10.0.0.1:50256.service: Deactivated successfully.
Mar 17 18:37:06.565491 systemd[1]: session-11.scope: Deactivated successfully.
Mar 17 18:37:06.566016 systemd-logind[1189]: Session 11 logged out. Waiting for processes to exit.
Mar 17 18:37:06.566885 systemd[1]: Started sshd@11-10.0.0.22:22-10.0.0.1:50270.service.
Mar 17 18:37:06.567738 systemd-logind[1189]: Removed session 11.
Mar 17 18:37:06.596100 sshd[3387]: Accepted publickey for core from 10.0.0.1 port 50270 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo
Mar 17 18:37:06.597301 sshd[3387]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:37:06.601148 systemd-logind[1189]: New session 12 of user core.
Mar 17 18:37:06.602080 systemd[1]: Started session-12.scope.
Mar 17 18:37:06.972098 sshd[3387]: pam_unix(sshd:session): session closed for user core
Mar 17 18:37:06.975159 systemd[1]: sshd@11-10.0.0.22:22-10.0.0.1:50270.service: Deactivated successfully.
Mar 17 18:37:06.975673 systemd[1]: session-12.scope: Deactivated successfully.
Mar 17 18:37:06.976202 systemd-logind[1189]: Session 12 logged out. Waiting for processes to exit.
Mar 17 18:37:06.977268 systemd[1]: Started sshd@12-10.0.0.22:22-10.0.0.1:50278.service.
Mar 17 18:37:06.978112 systemd-logind[1189]: Removed session 12.
Mar 17 18:37:07.012554 sshd[3399]: Accepted publickey for core from 10.0.0.1 port 50278 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo
Mar 17 18:37:07.014040 sshd[3399]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:37:07.018305 systemd-logind[1189]: New session 13 of user core.
Mar 17 18:37:07.019185 systemd[1]: Started session-13.scope.
Mar 17 18:37:07.136324 sshd[3399]: pam_unix(sshd:session): session closed for user core
Mar 17 18:37:07.138821 systemd[1]: sshd@12-10.0.0.22:22-10.0.0.1:50278.service: Deactivated successfully.
Mar 17 18:37:07.139560 systemd[1]: session-13.scope: Deactivated successfully.
Mar 17 18:37:07.140100 systemd-logind[1189]: Session 13 logged out. Waiting for processes to exit.
Mar 17 18:37:07.141055 systemd-logind[1189]: Removed session 13.
Mar 17 18:37:12.141188 systemd[1]: Started sshd@13-10.0.0.22:22-10.0.0.1:50282.service.
Mar 17 18:37:12.171765 sshd[3414]: Accepted publickey for core from 10.0.0.1 port 50282 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo
Mar 17 18:37:12.172980 sshd[3414]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:37:12.176778 systemd-logind[1189]: New session 14 of user core.
Mar 17 18:37:12.176994 systemd[1]: Started session-14.scope.
Mar 17 18:37:12.276083 sshd[3414]: pam_unix(sshd:session): session closed for user core
Mar 17 18:37:12.278015 systemd[1]: sshd@13-10.0.0.22:22-10.0.0.1:50282.service: Deactivated successfully.
Mar 17 18:37:12.278655 systemd[1]: session-14.scope: Deactivated successfully.
Mar 17 18:37:12.279273 systemd-logind[1189]: Session 14 logged out. Waiting for processes to exit.
Mar 17 18:37:12.279912 systemd-logind[1189]: Removed session 14.
Mar 17 18:37:17.280987 systemd[1]: Started sshd@14-10.0.0.22:22-10.0.0.1:58410.service.
Mar 17 18:37:17.310293 sshd[3430]: Accepted publickey for core from 10.0.0.1 port 58410 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo
Mar 17 18:37:17.311667 sshd[3430]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:37:17.315867 systemd-logind[1189]: New session 15 of user core.
Mar 17 18:37:17.316719 systemd[1]: Started session-15.scope.
Mar 17 18:37:17.472530 sshd[3430]: pam_unix(sshd:session): session closed for user core
Mar 17 18:37:17.475376 systemd[1]: sshd@14-10.0.0.22:22-10.0.0.1:58410.service: Deactivated successfully.
Mar 17 18:37:17.476303 systemd[1]: session-15.scope: Deactivated successfully.
Mar 17 18:37:17.477168 systemd-logind[1189]: Session 15 logged out. Waiting for processes to exit.
Mar 17 18:37:17.478242 systemd-logind[1189]: Removed session 15.
Mar 17 18:37:22.476025 systemd[1]: Started sshd@15-10.0.0.22:22-10.0.0.1:58424.service.
Mar 17 18:37:22.506636 sshd[3443]: Accepted publickey for core from 10.0.0.1 port 58424 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo
Mar 17 18:37:22.507863 sshd[3443]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:37:22.511090 systemd-logind[1189]: New session 16 of user core.
Mar 17 18:37:22.511823 systemd[1]: Started session-16.scope.
Mar 17 18:37:22.611591 sshd[3443]: pam_unix(sshd:session): session closed for user core
Mar 17 18:37:22.613610 systemd[1]: sshd@15-10.0.0.22:22-10.0.0.1:58424.service: Deactivated successfully.
Mar 17 18:37:22.614376 systemd[1]: session-16.scope: Deactivated successfully.
Mar 17 18:37:22.614875 systemd-logind[1189]: Session 16 logged out. Waiting for processes to exit.
Mar 17 18:37:22.615597 systemd-logind[1189]: Removed session 16.
Mar 17 18:37:26.271430 kubelet[1897]: E0317 18:37:26.271361 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:37:27.615981 systemd[1]: Started sshd@16-10.0.0.22:22-10.0.0.1:41434.service.
Mar 17 18:37:27.646989 sshd[3456]: Accepted publickey for core from 10.0.0.1 port 41434 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo
Mar 17 18:37:27.648120 sshd[3456]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:37:27.652135 systemd-logind[1189]: New session 17 of user core.
Mar 17 18:37:27.653446 systemd[1]: Started session-17.scope.
Mar 17 18:37:27.763851 sshd[3456]: pam_unix(sshd:session): session closed for user core
Mar 17 18:37:27.766488 systemd[1]: sshd@16-10.0.0.22:22-10.0.0.1:41434.service: Deactivated successfully.
Mar 17 18:37:27.766977 systemd[1]: session-17.scope: Deactivated successfully.
Mar 17 18:37:27.767576 systemd-logind[1189]: Session 17 logged out. Waiting for processes to exit.
Mar 17 18:37:27.768434 systemd[1]: Started sshd@17-10.0.0.22:22-10.0.0.1:41444.service.
Mar 17 18:37:27.769360 systemd-logind[1189]: Removed session 17.
Mar 17 18:37:27.799158 sshd[3469]: Accepted publickey for core from 10.0.0.1 port 41444 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo
Mar 17 18:37:27.800580 sshd[3469]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:37:27.804342 systemd-logind[1189]: New session 18 of user core.
Mar 17 18:37:27.805170 systemd[1]: Started session-18.scope.
Mar 17 18:37:28.326481 sshd[3469]: pam_unix(sshd:session): session closed for user core
Mar 17 18:37:28.329891 systemd[1]: Started sshd@18-10.0.0.22:22-10.0.0.1:41452.service.
Mar 17 18:37:28.330322 systemd[1]: sshd@17-10.0.0.22:22-10.0.0.1:41444.service: Deactivated successfully.
Mar 17 18:37:28.331016 systemd[1]: session-18.scope: Deactivated successfully.
Mar 17 18:37:28.331592 systemd-logind[1189]: Session 18 logged out. Waiting for processes to exit.
Mar 17 18:37:28.332354 systemd-logind[1189]: Removed session 18.
Mar 17 18:37:28.364044 sshd[3480]: Accepted publickey for core from 10.0.0.1 port 41452 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo
Mar 17 18:37:28.365408 sshd[3480]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:37:28.369239 systemd-logind[1189]: New session 19 of user core.
Mar 17 18:37:28.370219 systemd[1]: Started session-19.scope.
Mar 17 18:37:30.413050 sshd[3480]: pam_unix(sshd:session): session closed for user core
Mar 17 18:37:30.424510 systemd[1]: sshd@18-10.0.0.22:22-10.0.0.1:41452.service: Deactivated successfully.
Mar 17 18:37:30.427770 systemd[1]: session-19.scope: Deactivated successfully.
Mar 17 18:37:30.434494 systemd-logind[1189]: Session 19 logged out. Waiting for processes to exit.
Mar 17 18:37:30.438599 systemd[1]: Started sshd@19-10.0.0.22:22-10.0.0.1:41462.service.
Mar 17 18:37:30.441792 systemd-logind[1189]: Removed session 19.
Mar 17 18:37:30.490285 sshd[3515]: Accepted publickey for core from 10.0.0.1 port 41462 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo
Mar 17 18:37:30.496735 sshd[3515]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:37:30.523262 systemd-logind[1189]: New session 20 of user core.
Mar 17 18:37:30.524280 systemd[1]: Started session-20.scope.
Mar 17 18:37:31.029474 sshd[3515]: pam_unix(sshd:session): session closed for user core
Mar 17 18:37:31.035057 systemd[1]: sshd@19-10.0.0.22:22-10.0.0.1:41462.service: Deactivated successfully.
Mar 17 18:37:31.035948 systemd[1]: session-20.scope: Deactivated successfully.
Mar 17 18:37:31.038283 systemd[1]: Started sshd@20-10.0.0.22:22-10.0.0.1:41470.service.
Mar 17 18:37:31.039335 systemd-logind[1189]: Session 20 logged out. Waiting for processes to exit.
Mar 17 18:37:31.041633 systemd-logind[1189]: Removed session 20.
Mar 17 18:37:31.083133 sshd[3527]: Accepted publickey for core from 10.0.0.1 port 41470 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo
Mar 17 18:37:31.085492 sshd[3527]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:37:31.097552 systemd-logind[1189]: New session 21 of user core.
Mar 17 18:37:31.098309 systemd[1]: Started session-21.scope.
Mar 17 18:37:31.288981 sshd[3527]: pam_unix(sshd:session): session closed for user core
Mar 17 18:37:31.293425 systemd[1]: sshd@20-10.0.0.22:22-10.0.0.1:41470.service: Deactivated successfully.
Mar 17 18:37:31.294371 systemd[1]: session-21.scope: Deactivated successfully.
Mar 17 18:37:31.298934 systemd-logind[1189]: Session 21 logged out. Waiting for processes to exit.
Mar 17 18:37:31.301445 systemd-logind[1189]: Removed session 21.
Mar 17 18:37:36.302879 systemd[1]: Started sshd@21-10.0.0.22:22-10.0.0.1:58838.service.
Mar 17 18:37:36.367747 sshd[3541]: Accepted publickey for core from 10.0.0.1 port 58838 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo
Mar 17 18:37:36.371704 sshd[3541]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:37:36.389721 systemd-logind[1189]: New session 22 of user core.
Mar 17 18:37:36.393171 systemd[1]: Started session-22.scope.
Mar 17 18:37:36.569987 sshd[3541]: pam_unix(sshd:session): session closed for user core
Mar 17 18:37:36.573178 systemd[1]: sshd@21-10.0.0.22:22-10.0.0.1:58838.service: Deactivated successfully.
Mar 17 18:37:36.574151 systemd[1]: session-22.scope: Deactivated successfully.
Mar 17 18:37:36.574861 systemd-logind[1189]: Session 22 logged out. Waiting for processes to exit.
Mar 17 18:37:36.575853 systemd-logind[1189]: Removed session 22.
Mar 17 18:37:38.271423 kubelet[1897]: E0317 18:37:38.271331 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:37:41.575056 systemd[1]: Started sshd@22-10.0.0.22:22-10.0.0.1:58850.service.
Mar 17 18:37:41.606651 sshd[3557]: Accepted publickey for core from 10.0.0.1 port 58850 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo
Mar 17 18:37:41.607955 sshd[3557]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:37:41.612198 systemd-logind[1189]: New session 23 of user core.
Mar 17 18:37:41.613033 systemd[1]: Started session-23.scope.
Mar 17 18:37:41.719987 sshd[3557]: pam_unix(sshd:session): session closed for user core
Mar 17 18:37:41.723226 systemd[1]: sshd@22-10.0.0.22:22-10.0.0.1:58850.service: Deactivated successfully.
Mar 17 18:37:41.724189 systemd[1]: session-23.scope: Deactivated successfully.
Mar 17 18:37:41.724884 systemd-logind[1189]: Session 23 logged out. Waiting for processes to exit.
Mar 17 18:37:41.725705 systemd-logind[1189]: Removed session 23.
Mar 17 18:37:46.723648 systemd[1]: Started sshd@23-10.0.0.22:22-10.0.0.1:47560.service.
Mar 17 18:37:46.752819 sshd[3575]: Accepted publickey for core from 10.0.0.1 port 47560 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo
Mar 17 18:37:46.753840 sshd[3575]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:37:46.756919 systemd-logind[1189]: New session 24 of user core.
Mar 17 18:37:46.757853 systemd[1]: Started session-24.scope.
Mar 17 18:37:46.859004 sshd[3575]: pam_unix(sshd:session): session closed for user core
Mar 17 18:37:46.861801 systemd[1]: sshd@23-10.0.0.22:22-10.0.0.1:47560.service: Deactivated successfully.
Mar 17 18:37:46.862634 systemd[1]: session-24.scope: Deactivated successfully.
Mar 17 18:37:46.863238 systemd-logind[1189]: Session 24 logged out. Waiting for processes to exit.
Mar 17 18:37:46.864089 systemd-logind[1189]: Removed session 24.
Mar 17 18:37:48.271626 kubelet[1897]: E0317 18:37:48.271558 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:37:50.271794 kubelet[1897]: E0317 18:37:50.271505 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:37:51.270923 kubelet[1897]: E0317 18:37:51.270865 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:37:51.864945 systemd[1]: Started sshd@24-10.0.0.22:22-10.0.0.1:47564.service.
Mar 17 18:37:51.894628 sshd[3589]: Accepted publickey for core from 10.0.0.1 port 47564 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo
Mar 17 18:37:51.895868 sshd[3589]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:37:51.899973 systemd-logind[1189]: New session 25 of user core.
Mar 17 18:37:51.901178 systemd[1]: Started session-25.scope.
Mar 17 18:37:52.028957 sshd[3589]: pam_unix(sshd:session): session closed for user core
Mar 17 18:37:52.031880 systemd[1]: sshd@24-10.0.0.22:22-10.0.0.1:47564.service: Deactivated successfully.
Mar 17 18:37:52.032454 systemd[1]: session-25.scope: Deactivated successfully.
Mar 17 18:37:52.033028 systemd-logind[1189]: Session 25 logged out. Waiting for processes to exit.
Mar 17 18:37:52.034191 systemd[1]: Started sshd@25-10.0.0.22:22-10.0.0.1:47580.service.
Mar 17 18:37:52.034853 systemd-logind[1189]: Removed session 25.
Mar 17 18:37:52.063337 sshd[3602]: Accepted publickey for core from 10.0.0.1 port 47580 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo
Mar 17 18:37:52.064496 sshd[3602]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:37:52.068243 systemd-logind[1189]: New session 26 of user core.
Mar 17 18:37:52.069466 systemd[1]: Started session-26.scope.
Mar 17 18:37:53.271055 kubelet[1897]: E0317 18:37:53.270990 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:37:53.400085 kubelet[1897]: I0317 18:37:53.400021 1897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-9d44j" podStartSLOduration=98.400000104 podStartE2EDuration="1m38.400000104s" podCreationTimestamp="2025-03-17 18:36:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:36:44.092028923 +0000 UTC m=+33.893530228" watchObservedRunningTime="2025-03-17 18:37:53.400000104 +0000 UTC m=+103.201501419"
Mar 17 18:37:53.406655 env[1207]: time="2025-03-17T18:37:53.406598537Z" level=info msg="StopContainer for \"30a14930bdc38e3a96121d899a1c31d5bc410dca6b4e719602f631df8de1ab83\" with timeout 30 (s)"
Mar 17 18:37:53.407118 env[1207]: time="2025-03-17T18:37:53.407064914Z" level=info msg="Stop container \"30a14930bdc38e3a96121d899a1c31d5bc410dca6b4e719602f631df8de1ab83\" with signal terminated"
Mar 17 18:37:53.417503 systemd[1]: cri-containerd-30a14930bdc38e3a96121d899a1c31d5bc410dca6b4e719602f631df8de1ab83.scope: Deactivated successfully.
Mar 17 18:37:53.426879 kubelet[1897]: E0317 18:37:53.426824 1897 configmap.go:193] Couldn't get configMap kube-system/cilium-config: configmap "cilium-config" not found
Mar 17 18:37:53.427729 kubelet[1897]: E0317 18:37:53.427688 1897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dc4f98bc-4197-48ff-a30a-9b2e5a659a23-cilium-config-path podName:dc4f98bc-4197-48ff-a30a-9b2e5a659a23 nodeName:}" failed. No retries permitted until 2025-03-17 18:37:53.926905089 +0000 UTC m=+103.728406394 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/dc4f98bc-4197-48ff-a30a-9b2e5a659a23-cilium-config-path") pod "cilium-s9jgp" (UID: "dc4f98bc-4197-48ff-a30a-9b2e5a659a23") : configmap "cilium-config" not found
Mar 17 18:37:53.434284 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-30a14930bdc38e3a96121d899a1c31d5bc410dca6b4e719602f631df8de1ab83-rootfs.mount: Deactivated successfully.
Mar 17 18:37:53.441714 env[1207]: time="2025-03-17T18:37:53.441645054Z" level=info msg="shim disconnected" id=30a14930bdc38e3a96121d899a1c31d5bc410dca6b4e719602f631df8de1ab83
Mar 17 18:37:53.441898 env[1207]: time="2025-03-17T18:37:53.441717543Z" level=warning msg="cleaning up after shim disconnected" id=30a14930bdc38e3a96121d899a1c31d5bc410dca6b4e719602f631df8de1ab83 namespace=k8s.io
Mar 17 18:37:53.441898 env[1207]: time="2025-03-17T18:37:53.441735227Z" level=info msg="cleaning up dead shim"
Mar 17 18:37:53.449003 env[1207]: time="2025-03-17T18:37:53.448946379Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:37:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3631 runtime=io.containerd.runc.v2\n"
Mar 17 18:37:53.452471 env[1207]: time="2025-03-17T18:37:53.452363212Z" level=info msg="StopContainer for \"30a14930bdc38e3a96121d899a1c31d5bc410dca6b4e719602f631df8de1ab83\" returns successfully"
Mar 17 18:37:53.453136 env[1207]: time="2025-03-17T18:37:53.453104688Z" level=info msg="StopPodSandbox for \"ec0d73ffaae6b574cc375735e1aac3e8625213f96732cbc8e452a1cdcffe68ed\""
Mar 17 18:37:53.453214 env[1207]: time="2025-03-17T18:37:53.453187928Z" level=info msg="Container to stop \"30a14930bdc38e3a96121d899a1c31d5bc410dca6b4e719602f631df8de1ab83\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:37:53.455674 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ec0d73ffaae6b574cc375735e1aac3e8625213f96732cbc8e452a1cdcffe68ed-shm.mount: Deactivated successfully.
Mar 17 18:37:53.460367 systemd[1]: cri-containerd-ec0d73ffaae6b574cc375735e1aac3e8625213f96732cbc8e452a1cdcffe68ed.scope: Deactivated successfully.
Mar 17 18:37:53.479095 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ec0d73ffaae6b574cc375735e1aac3e8625213f96732cbc8e452a1cdcffe68ed-rootfs.mount: Deactivated successfully.
Mar 17 18:37:53.484917 env[1207]: time="2025-03-17T18:37:53.484853331Z" level=info msg="shim disconnected" id=ec0d73ffaae6b574cc375735e1aac3e8625213f96732cbc8e452a1cdcffe68ed
Mar 17 18:37:53.485299 env[1207]: time="2025-03-17T18:37:53.485265835Z" level=warning msg="cleaning up after shim disconnected" id=ec0d73ffaae6b574cc375735e1aac3e8625213f96732cbc8e452a1cdcffe68ed namespace=k8s.io
Mar 17 18:37:53.485299 env[1207]: time="2025-03-17T18:37:53.485292426Z" level=info msg="cleaning up dead shim"
Mar 17 18:37:53.492957 env[1207]: time="2025-03-17T18:37:53.492889309Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:37:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3660 runtime=io.containerd.runc.v2\n"
Mar 17 18:37:53.493301 env[1207]: time="2025-03-17T18:37:53.493270373Z" level=info msg="TearDown network for sandbox \"ec0d73ffaae6b574cc375735e1aac3e8625213f96732cbc8e452a1cdcffe68ed\" successfully"
Mar 17 18:37:53.493352 env[1207]: time="2025-03-17T18:37:53.493301813Z" level=info msg="StopPodSandbox for \"ec0d73ffaae6b574cc375735e1aac3e8625213f96732cbc8e452a1cdcffe68ed\" returns successfully"
Mar 17 18:37:53.530234 kubelet[1897]: I0317 18:37:53.529487 1897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/000bc3f1-fa83-4612-a007-5b62bc87360b-cilium-config-path\") pod \"000bc3f1-fa83-4612-a007-5b62bc87360b\" (UID: \"000bc3f1-fa83-4612-a007-5b62bc87360b\") "
Mar 17 18:37:53.530234 kubelet[1897]: I0317 18:37:53.529549 1897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ph4s\" (UniqueName: \"kubernetes.io/projected/000bc3f1-fa83-4612-a007-5b62bc87360b-kube-api-access-4ph4s\") pod \"000bc3f1-fa83-4612-a007-5b62bc87360b\" (UID: \"000bc3f1-fa83-4612-a007-5b62bc87360b\") "
Mar 17 18:37:53.532146 kubelet[1897]: I0317 18:37:53.532065 1897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/000bc3f1-fa83-4612-a007-5b62bc87360b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "000bc3f1-fa83-4612-a007-5b62bc87360b" (UID: "000bc3f1-fa83-4612-a007-5b62bc87360b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 17 18:37:53.535912 kubelet[1897]: I0317 18:37:53.535835 1897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/000bc3f1-fa83-4612-a007-5b62bc87360b-kube-api-access-4ph4s" (OuterVolumeSpecName: "kube-api-access-4ph4s") pod "000bc3f1-fa83-4612-a007-5b62bc87360b" (UID: "000bc3f1-fa83-4612-a007-5b62bc87360b"). InnerVolumeSpecName "kube-api-access-4ph4s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 17 18:37:53.537776 systemd[1]: var-lib-kubelet-pods-000bc3f1\x2dfa83\x2d4612\x2da007\x2d5b62bc87360b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4ph4s.mount: Deactivated successfully.
Mar 17 18:37:53.630525 kubelet[1897]: I0317 18:37:53.630472 1897 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/000bc3f1-fa83-4612-a007-5b62bc87360b-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Mar 17 18:37:53.630525 kubelet[1897]: I0317 18:37:53.630501 1897 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-4ph4s\" (UniqueName: \"kubernetes.io/projected/000bc3f1-fa83-4612-a007-5b62bc87360b-kube-api-access-4ph4s\") on node \"localhost\" DevicePath \"\""
Mar 17 18:37:53.931731 kubelet[1897]: E0317 18:37:53.931609 1897 configmap.go:193] Couldn't get configMap kube-system/cilium-config: configmap "cilium-config" not found
Mar 17 18:37:53.931731 kubelet[1897]: E0317 18:37:53.931683 1897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dc4f98bc-4197-48ff-a30a-9b2e5a659a23-cilium-config-path podName:dc4f98bc-4197-48ff-a30a-9b2e5a659a23 nodeName:}" failed. No retries permitted until 2025-03-17 18:37:54.931668974 +0000 UTC m=+104.733170269 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/dc4f98bc-4197-48ff-a30a-9b2e5a659a23-cilium-config-path") pod "cilium-s9jgp" (UID: "dc4f98bc-4197-48ff-a30a-9b2e5a659a23") : configmap "cilium-config" not found
Mar 17 18:37:53.946801 kubelet[1897]: I0317 18:37:53.946778 1897 scope.go:117] "RemoveContainer" containerID="30a14930bdc38e3a96121d899a1c31d5bc410dca6b4e719602f631df8de1ab83"
Mar 17 18:37:53.948183 env[1207]: time="2025-03-17T18:37:53.948145198Z" level=info msg="RemoveContainer for \"30a14930bdc38e3a96121d899a1c31d5bc410dca6b4e719602f631df8de1ab83\""
Mar 17 18:37:53.950216 systemd[1]: Removed slice kubepods-besteffort-pod000bc3f1_fa83_4612_a007_5b62bc87360b.slice.
Mar 17 18:37:53.952406 env[1207]: time="2025-03-17T18:37:53.952349295Z" level=info msg="RemoveContainer for \"30a14930bdc38e3a96121d899a1c31d5bc410dca6b4e719602f631df8de1ab83\" returns successfully"
Mar 17 18:37:53.953138 kubelet[1897]: I0317 18:37:53.952913 1897 scope.go:117] "RemoveContainer" containerID="30a14930bdc38e3a96121d899a1c31d5bc410dca6b4e719602f631df8de1ab83"
Mar 17 18:37:53.953485 env[1207]: time="2025-03-17T18:37:53.953375529Z" level=error msg="ContainerStatus for \"30a14930bdc38e3a96121d899a1c31d5bc410dca6b4e719602f631df8de1ab83\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"30a14930bdc38e3a96121d899a1c31d5bc410dca6b4e719602f631df8de1ab83\": not found"
Mar 17 18:37:53.953690 kubelet[1897]: E0317 18:37:53.953669 1897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"30a14930bdc38e3a96121d899a1c31d5bc410dca6b4e719602f631df8de1ab83\": not found" containerID="30a14930bdc38e3a96121d899a1c31d5bc410dca6b4e719602f631df8de1ab83"
Mar 17 18:37:53.953771 kubelet[1897]: I0317 18:37:53.953699 1897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"30a14930bdc38e3a96121d899a1c31d5bc410dca6b4e719602f631df8de1ab83"} err="failed to get container status \"30a14930bdc38e3a96121d899a1c31d5bc410dca6b4e719602f631df8de1ab83\": rpc error: code = NotFound desc = an error occurred when try to find container \"30a14930bdc38e3a96121d899a1c31d5bc410dca6b4e719602f631df8de1ab83\": not found"
Mar 17 18:37:54.002927 env[1207]: time="2025-03-17T18:37:54.002847808Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 17 18:37:54.008933 env[1207]: time="2025-03-17T18:37:54.008873698Z" level=info msg="StopContainer for \"b0192bd4ca7e2ffa165eaf290ab8dac3f8715ef8c5ebf8340a9881f4ffba8769\" with timeout 2 (s)"
Mar 17 18:37:54.009374 env[1207]: time="2025-03-17T18:37:54.009318013Z" level=info msg="Stop container \"b0192bd4ca7e2ffa165eaf290ab8dac3f8715ef8c5ebf8340a9881f4ffba8769\" with signal terminated"
Mar 17 18:37:54.015771 systemd-networkd[1021]: lxc_health: Link DOWN
Mar 17 18:37:54.015779 systemd-networkd[1021]: lxc_health: Lost carrier
Mar 17 18:37:54.056663 systemd[1]: cri-containerd-b0192bd4ca7e2ffa165eaf290ab8dac3f8715ef8c5ebf8340a9881f4ffba8769.scope: Deactivated successfully.
Mar 17 18:37:54.056921 systemd[1]: cri-containerd-b0192bd4ca7e2ffa165eaf290ab8dac3f8715ef8c5ebf8340a9881f4ffba8769.scope: Consumed 6.242s CPU time.
Mar 17 18:37:54.077464 env[1207]: time="2025-03-17T18:37:54.077411160Z" level=info msg="shim disconnected" id=b0192bd4ca7e2ffa165eaf290ab8dac3f8715ef8c5ebf8340a9881f4ffba8769 Mar 17 18:37:54.077464 env[1207]: time="2025-03-17T18:37:54.077462730Z" level=warning msg="cleaning up after shim disconnected" id=b0192bd4ca7e2ffa165eaf290ab8dac3f8715ef8c5ebf8340a9881f4ffba8769 namespace=k8s.io Mar 17 18:37:54.077697 env[1207]: time="2025-03-17T18:37:54.077472007Z" level=info msg="cleaning up dead shim" Mar 17 18:37:54.083640 env[1207]: time="2025-03-17T18:37:54.083582059Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:37:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3713 runtime=io.containerd.runc.v2\n" Mar 17 18:37:54.087089 env[1207]: time="2025-03-17T18:37:54.087047920Z" level=info msg="StopContainer for \"b0192bd4ca7e2ffa165eaf290ab8dac3f8715ef8c5ebf8340a9881f4ffba8769\" returns successfully" Mar 17 18:37:54.087617 env[1207]: time="2025-03-17T18:37:54.087567260Z" level=info msg="StopPodSandbox for \"3d589d9f91b699ea2a980a280e60e70f55581811c1d601d13bd13759b1d958b5\"" Mar 17 18:37:54.087804 env[1207]: time="2025-03-17T18:37:54.087631604Z" level=info msg="Container to stop \"2d3b50c5af020383ba63509e96e066e90455b9e97be9e76c567f5204dbf032f1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:37:54.087804 env[1207]: time="2025-03-17T18:37:54.087644599Z" level=info msg="Container to stop \"b0192bd4ca7e2ffa165eaf290ab8dac3f8715ef8c5ebf8340a9881f4ffba8769\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:37:54.087804 env[1207]: time="2025-03-17T18:37:54.087654328Z" level=info msg="Container to stop \"6ba6efd1f4d023be38a54cecb9e4b4a3c6b1e93a167c2634d5ec0be92ff2f63e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:37:54.087804 env[1207]: time="2025-03-17T18:37:54.087663245Z" level=info msg="Container to stop 
\"45c34afc61f4a8ff5e9a0f6d15787062e370117a666f08c86c08e35e19b9cb02\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:37:54.087804 env[1207]: time="2025-03-17T18:37:54.087671982Z" level=info msg="Container to stop \"eeacf9d83d4b7e2ba540ec2cb8a65894f0c8f0bc4e1e5c68f66a174187554711\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:37:54.092985 systemd[1]: cri-containerd-3d589d9f91b699ea2a980a280e60e70f55581811c1d601d13bd13759b1d958b5.scope: Deactivated successfully. Mar 17 18:37:54.141702 env[1207]: time="2025-03-17T18:37:54.141633546Z" level=info msg="shim disconnected" id=3d589d9f91b699ea2a980a280e60e70f55581811c1d601d13bd13759b1d958b5 Mar 17 18:37:54.141702 env[1207]: time="2025-03-17T18:37:54.141692239Z" level=warning msg="cleaning up after shim disconnected" id=3d589d9f91b699ea2a980a280e60e70f55581811c1d601d13bd13759b1d958b5 namespace=k8s.io Mar 17 18:37:54.141702 env[1207]: time="2025-03-17T18:37:54.141704742Z" level=info msg="cleaning up dead shim" Mar 17 18:37:54.147790 env[1207]: time="2025-03-17T18:37:54.147734510Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:37:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3743 runtime=io.containerd.runc.v2\n" Mar 17 18:37:54.148053 env[1207]: time="2025-03-17T18:37:54.148025500Z" level=info msg="TearDown network for sandbox \"3d589d9f91b699ea2a980a280e60e70f55581811c1d601d13bd13759b1d958b5\" successfully" Mar 17 18:37:54.148053 env[1207]: time="2025-03-17T18:37:54.148050208Z" level=info msg="StopPodSandbox for \"3d589d9f91b699ea2a980a280e60e70f55581811c1d601d13bd13759b1d958b5\" returns successfully" Mar 17 18:37:54.233817 kubelet[1897]: I0317 18:37:54.233686 1897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dc4f98bc-4197-48ff-a30a-9b2e5a659a23-host-proc-sys-net\") pod \"dc4f98bc-4197-48ff-a30a-9b2e5a659a23\" (UID: 
\"dc4f98bc-4197-48ff-a30a-9b2e5a659a23\") " Mar 17 18:37:54.233817 kubelet[1897]: I0317 18:37:54.233723 1897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dc4f98bc-4197-48ff-a30a-9b2e5a659a23-hostproc\") pod \"dc4f98bc-4197-48ff-a30a-9b2e5a659a23\" (UID: \"dc4f98bc-4197-48ff-a30a-9b2e5a659a23\") " Mar 17 18:37:54.233817 kubelet[1897]: I0317 18:37:54.233767 1897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dc4f98bc-4197-48ff-a30a-9b2e5a659a23-xtables-lock\") pod \"dc4f98bc-4197-48ff-a30a-9b2e5a659a23\" (UID: \"dc4f98bc-4197-48ff-a30a-9b2e5a659a23\") " Mar 17 18:37:54.233817 kubelet[1897]: I0317 18:37:54.233782 1897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dc4f98bc-4197-48ff-a30a-9b2e5a659a23-bpf-maps\") pod \"dc4f98bc-4197-48ff-a30a-9b2e5a659a23\" (UID: \"dc4f98bc-4197-48ff-a30a-9b2e5a659a23\") " Mar 17 18:37:54.233817 kubelet[1897]: I0317 18:37:54.233806 1897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvwvc\" (UniqueName: \"kubernetes.io/projected/dc4f98bc-4197-48ff-a30a-9b2e5a659a23-kube-api-access-cvwvc\") pod \"dc4f98bc-4197-48ff-a30a-9b2e5a659a23\" (UID: \"dc4f98bc-4197-48ff-a30a-9b2e5a659a23\") " Mar 17 18:37:54.233817 kubelet[1897]: I0317 18:37:54.233817 1897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dc4f98bc-4197-48ff-a30a-9b2e5a659a23-cni-path\") pod \"dc4f98bc-4197-48ff-a30a-9b2e5a659a23\" (UID: \"dc4f98bc-4197-48ff-a30a-9b2e5a659a23\") " Mar 17 18:37:54.234087 kubelet[1897]: I0317 18:37:54.233828 1897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/dc4f98bc-4197-48ff-a30a-9b2e5a659a23-host-proc-sys-kernel\") pod \"dc4f98bc-4197-48ff-a30a-9b2e5a659a23\" (UID: \"dc4f98bc-4197-48ff-a30a-9b2e5a659a23\") " Mar 17 18:37:54.234087 kubelet[1897]: I0317 18:37:54.233843 1897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dc4f98bc-4197-48ff-a30a-9b2e5a659a23-clustermesh-secrets\") pod \"dc4f98bc-4197-48ff-a30a-9b2e5a659a23\" (UID: \"dc4f98bc-4197-48ff-a30a-9b2e5a659a23\") " Mar 17 18:37:54.234087 kubelet[1897]: I0317 18:37:54.233856 1897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dc4f98bc-4197-48ff-a30a-9b2e5a659a23-lib-modules\") pod \"dc4f98bc-4197-48ff-a30a-9b2e5a659a23\" (UID: \"dc4f98bc-4197-48ff-a30a-9b2e5a659a23\") " Mar 17 18:37:54.234087 kubelet[1897]: I0317 18:37:54.233868 1897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dc4f98bc-4197-48ff-a30a-9b2e5a659a23-cilium-run\") pod \"dc4f98bc-4197-48ff-a30a-9b2e5a659a23\" (UID: \"dc4f98bc-4197-48ff-a30a-9b2e5a659a23\") " Mar 17 18:37:54.234087 kubelet[1897]: I0317 18:37:54.233849 1897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc4f98bc-4197-48ff-a30a-9b2e5a659a23-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "dc4f98bc-4197-48ff-a30a-9b2e5a659a23" (UID: "dc4f98bc-4197-48ff-a30a-9b2e5a659a23"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:37:54.234202 kubelet[1897]: I0317 18:37:54.233907 1897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc4f98bc-4197-48ff-a30a-9b2e5a659a23-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "dc4f98bc-4197-48ff-a30a-9b2e5a659a23" (UID: "dc4f98bc-4197-48ff-a30a-9b2e5a659a23"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:37:54.234202 kubelet[1897]: I0317 18:37:54.233880 1897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dc4f98bc-4197-48ff-a30a-9b2e5a659a23-etc-cni-netd\") pod \"dc4f98bc-4197-48ff-a30a-9b2e5a659a23\" (UID: \"dc4f98bc-4197-48ff-a30a-9b2e5a659a23\") " Mar 17 18:37:54.234202 kubelet[1897]: I0317 18:37:54.233930 1897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc4f98bc-4197-48ff-a30a-9b2e5a659a23-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "dc4f98bc-4197-48ff-a30a-9b2e5a659a23" (UID: "dc4f98bc-4197-48ff-a30a-9b2e5a659a23"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:37:54.234300 kubelet[1897]: I0317 18:37:54.234192 1897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dc4f98bc-4197-48ff-a30a-9b2e5a659a23-hubble-tls\") pod \"dc4f98bc-4197-48ff-a30a-9b2e5a659a23\" (UID: \"dc4f98bc-4197-48ff-a30a-9b2e5a659a23\") " Mar 17 18:37:54.234300 kubelet[1897]: I0317 18:37:54.234249 1897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dc4f98bc-4197-48ff-a30a-9b2e5a659a23-cilium-config-path\") pod \"dc4f98bc-4197-48ff-a30a-9b2e5a659a23\" (UID: \"dc4f98bc-4197-48ff-a30a-9b2e5a659a23\") " Mar 17 18:37:54.234300 kubelet[1897]: I0317 18:37:54.234270 1897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dc4f98bc-4197-48ff-a30a-9b2e5a659a23-cilium-cgroup\") pod \"dc4f98bc-4197-48ff-a30a-9b2e5a659a23\" (UID: \"dc4f98bc-4197-48ff-a30a-9b2e5a659a23\") " Mar 17 18:37:54.234442 kubelet[1897]: I0317 18:37:54.234337 1897 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dc4f98bc-4197-48ff-a30a-9b2e5a659a23-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Mar 17 18:37:54.234442 kubelet[1897]: I0317 18:37:54.234348 1897 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dc4f98bc-4197-48ff-a30a-9b2e5a659a23-bpf-maps\") on node \"localhost\" DevicePath \"\"" Mar 17 18:37:54.234442 kubelet[1897]: I0317 18:37:54.234441 1897 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dc4f98bc-4197-48ff-a30a-9b2e5a659a23-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Mar 17 18:37:54.234575 kubelet[1897]: I0317 18:37:54.234464 1897 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc4f98bc-4197-48ff-a30a-9b2e5a659a23-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "dc4f98bc-4197-48ff-a30a-9b2e5a659a23" (UID: "dc4f98bc-4197-48ff-a30a-9b2e5a659a23"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:37:54.234870 kubelet[1897]: I0317 18:37:54.234843 1897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc4f98bc-4197-48ff-a30a-9b2e5a659a23-cni-path" (OuterVolumeSpecName: "cni-path") pod "dc4f98bc-4197-48ff-a30a-9b2e5a659a23" (UID: "dc4f98bc-4197-48ff-a30a-9b2e5a659a23"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:37:54.234955 kubelet[1897]: I0317 18:37:54.234904 1897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc4f98bc-4197-48ff-a30a-9b2e5a659a23-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "dc4f98bc-4197-48ff-a30a-9b2e5a659a23" (UID: "dc4f98bc-4197-48ff-a30a-9b2e5a659a23"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:37:54.235349 kubelet[1897]: I0317 18:37:54.235211 1897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc4f98bc-4197-48ff-a30a-9b2e5a659a23-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "dc4f98bc-4197-48ff-a30a-9b2e5a659a23" (UID: "dc4f98bc-4197-48ff-a30a-9b2e5a659a23"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:37:54.235349 kubelet[1897]: I0317 18:37:54.235255 1897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc4f98bc-4197-48ff-a30a-9b2e5a659a23-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "dc4f98bc-4197-48ff-a30a-9b2e5a659a23" (UID: "dc4f98bc-4197-48ff-a30a-9b2e5a659a23"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:37:54.235349 kubelet[1897]: I0317 18:37:54.235287 1897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc4f98bc-4197-48ff-a30a-9b2e5a659a23-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "dc4f98bc-4197-48ff-a30a-9b2e5a659a23" (UID: "dc4f98bc-4197-48ff-a30a-9b2e5a659a23"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:37:54.235349 kubelet[1897]: I0317 18:37:54.235332 1897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc4f98bc-4197-48ff-a30a-9b2e5a659a23-hostproc" (OuterVolumeSpecName: "hostproc") pod "dc4f98bc-4197-48ff-a30a-9b2e5a659a23" (UID: "dc4f98bc-4197-48ff-a30a-9b2e5a659a23"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:37:54.237118 kubelet[1897]: I0317 18:37:54.237094 1897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc4f98bc-4197-48ff-a30a-9b2e5a659a23-kube-api-access-cvwvc" (OuterVolumeSpecName: "kube-api-access-cvwvc") pod "dc4f98bc-4197-48ff-a30a-9b2e5a659a23" (UID: "dc4f98bc-4197-48ff-a30a-9b2e5a659a23"). InnerVolumeSpecName "kube-api-access-cvwvc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:37:54.237376 kubelet[1897]: I0317 18:37:54.237339 1897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc4f98bc-4197-48ff-a30a-9b2e5a659a23-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "dc4f98bc-4197-48ff-a30a-9b2e5a659a23" (UID: "dc4f98bc-4197-48ff-a30a-9b2e5a659a23"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:37:54.237582 kubelet[1897]: I0317 18:37:54.237554 1897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc4f98bc-4197-48ff-a30a-9b2e5a659a23-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "dc4f98bc-4197-48ff-a30a-9b2e5a659a23" (UID: "dc4f98bc-4197-48ff-a30a-9b2e5a659a23"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 18:37:54.237887 kubelet[1897]: I0317 18:37:54.237862 1897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc4f98bc-4197-48ff-a30a-9b2e5a659a23-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "dc4f98bc-4197-48ff-a30a-9b2e5a659a23" (UID: "dc4f98bc-4197-48ff-a30a-9b2e5a659a23"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 18:37:54.272445 kubelet[1897]: I0317 18:37:54.272416 1897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="000bc3f1-fa83-4612-a007-5b62bc87360b" path="/var/lib/kubelet/pods/000bc3f1-fa83-4612-a007-5b62bc87360b/volumes" Mar 17 18:37:54.276133 systemd[1]: Removed slice kubepods-burstable-poddc4f98bc_4197_48ff_a30a_9b2e5a659a23.slice. Mar 17 18:37:54.276221 systemd[1]: kubepods-burstable-poddc4f98bc_4197_48ff_a30a_9b2e5a659a23.slice: Consumed 6.341s CPU time. 
Mar 17 18:37:54.334935 kubelet[1897]: I0317 18:37:54.334853 1897 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dc4f98bc-4197-48ff-a30a-9b2e5a659a23-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Mar 17 18:37:54.334935 kubelet[1897]: I0317 18:37:54.334896 1897 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dc4f98bc-4197-48ff-a30a-9b2e5a659a23-lib-modules\") on node \"localhost\" DevicePath \"\"" Mar 17 18:37:54.334935 kubelet[1897]: I0317 18:37:54.334909 1897 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dc4f98bc-4197-48ff-a30a-9b2e5a659a23-hubble-tls\") on node \"localhost\" DevicePath \"\"" Mar 17 18:37:54.334935 kubelet[1897]: I0317 18:37:54.334920 1897 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dc4f98bc-4197-48ff-a30a-9b2e5a659a23-cilium-run\") on node \"localhost\" DevicePath \"\"" Mar 17 18:37:54.334935 kubelet[1897]: I0317 18:37:54.334930 1897 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dc4f98bc-4197-48ff-a30a-9b2e5a659a23-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Mar 17 18:37:54.334935 kubelet[1897]: I0317 18:37:54.334940 1897 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dc4f98bc-4197-48ff-a30a-9b2e5a659a23-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 17 18:37:54.334935 kubelet[1897]: I0317 18:37:54.334963 1897 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dc4f98bc-4197-48ff-a30a-9b2e5a659a23-hostproc\") on node \"localhost\" DevicePath \"\"" Mar 17 18:37:54.335376 kubelet[1897]: I0317 18:37:54.334976 1897 reconciler_common.go:288] "Volume detached for volume 
\"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dc4f98bc-4197-48ff-a30a-9b2e5a659a23-xtables-lock\") on node \"localhost\" DevicePath \"\"" Mar 17 18:37:54.335376 kubelet[1897]: I0317 18:37:54.334987 1897 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-cvwvc\" (UniqueName: \"kubernetes.io/projected/dc4f98bc-4197-48ff-a30a-9b2e5a659a23-kube-api-access-cvwvc\") on node \"localhost\" DevicePath \"\"" Mar 17 18:37:54.335376 kubelet[1897]: I0317 18:37:54.334998 1897 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dc4f98bc-4197-48ff-a30a-9b2e5a659a23-cni-path\") on node \"localhost\" DevicePath \"\"" Mar 17 18:37:54.335376 kubelet[1897]: I0317 18:37:54.335008 1897 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dc4f98bc-4197-48ff-a30a-9b2e5a659a23-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Mar 17 18:37:54.435058 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b0192bd4ca7e2ffa165eaf290ab8dac3f8715ef8c5ebf8340a9881f4ffba8769-rootfs.mount: Deactivated successfully. Mar 17 18:37:54.435198 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3d589d9f91b699ea2a980a280e60e70f55581811c1d601d13bd13759b1d958b5-rootfs.mount: Deactivated successfully. Mar 17 18:37:54.435292 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3d589d9f91b699ea2a980a280e60e70f55581811c1d601d13bd13759b1d958b5-shm.mount: Deactivated successfully. Mar 17 18:37:54.435376 systemd[1]: var-lib-kubelet-pods-dc4f98bc\x2d4197\x2d48ff\x2da30a\x2d9b2e5a659a23-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcvwvc.mount: Deactivated successfully. Mar 17 18:37:54.435476 systemd[1]: var-lib-kubelet-pods-dc4f98bc\x2d4197\x2d48ff\x2da30a\x2d9b2e5a659a23-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Mar 17 18:37:54.435574 systemd[1]: var-lib-kubelet-pods-dc4f98bc\x2d4197\x2d48ff\x2da30a\x2d9b2e5a659a23-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 17 18:37:54.960663 kubelet[1897]: I0317 18:37:54.960378 1897 scope.go:117] "RemoveContainer" containerID="b0192bd4ca7e2ffa165eaf290ab8dac3f8715ef8c5ebf8340a9881f4ffba8769" Mar 17 18:37:54.963050 env[1207]: time="2025-03-17T18:37:54.962689247Z" level=info msg="RemoveContainer for \"b0192bd4ca7e2ffa165eaf290ab8dac3f8715ef8c5ebf8340a9881f4ffba8769\"" Mar 17 18:37:54.973927 env[1207]: time="2025-03-17T18:37:54.973855532Z" level=info msg="RemoveContainer for \"b0192bd4ca7e2ffa165eaf290ab8dac3f8715ef8c5ebf8340a9881f4ffba8769\" returns successfully" Mar 17 18:37:54.974609 kubelet[1897]: I0317 18:37:54.974565 1897 scope.go:117] "RemoveContainer" containerID="eeacf9d83d4b7e2ba540ec2cb8a65894f0c8f0bc4e1e5c68f66a174187554711" Mar 17 18:37:54.976185 env[1207]: time="2025-03-17T18:37:54.976060486Z" level=info msg="RemoveContainer for \"eeacf9d83d4b7e2ba540ec2cb8a65894f0c8f0bc4e1e5c68f66a174187554711\"" Mar 17 18:37:54.983470 env[1207]: time="2025-03-17T18:37:54.983380527Z" level=info msg="RemoveContainer for \"eeacf9d83d4b7e2ba540ec2cb8a65894f0c8f0bc4e1e5c68f66a174187554711\" returns successfully" Mar 17 18:37:54.983888 kubelet[1897]: I0317 18:37:54.983798 1897 scope.go:117] "RemoveContainer" containerID="45c34afc61f4a8ff5e9a0f6d15787062e370117a666f08c86c08e35e19b9cb02" Mar 17 18:37:54.986447 env[1207]: time="2025-03-17T18:37:54.986375012Z" level=info msg="RemoveContainer for \"45c34afc61f4a8ff5e9a0f6d15787062e370117a666f08c86c08e35e19b9cb02\"" Mar 17 18:37:54.997788 env[1207]: time="2025-03-17T18:37:54.997665575Z" level=info msg="RemoveContainer for \"45c34afc61f4a8ff5e9a0f6d15787062e370117a666f08c86c08e35e19b9cb02\" returns successfully" Mar 17 18:37:54.998032 kubelet[1897]: I0317 18:37:54.997990 1897 scope.go:117] "RemoveContainer" 
containerID="2d3b50c5af020383ba63509e96e066e90455b9e97be9e76c567f5204dbf032f1" Mar 17 18:37:55.000101 env[1207]: time="2025-03-17T18:37:55.000030759Z" level=info msg="RemoveContainer for \"2d3b50c5af020383ba63509e96e066e90455b9e97be9e76c567f5204dbf032f1\"" Mar 17 18:37:55.004617 env[1207]: time="2025-03-17T18:37:55.004499294Z" level=info msg="RemoveContainer for \"2d3b50c5af020383ba63509e96e066e90455b9e97be9e76c567f5204dbf032f1\" returns successfully" Mar 17 18:37:55.004994 kubelet[1897]: I0317 18:37:55.004934 1897 scope.go:117] "RemoveContainer" containerID="6ba6efd1f4d023be38a54cecb9e4b4a3c6b1e93a167c2634d5ec0be92ff2f63e" Mar 17 18:37:55.006653 env[1207]: time="2025-03-17T18:37:55.006582066Z" level=info msg="RemoveContainer for \"6ba6efd1f4d023be38a54cecb9e4b4a3c6b1e93a167c2634d5ec0be92ff2f63e\"" Mar 17 18:37:55.011492 env[1207]: time="2025-03-17T18:37:55.011368757Z" level=info msg="RemoveContainer for \"6ba6efd1f4d023be38a54cecb9e4b4a3c6b1e93a167c2634d5ec0be92ff2f63e\" returns successfully" Mar 17 18:37:55.011771 kubelet[1897]: I0317 18:37:55.011726 1897 scope.go:117] "RemoveContainer" containerID="b0192bd4ca7e2ffa165eaf290ab8dac3f8715ef8c5ebf8340a9881f4ffba8769" Mar 17 18:37:55.012136 env[1207]: time="2025-03-17T18:37:55.012055028Z" level=error msg="ContainerStatus for \"b0192bd4ca7e2ffa165eaf290ab8dac3f8715ef8c5ebf8340a9881f4ffba8769\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b0192bd4ca7e2ffa165eaf290ab8dac3f8715ef8c5ebf8340a9881f4ffba8769\": not found" Mar 17 18:37:55.012568 kubelet[1897]: E0317 18:37:55.012519 1897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b0192bd4ca7e2ffa165eaf290ab8dac3f8715ef8c5ebf8340a9881f4ffba8769\": not found" containerID="b0192bd4ca7e2ffa165eaf290ab8dac3f8715ef8c5ebf8340a9881f4ffba8769" Mar 17 18:37:55.012672 kubelet[1897]: I0317 18:37:55.012578 1897 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"containerd","ID":"b0192bd4ca7e2ffa165eaf290ab8dac3f8715ef8c5ebf8340a9881f4ffba8769"} err="failed to get container status \"b0192bd4ca7e2ffa165eaf290ab8dac3f8715ef8c5ebf8340a9881f4ffba8769\": rpc error: code = NotFound desc = an error occurred when try to find container \"b0192bd4ca7e2ffa165eaf290ab8dac3f8715ef8c5ebf8340a9881f4ffba8769\": not found" Mar 17 18:37:55.012672 kubelet[1897]: I0317 18:37:55.012612 1897 scope.go:117] "RemoveContainer" containerID="eeacf9d83d4b7e2ba540ec2cb8a65894f0c8f0bc4e1e5c68f66a174187554711" Mar 17 18:37:55.013067 env[1207]: time="2025-03-17T18:37:55.012977215Z" level=error msg="ContainerStatus for \"eeacf9d83d4b7e2ba540ec2cb8a65894f0c8f0bc4e1e5c68f66a174187554711\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eeacf9d83d4b7e2ba540ec2cb8a65894f0c8f0bc4e1e5c68f66a174187554711\": not found" Mar 17 18:37:55.013254 kubelet[1897]: E0317 18:37:55.013225 1897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"eeacf9d83d4b7e2ba540ec2cb8a65894f0c8f0bc4e1e5c68f66a174187554711\": not found" containerID="eeacf9d83d4b7e2ba540ec2cb8a65894f0c8f0bc4e1e5c68f66a174187554711" Mar 17 18:37:55.013312 kubelet[1897]: I0317 18:37:55.013258 1897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"eeacf9d83d4b7e2ba540ec2cb8a65894f0c8f0bc4e1e5c68f66a174187554711"} err="failed to get container status \"eeacf9d83d4b7e2ba540ec2cb8a65894f0c8f0bc4e1e5c68f66a174187554711\": rpc error: code = NotFound desc = an error occurred when try to find container \"eeacf9d83d4b7e2ba540ec2cb8a65894f0c8f0bc4e1e5c68f66a174187554711\": not found" Mar 17 18:37:55.013312 kubelet[1897]: I0317 18:37:55.013281 1897 scope.go:117] "RemoveContainer" containerID="45c34afc61f4a8ff5e9a0f6d15787062e370117a666f08c86c08e35e19b9cb02" Mar 17 18:37:55.013757 
env[1207]: time="2025-03-17T18:37:55.013662064Z" level=error msg="ContainerStatus for \"45c34afc61f4a8ff5e9a0f6d15787062e370117a666f08c86c08e35e19b9cb02\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"45c34afc61f4a8ff5e9a0f6d15787062e370117a666f08c86c08e35e19b9cb02\": not found" Mar 17 18:37:55.013949 kubelet[1897]: E0317 18:37:55.013914 1897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"45c34afc61f4a8ff5e9a0f6d15787062e370117a666f08c86c08e35e19b9cb02\": not found" containerID="45c34afc61f4a8ff5e9a0f6d15787062e370117a666f08c86c08e35e19b9cb02" Mar 17 18:37:55.013949 kubelet[1897]: I0317 18:37:55.013944 1897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"45c34afc61f4a8ff5e9a0f6d15787062e370117a666f08c86c08e35e19b9cb02"} err="failed to get container status \"45c34afc61f4a8ff5e9a0f6d15787062e370117a666f08c86c08e35e19b9cb02\": rpc error: code = NotFound desc = an error occurred when try to find container \"45c34afc61f4a8ff5e9a0f6d15787062e370117a666f08c86c08e35e19b9cb02\": not found" Mar 17 18:37:55.014050 kubelet[1897]: I0317 18:37:55.013958 1897 scope.go:117] "RemoveContainer" containerID="2d3b50c5af020383ba63509e96e066e90455b9e97be9e76c567f5204dbf032f1" Mar 17 18:37:55.014219 env[1207]: time="2025-03-17T18:37:55.014164953Z" level=error msg="ContainerStatus for \"2d3b50c5af020383ba63509e96e066e90455b9e97be9e76c567f5204dbf032f1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2d3b50c5af020383ba63509e96e066e90455b9e97be9e76c567f5204dbf032f1\": not found" Mar 17 18:37:55.014338 kubelet[1897]: E0317 18:37:55.014305 1897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2d3b50c5af020383ba63509e96e066e90455b9e97be9e76c567f5204dbf032f1\": 
not found" containerID="2d3b50c5af020383ba63509e96e066e90455b9e97be9e76c567f5204dbf032f1" Mar 17 18:37:55.014338 kubelet[1897]: I0317 18:37:55.014329 1897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2d3b50c5af020383ba63509e96e066e90455b9e97be9e76c567f5204dbf032f1"} err="failed to get container status \"2d3b50c5af020383ba63509e96e066e90455b9e97be9e76c567f5204dbf032f1\": rpc error: code = NotFound desc = an error occurred when try to find container \"2d3b50c5af020383ba63509e96e066e90455b9e97be9e76c567f5204dbf032f1\": not found" Mar 17 18:37:55.014338 kubelet[1897]: I0317 18:37:55.014341 1897 scope.go:117] "RemoveContainer" containerID="6ba6efd1f4d023be38a54cecb9e4b4a3c6b1e93a167c2634d5ec0be92ff2f63e" Mar 17 18:37:55.014737 env[1207]: time="2025-03-17T18:37:55.014522141Z" level=error msg="ContainerStatus for \"6ba6efd1f4d023be38a54cecb9e4b4a3c6b1e93a167c2634d5ec0be92ff2f63e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6ba6efd1f4d023be38a54cecb9e4b4a3c6b1e93a167c2634d5ec0be92ff2f63e\": not found" Mar 17 18:37:55.014792 kubelet[1897]: E0317 18:37:55.014756 1897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6ba6efd1f4d023be38a54cecb9e4b4a3c6b1e93a167c2634d5ec0be92ff2f63e\": not found" containerID="6ba6efd1f4d023be38a54cecb9e4b4a3c6b1e93a167c2634d5ec0be92ff2f63e" Mar 17 18:37:55.014836 kubelet[1897]: I0317 18:37:55.014801 1897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6ba6efd1f4d023be38a54cecb9e4b4a3c6b1e93a167c2634d5ec0be92ff2f63e"} err="failed to get container status \"6ba6efd1f4d023be38a54cecb9e4b4a3c6b1e93a167c2634d5ec0be92ff2f63e\": rpc error: code = NotFound desc = an error occurred when try to find container \"6ba6efd1f4d023be38a54cecb9e4b4a3c6b1e93a167c2634d5ec0be92ff2f63e\": not found" Mar 17 
18:37:55.394155 systemd[1]: Started sshd@26-10.0.0.22:22-10.0.0.1:47584.service. Mar 17 18:37:55.402260 sshd[3602]: pam_unix(sshd:session): session closed for user core Mar 17 18:37:55.404770 systemd[1]: sshd@25-10.0.0.22:22-10.0.0.1:47580.service: Deactivated successfully. Mar 17 18:37:55.405842 systemd[1]: session-26.scope: Deactivated successfully. Mar 17 18:37:55.407013 systemd-logind[1189]: Session 26 logged out. Waiting for processes to exit. Mar 17 18:37:55.407992 systemd-logind[1189]: Removed session 26. Mar 17 18:37:55.429595 sshd[3759]: Accepted publickey for core from 10.0.0.1 port 47584 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo Mar 17 18:37:55.431154 sshd[3759]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:37:55.437314 systemd-logind[1189]: New session 27 of user core. Mar 17 18:37:55.438328 systemd[1]: Started session-27.scope. Mar 17 18:37:55.512928 kubelet[1897]: E0317 18:37:55.512304 1897 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:37:56.161669 sshd[3759]: pam_unix(sshd:session): session closed for user core Mar 17 18:37:56.164754 systemd[1]: Started sshd@27-10.0.0.22:22-10.0.0.1:56644.service. Mar 17 18:37:56.167023 systemd[1]: sshd@26-10.0.0.22:22-10.0.0.1:47584.service: Deactivated successfully. Mar 17 18:37:56.167987 systemd[1]: session-27.scope: Deactivated successfully. Mar 17 18:37:56.169084 systemd-logind[1189]: Session 27 logged out. Waiting for processes to exit. Mar 17 18:37:56.171238 systemd-logind[1189]: Removed session 27. 
Mar 17 18:37:56.179200 kubelet[1897]: E0317 18:37:56.179090 1897 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dc4f98bc-4197-48ff-a30a-9b2e5a659a23" containerName="mount-cgroup" Mar 17 18:37:56.179200 kubelet[1897]: E0317 18:37:56.179130 1897 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dc4f98bc-4197-48ff-a30a-9b2e5a659a23" containerName="mount-bpf-fs" Mar 17 18:37:56.179200 kubelet[1897]: E0317 18:37:56.179137 1897 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dc4f98bc-4197-48ff-a30a-9b2e5a659a23" containerName="clean-cilium-state" Mar 17 18:37:56.179200 kubelet[1897]: E0317 18:37:56.179146 1897 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dc4f98bc-4197-48ff-a30a-9b2e5a659a23" containerName="apply-sysctl-overwrites" Mar 17 18:37:56.179200 kubelet[1897]: E0317 18:37:56.179154 1897 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="000bc3f1-fa83-4612-a007-5b62bc87360b" containerName="cilium-operator" Mar 17 18:37:56.179200 kubelet[1897]: E0317 18:37:56.179160 1897 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dc4f98bc-4197-48ff-a30a-9b2e5a659a23" containerName="cilium-agent" Mar 17 18:37:56.179200 kubelet[1897]: I0317 18:37:56.179188 1897 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc4f98bc-4197-48ff-a30a-9b2e5a659a23" containerName="cilium-agent" Mar 17 18:37:56.179200 kubelet[1897]: I0317 18:37:56.179198 1897 memory_manager.go:354] "RemoveStaleState removing state" podUID="000bc3f1-fa83-4612-a007-5b62bc87360b" containerName="cilium-operator" Mar 17 18:37:56.187297 systemd[1]: Created slice kubepods-burstable-pod056567e2_5233_401d_8afd_ba86d3ca6801.slice. 
Mar 17 18:37:56.207133 sshd[3771]: Accepted publickey for core from 10.0.0.1 port 56644 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo Mar 17 18:37:56.210597 sshd[3771]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:37:56.221148 systemd[1]: Started session-28.scope. Mar 17 18:37:56.221501 systemd-logind[1189]: New session 28 of user core. Mar 17 18:37:56.246505 kubelet[1897]: I0317 18:37:56.246451 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/056567e2-5233-401d-8afd-ba86d3ca6801-cni-path\") pod \"cilium-2mv4f\" (UID: \"056567e2-5233-401d-8afd-ba86d3ca6801\") " pod="kube-system/cilium-2mv4f" Mar 17 18:37:56.246505 kubelet[1897]: I0317 18:37:56.246498 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/056567e2-5233-401d-8afd-ba86d3ca6801-etc-cni-netd\") pod \"cilium-2mv4f\" (UID: \"056567e2-5233-401d-8afd-ba86d3ca6801\") " pod="kube-system/cilium-2mv4f" Mar 17 18:37:56.246505 kubelet[1897]: I0317 18:37:56.246520 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/056567e2-5233-401d-8afd-ba86d3ca6801-cilium-ipsec-secrets\") pod \"cilium-2mv4f\" (UID: \"056567e2-5233-401d-8afd-ba86d3ca6801\") " pod="kube-system/cilium-2mv4f" Mar 17 18:37:56.246746 kubelet[1897]: I0317 18:37:56.246537 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/056567e2-5233-401d-8afd-ba86d3ca6801-host-proc-sys-kernel\") pod \"cilium-2mv4f\" (UID: \"056567e2-5233-401d-8afd-ba86d3ca6801\") " pod="kube-system/cilium-2mv4f" Mar 17 18:37:56.246746 kubelet[1897]: I0317 18:37:56.246552 1897 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/056567e2-5233-401d-8afd-ba86d3ca6801-bpf-maps\") pod \"cilium-2mv4f\" (UID: \"056567e2-5233-401d-8afd-ba86d3ca6801\") " pod="kube-system/cilium-2mv4f" Mar 17 18:37:56.246746 kubelet[1897]: I0317 18:37:56.246568 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/056567e2-5233-401d-8afd-ba86d3ca6801-hostproc\") pod \"cilium-2mv4f\" (UID: \"056567e2-5233-401d-8afd-ba86d3ca6801\") " pod="kube-system/cilium-2mv4f" Mar 17 18:37:56.246746 kubelet[1897]: I0317 18:37:56.246580 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/056567e2-5233-401d-8afd-ba86d3ca6801-cilium-cgroup\") pod \"cilium-2mv4f\" (UID: \"056567e2-5233-401d-8afd-ba86d3ca6801\") " pod="kube-system/cilium-2mv4f" Mar 17 18:37:56.246746 kubelet[1897]: I0317 18:37:56.246592 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/056567e2-5233-401d-8afd-ba86d3ca6801-cilium-config-path\") pod \"cilium-2mv4f\" (UID: \"056567e2-5233-401d-8afd-ba86d3ca6801\") " pod="kube-system/cilium-2mv4f" Mar 17 18:37:56.246746 kubelet[1897]: I0317 18:37:56.246606 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhbsx\" (UniqueName: \"kubernetes.io/projected/056567e2-5233-401d-8afd-ba86d3ca6801-kube-api-access-hhbsx\") pod \"cilium-2mv4f\" (UID: \"056567e2-5233-401d-8afd-ba86d3ca6801\") " pod="kube-system/cilium-2mv4f" Mar 17 18:37:56.246907 kubelet[1897]: I0317 18:37:56.246619 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/056567e2-5233-401d-8afd-ba86d3ca6801-lib-modules\") pod \"cilium-2mv4f\" (UID: \"056567e2-5233-401d-8afd-ba86d3ca6801\") " pod="kube-system/cilium-2mv4f" Mar 17 18:37:56.246907 kubelet[1897]: I0317 18:37:56.246631 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/056567e2-5233-401d-8afd-ba86d3ca6801-cilium-run\") pod \"cilium-2mv4f\" (UID: \"056567e2-5233-401d-8afd-ba86d3ca6801\") " pod="kube-system/cilium-2mv4f" Mar 17 18:37:56.246907 kubelet[1897]: I0317 18:37:56.246644 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/056567e2-5233-401d-8afd-ba86d3ca6801-clustermesh-secrets\") pod \"cilium-2mv4f\" (UID: \"056567e2-5233-401d-8afd-ba86d3ca6801\") " pod="kube-system/cilium-2mv4f" Mar 17 18:37:56.246907 kubelet[1897]: I0317 18:37:56.246656 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/056567e2-5233-401d-8afd-ba86d3ca6801-host-proc-sys-net\") pod \"cilium-2mv4f\" (UID: \"056567e2-5233-401d-8afd-ba86d3ca6801\") " pod="kube-system/cilium-2mv4f" Mar 17 18:37:56.246907 kubelet[1897]: I0317 18:37:56.246668 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/056567e2-5233-401d-8afd-ba86d3ca6801-hubble-tls\") pod \"cilium-2mv4f\" (UID: \"056567e2-5233-401d-8afd-ba86d3ca6801\") " pod="kube-system/cilium-2mv4f" Mar 17 18:37:56.246907 kubelet[1897]: I0317 18:37:56.246680 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/056567e2-5233-401d-8afd-ba86d3ca6801-xtables-lock\") pod \"cilium-2mv4f\" (UID: 
\"056567e2-5233-401d-8afd-ba86d3ca6801\") " pod="kube-system/cilium-2mv4f" Mar 17 18:37:56.270639 kubelet[1897]: E0317 18:37:56.270601 1897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-9d44j" podUID="b3a73e71-0212-4061-93e8-aafb3c90e375" Mar 17 18:37:56.272825 kubelet[1897]: I0317 18:37:56.272792 1897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc4f98bc-4197-48ff-a30a-9b2e5a659a23" path="/var/lib/kubelet/pods/dc4f98bc-4197-48ff-a30a-9b2e5a659a23/volumes" Mar 17 18:37:56.345528 sshd[3771]: pam_unix(sshd:session): session closed for user core Mar 17 18:37:56.350375 systemd[1]: Started sshd@28-10.0.0.22:22-10.0.0.1:56654.service. Mar 17 18:37:56.361267 systemd[1]: sshd@27-10.0.0.22:22-10.0.0.1:56644.service: Deactivated successfully. Mar 17 18:37:56.363338 systemd[1]: session-28.scope: Deactivated successfully. Mar 17 18:37:56.366658 systemd-logind[1189]: Session 28 logged out. Waiting for processes to exit. Mar 17 18:37:56.375139 systemd-logind[1189]: Removed session 28. Mar 17 18:37:56.378940 kubelet[1897]: E0317 18:37:56.378895 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:37:56.379593 env[1207]: time="2025-03-17T18:37:56.379535688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2mv4f,Uid:056567e2-5233-401d-8afd-ba86d3ca6801,Namespace:kube-system,Attempt:0,}" Mar 17 18:37:56.400589 env[1207]: time="2025-03-17T18:37:56.399953172Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:37:56.400589 env[1207]: time="2025-03-17T18:37:56.399994321Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:37:56.400589 env[1207]: time="2025-03-17T18:37:56.400030191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:37:56.400589 env[1207]: time="2025-03-17T18:37:56.400174268Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/50896bdaabc4cb22934797d8a0ba42832d070b5f3b76c1a842db29d82c64900e pid=3798 runtime=io.containerd.runc.v2 Mar 17 18:37:56.413568 sshd[3786]: Accepted publickey for core from 10.0.0.1 port 56654 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo Mar 17 18:37:56.412978 sshd[3786]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:37:56.420122 systemd[1]: Started session-29.scope. Mar 17 18:37:56.420674 systemd-logind[1189]: New session 29 of user core. Mar 17 18:37:56.427563 systemd[1]: Started cri-containerd-50896bdaabc4cb22934797d8a0ba42832d070b5f3b76c1a842db29d82c64900e.scope. 
Mar 17 18:37:56.450794 env[1207]: time="2025-03-17T18:37:56.450731968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2mv4f,Uid:056567e2-5233-401d-8afd-ba86d3ca6801,Namespace:kube-system,Attempt:0,} returns sandbox id \"50896bdaabc4cb22934797d8a0ba42832d070b5f3b76c1a842db29d82c64900e\"" Mar 17 18:37:56.451588 kubelet[1897]: E0317 18:37:56.451565 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:37:56.453803 env[1207]: time="2025-03-17T18:37:56.453758783Z" level=info msg="CreateContainer within sandbox \"50896bdaabc4cb22934797d8a0ba42832d070b5f3b76c1a842db29d82c64900e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 18:37:56.468234 env[1207]: time="2025-03-17T18:37:56.468149170Z" level=info msg="CreateContainer within sandbox \"50896bdaabc4cb22934797d8a0ba42832d070b5f3b76c1a842db29d82c64900e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6f96bc4dc24776e8fbebd68eccd1d30f57c0761e480ca305675e1f1ef75bd46a\"" Mar 17 18:37:56.468922 env[1207]: time="2025-03-17T18:37:56.468888073Z" level=info msg="StartContainer for \"6f96bc4dc24776e8fbebd68eccd1d30f57c0761e480ca305675e1f1ef75bd46a\"" Mar 17 18:37:56.484917 systemd[1]: Started cri-containerd-6f96bc4dc24776e8fbebd68eccd1d30f57c0761e480ca305675e1f1ef75bd46a.scope. Mar 17 18:37:56.504486 systemd[1]: cri-containerd-6f96bc4dc24776e8fbebd68eccd1d30f57c0761e480ca305675e1f1ef75bd46a.scope: Deactivated successfully. 
Mar 17 18:37:56.526871 env[1207]: time="2025-03-17T18:37:56.526806953Z" level=info msg="shim disconnected" id=6f96bc4dc24776e8fbebd68eccd1d30f57c0761e480ca305675e1f1ef75bd46a Mar 17 18:37:56.526871 env[1207]: time="2025-03-17T18:37:56.526854524Z" level=warning msg="cleaning up after shim disconnected" id=6f96bc4dc24776e8fbebd68eccd1d30f57c0761e480ca305675e1f1ef75bd46a namespace=k8s.io Mar 17 18:37:56.526871 env[1207]: time="2025-03-17T18:37:56.526863562Z" level=info msg="cleaning up dead shim" Mar 17 18:37:56.534947 env[1207]: time="2025-03-17T18:37:56.534881226Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:37:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3865 runtime=io.containerd.runc.v2\ntime=\"2025-03-17T18:37:56Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\ntime=\"2025-03-17T18:37:56Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/6f96bc4dc24776e8fbebd68eccd1d30f57c0761e480ca305675e1f1ef75bd46a/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Mar 17 18:37:56.535267 env[1207]: time="2025-03-17T18:37:56.535163980Z" level=error msg="copy shim log" error="read /proc/self/fd/30: file already closed" Mar 17 18:37:56.535643 env[1207]: time="2025-03-17T18:37:56.535545807Z" level=error msg="Failed to pipe stderr of container \"6f96bc4dc24776e8fbebd68eccd1d30f57c0761e480ca305675e1f1ef75bd46a\"" error="reading from a closed fifo" Mar 17 18:37:56.536445 env[1207]: time="2025-03-17T18:37:56.536345468Z" level=error msg="Failed to pipe stdout of container \"6f96bc4dc24776e8fbebd68eccd1d30f57c0761e480ca305675e1f1ef75bd46a\"" error="reading from a closed fifo" Mar 17 18:37:56.540257 env[1207]: time="2025-03-17T18:37:56.540151625Z" level=error msg="StartContainer for \"6f96bc4dc24776e8fbebd68eccd1d30f57c0761e480ca305675e1f1ef75bd46a\" failed" 
error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Mar 17 18:37:56.540613 kubelet[1897]: E0317 18:37:56.540524 1897 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="6f96bc4dc24776e8fbebd68eccd1d30f57c0761e480ca305675e1f1ef75bd46a" Mar 17 18:37:56.544714 kubelet[1897]: E0317 18:37:56.544369 1897 kuberuntime_manager.go:1272] "Unhandled Error" err=< Mar 17 18:37:56.544714 kubelet[1897]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Mar 17 18:37:56.544714 kubelet[1897]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Mar 17 18:37:56.544714 kubelet[1897]: rm /hostbin/cilium-mount Mar 17 18:37:56.544915 kubelet[1897]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hhbsx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-2mv4f_kube-system(056567e2-5233-401d-8afd-ba86d3ca6801): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Mar 17 18:37:56.544915 kubelet[1897]: > logger="UnhandledError" Mar 17 18:37:56.545799 kubelet[1897]: E0317 18:37:56.545731 1897 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-2mv4f" podUID="056567e2-5233-401d-8afd-ba86d3ca6801" Mar 17 18:37:56.970630 env[1207]: time="2025-03-17T18:37:56.970577309Z" level=info msg="StopPodSandbox for \"50896bdaabc4cb22934797d8a0ba42832d070b5f3b76c1a842db29d82c64900e\"" Mar 17 18:37:56.970914 env[1207]: time="2025-03-17T18:37:56.970889030Z" level=info msg="Container to stop \"6f96bc4dc24776e8fbebd68eccd1d30f57c0761e480ca305675e1f1ef75bd46a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:37:56.977301 systemd[1]: cri-containerd-50896bdaabc4cb22934797d8a0ba42832d070b5f3b76c1a842db29d82c64900e.scope: Deactivated successfully. Mar 17 18:37:57.009963 env[1207]: time="2025-03-17T18:37:57.009905865Z" level=info msg="shim disconnected" id=50896bdaabc4cb22934797d8a0ba42832d070b5f3b76c1a842db29d82c64900e Mar 17 18:37:57.009963 env[1207]: time="2025-03-17T18:37:57.009960380Z" level=warning msg="cleaning up after shim disconnected" id=50896bdaabc4cb22934797d8a0ba42832d070b5f3b76c1a842db29d82c64900e namespace=k8s.io Mar 17 18:37:57.009963 env[1207]: time="2025-03-17T18:37:57.009971592Z" level=info msg="cleaning up dead shim" Mar 17 18:37:57.018375 env[1207]: time="2025-03-17T18:37:57.018321005Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:37:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3895 runtime=io.containerd.runc.v2\n" Mar 17 18:37:57.018717 env[1207]: time="2025-03-17T18:37:57.018677302Z" level=info msg="TearDown network for sandbox \"50896bdaabc4cb22934797d8a0ba42832d070b5f3b76c1a842db29d82c64900e\" successfully" Mar 17 18:37:57.018717 env[1207]: 
time="2025-03-17T18:37:57.018709514Z" level=info msg="StopPodSandbox for \"50896bdaabc4cb22934797d8a0ba42832d070b5f3b76c1a842db29d82c64900e\" returns successfully" Mar 17 18:37:57.149979 kubelet[1897]: I0317 18:37:57.149893 1897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/056567e2-5233-401d-8afd-ba86d3ca6801-lib-modules\") pod \"056567e2-5233-401d-8afd-ba86d3ca6801\" (UID: \"056567e2-5233-401d-8afd-ba86d3ca6801\") " Mar 17 18:37:57.149979 kubelet[1897]: I0317 18:37:57.149938 1897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/056567e2-5233-401d-8afd-ba86d3ca6801-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "056567e2-5233-401d-8afd-ba86d3ca6801" (UID: "056567e2-5233-401d-8afd-ba86d3ca6801"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:37:57.150247 kubelet[1897]: I0317 18:37:57.149998 1897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/056567e2-5233-401d-8afd-ba86d3ca6801-cni-path\") pod \"056567e2-5233-401d-8afd-ba86d3ca6801\" (UID: \"056567e2-5233-401d-8afd-ba86d3ca6801\") " Mar 17 18:37:57.150247 kubelet[1897]: I0317 18:37:57.150015 1897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/056567e2-5233-401d-8afd-ba86d3ca6801-host-proc-sys-kernel\") pod \"056567e2-5233-401d-8afd-ba86d3ca6801\" (UID: \"056567e2-5233-401d-8afd-ba86d3ca6801\") " Mar 17 18:37:57.150247 kubelet[1897]: I0317 18:37:57.150061 1897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhbsx\" (UniqueName: \"kubernetes.io/projected/056567e2-5233-401d-8afd-ba86d3ca6801-kube-api-access-hhbsx\") pod \"056567e2-5233-401d-8afd-ba86d3ca6801\" (UID: 
\"056567e2-5233-401d-8afd-ba86d3ca6801\") " Mar 17 18:37:57.150247 kubelet[1897]: I0317 18:37:57.150079 1897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/056567e2-5233-401d-8afd-ba86d3ca6801-hubble-tls\") pod \"056567e2-5233-401d-8afd-ba86d3ca6801\" (UID: \"056567e2-5233-401d-8afd-ba86d3ca6801\") " Mar 17 18:37:57.150247 kubelet[1897]: I0317 18:37:57.150083 1897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/056567e2-5233-401d-8afd-ba86d3ca6801-cni-path" (OuterVolumeSpecName: "cni-path") pod "056567e2-5233-401d-8afd-ba86d3ca6801" (UID: "056567e2-5233-401d-8afd-ba86d3ca6801"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:37:57.150247 kubelet[1897]: I0317 18:37:57.150087 1897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/056567e2-5233-401d-8afd-ba86d3ca6801-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "056567e2-5233-401d-8afd-ba86d3ca6801" (UID: "056567e2-5233-401d-8afd-ba86d3ca6801"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:37:57.150247 kubelet[1897]: I0317 18:37:57.150093 1897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/056567e2-5233-401d-8afd-ba86d3ca6801-bpf-maps\") pod \"056567e2-5233-401d-8afd-ba86d3ca6801\" (UID: \"056567e2-5233-401d-8afd-ba86d3ca6801\") " Mar 17 18:37:57.150247 kubelet[1897]: I0317 18:37:57.150130 1897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/056567e2-5233-401d-8afd-ba86d3ca6801-host-proc-sys-net\") pod \"056567e2-5233-401d-8afd-ba86d3ca6801\" (UID: \"056567e2-5233-401d-8afd-ba86d3ca6801\") " Mar 17 18:37:57.150247 kubelet[1897]: I0317 18:37:57.150133 1897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/056567e2-5233-401d-8afd-ba86d3ca6801-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "056567e2-5233-401d-8afd-ba86d3ca6801" (UID: "056567e2-5233-401d-8afd-ba86d3ca6801"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:37:57.150247 kubelet[1897]: I0317 18:37:57.150152 1897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/056567e2-5233-401d-8afd-ba86d3ca6801-etc-cni-netd\") pod \"056567e2-5233-401d-8afd-ba86d3ca6801\" (UID: \"056567e2-5233-401d-8afd-ba86d3ca6801\") " Mar 17 18:37:57.150247 kubelet[1897]: I0317 18:37:57.150173 1897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/056567e2-5233-401d-8afd-ba86d3ca6801-cilium-run\") pod \"056567e2-5233-401d-8afd-ba86d3ca6801\" (UID: \"056567e2-5233-401d-8afd-ba86d3ca6801\") " Mar 17 18:37:57.150247 kubelet[1897]: I0317 18:37:57.150198 1897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/056567e2-5233-401d-8afd-ba86d3ca6801-clustermesh-secrets\") pod \"056567e2-5233-401d-8afd-ba86d3ca6801\" (UID: \"056567e2-5233-401d-8afd-ba86d3ca6801\") " Mar 17 18:37:57.150247 kubelet[1897]: I0317 18:37:57.150217 1897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/056567e2-5233-401d-8afd-ba86d3ca6801-cilium-cgroup\") pod \"056567e2-5233-401d-8afd-ba86d3ca6801\" (UID: \"056567e2-5233-401d-8afd-ba86d3ca6801\") " Mar 17 18:37:57.150247 kubelet[1897]: I0317 18:37:57.150237 1897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/056567e2-5233-401d-8afd-ba86d3ca6801-cilium-ipsec-secrets\") pod \"056567e2-5233-401d-8afd-ba86d3ca6801\" (UID: \"056567e2-5233-401d-8afd-ba86d3ca6801\") " Mar 17 18:37:57.150247 kubelet[1897]: I0317 18:37:57.150254 1897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/056567e2-5233-401d-8afd-ba86d3ca6801-hostproc\") pod \"056567e2-5233-401d-8afd-ba86d3ca6801\" (UID: \"056567e2-5233-401d-8afd-ba86d3ca6801\") " Mar 17 18:37:57.150947 kubelet[1897]: I0317 18:37:57.150272 1897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/056567e2-5233-401d-8afd-ba86d3ca6801-xtables-lock\") pod \"056567e2-5233-401d-8afd-ba86d3ca6801\" (UID: \"056567e2-5233-401d-8afd-ba86d3ca6801\") " Mar 17 18:37:57.150947 kubelet[1897]: I0317 18:37:57.150292 1897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/056567e2-5233-401d-8afd-ba86d3ca6801-cilium-config-path\") pod \"056567e2-5233-401d-8afd-ba86d3ca6801\" (UID: \"056567e2-5233-401d-8afd-ba86d3ca6801\") " Mar 17 18:37:57.150947 kubelet[1897]: I0317 18:37:57.150329 1897 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/056567e2-5233-401d-8afd-ba86d3ca6801-cni-path\") on node \"localhost\" DevicePath \"\"" Mar 17 18:37:57.150947 kubelet[1897]: I0317 18:37:57.150340 1897 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/056567e2-5233-401d-8afd-ba86d3ca6801-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Mar 17 18:37:57.150947 kubelet[1897]: I0317 18:37:57.150351 1897 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/056567e2-5233-401d-8afd-ba86d3ca6801-lib-modules\") on node \"localhost\" DevicePath \"\"" Mar 17 18:37:57.150947 kubelet[1897]: I0317 18:37:57.150361 1897 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/056567e2-5233-401d-8afd-ba86d3ca6801-bpf-maps\") on node \"localhost\" DevicePath \"\"" Mar 17 18:37:57.152608 kubelet[1897]: I0317 
18:37:57.152582 1897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/056567e2-5233-401d-8afd-ba86d3ca6801-kube-api-access-hhbsx" (OuterVolumeSpecName: "kube-api-access-hhbsx") pod "056567e2-5233-401d-8afd-ba86d3ca6801" (UID: "056567e2-5233-401d-8afd-ba86d3ca6801"). InnerVolumeSpecName "kube-api-access-hhbsx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:37:57.152873 kubelet[1897]: I0317 18:37:57.152843 1897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/056567e2-5233-401d-8afd-ba86d3ca6801-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "056567e2-5233-401d-8afd-ba86d3ca6801" (UID: "056567e2-5233-401d-8afd-ba86d3ca6801"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 18:37:57.152924 kubelet[1897]: I0317 18:37:57.152887 1897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/056567e2-5233-401d-8afd-ba86d3ca6801-hostproc" (OuterVolumeSpecName: "hostproc") pod "056567e2-5233-401d-8afd-ba86d3ca6801" (UID: "056567e2-5233-401d-8afd-ba86d3ca6801"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:37:57.152924 kubelet[1897]: I0317 18:37:57.152909 1897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/056567e2-5233-401d-8afd-ba86d3ca6801-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "056567e2-5233-401d-8afd-ba86d3ca6801" (UID: "056567e2-5233-401d-8afd-ba86d3ca6801"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:37:57.152972 kubelet[1897]: I0317 18:37:57.152928 1897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/056567e2-5233-401d-8afd-ba86d3ca6801-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "056567e2-5233-401d-8afd-ba86d3ca6801" (UID: "056567e2-5233-401d-8afd-ba86d3ca6801"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:37:57.152972 kubelet[1897]: I0317 18:37:57.152957 1897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/056567e2-5233-401d-8afd-ba86d3ca6801-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "056567e2-5233-401d-8afd-ba86d3ca6801" (UID: "056567e2-5233-401d-8afd-ba86d3ca6801"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:37:57.153037 kubelet[1897]: I0317 18:37:57.152981 1897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/056567e2-5233-401d-8afd-ba86d3ca6801-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "056567e2-5233-401d-8afd-ba86d3ca6801" (UID: "056567e2-5233-401d-8afd-ba86d3ca6801"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:37:57.153037 kubelet[1897]: I0317 18:37:57.152999 1897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/056567e2-5233-401d-8afd-ba86d3ca6801-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "056567e2-5233-401d-8afd-ba86d3ca6801" (UID: "056567e2-5233-401d-8afd-ba86d3ca6801"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:37:57.153112 kubelet[1897]: I0317 18:37:57.153077 1897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/056567e2-5233-401d-8afd-ba86d3ca6801-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "056567e2-5233-401d-8afd-ba86d3ca6801" (UID: "056567e2-5233-401d-8afd-ba86d3ca6801"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:37:57.153884 kubelet[1897]: I0317 18:37:57.153841 1897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/056567e2-5233-401d-8afd-ba86d3ca6801-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "056567e2-5233-401d-8afd-ba86d3ca6801" (UID: "056567e2-5233-401d-8afd-ba86d3ca6801"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 18:37:57.154657 kubelet[1897]: I0317 18:37:57.154622 1897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/056567e2-5233-401d-8afd-ba86d3ca6801-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "056567e2-5233-401d-8afd-ba86d3ca6801" (UID: "056567e2-5233-401d-8afd-ba86d3ca6801"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 17 18:37:57.250997 kubelet[1897]: I0317 18:37:57.250857 1897 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-hhbsx\" (UniqueName: \"kubernetes.io/projected/056567e2-5233-401d-8afd-ba86d3ca6801-kube-api-access-hhbsx\") on node \"localhost\" DevicePath \"\""
Mar 17 18:37:57.250997 kubelet[1897]: I0317 18:37:57.250893 1897 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/056567e2-5233-401d-8afd-ba86d3ca6801-hubble-tls\") on node \"localhost\" DevicePath \"\""
Mar 17 18:37:57.250997 kubelet[1897]: I0317 18:37:57.250900 1897 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/056567e2-5233-401d-8afd-ba86d3ca6801-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Mar 17 18:37:57.250997 kubelet[1897]: I0317 18:37:57.250910 1897 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/056567e2-5233-401d-8afd-ba86d3ca6801-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Mar 17 18:37:57.250997 kubelet[1897]: I0317 18:37:57.250918 1897 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/056567e2-5233-401d-8afd-ba86d3ca6801-cilium-run\") on node \"localhost\" DevicePath \"\""
Mar 17 18:37:57.250997 kubelet[1897]: I0317 18:37:57.250924 1897 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/056567e2-5233-401d-8afd-ba86d3ca6801-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Mar 17 18:37:57.250997 kubelet[1897]: I0317 18:37:57.250931 1897 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/056567e2-5233-401d-8afd-ba86d3ca6801-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Mar 17 18:37:57.250997 kubelet[1897]: I0317 18:37:57.250938 1897 reconciler_common.go:288] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/056567e2-5233-401d-8afd-ba86d3ca6801-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\""
Mar 17 18:37:57.250997 kubelet[1897]: I0317 18:37:57.250945 1897 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/056567e2-5233-401d-8afd-ba86d3ca6801-hostproc\") on node \"localhost\" DevicePath \"\""
Mar 17 18:37:57.250997 kubelet[1897]: I0317 18:37:57.250951 1897 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/056567e2-5233-401d-8afd-ba86d3ca6801-xtables-lock\") on node \"localhost\" DevicePath \"\""
Mar 17 18:37:57.250997 kubelet[1897]: I0317 18:37:57.250957 1897 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/056567e2-5233-401d-8afd-ba86d3ca6801-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Mar 17 18:37:57.352356 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-50896bdaabc4cb22934797d8a0ba42832d070b5f3b76c1a842db29d82c64900e-rootfs.mount: Deactivated successfully.
Mar 17 18:37:57.352455 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-50896bdaabc4cb22934797d8a0ba42832d070b5f3b76c1a842db29d82c64900e-shm.mount: Deactivated successfully.
Mar 17 18:37:57.352522 systemd[1]: var-lib-kubelet-pods-056567e2\x2d5233\x2d401d\x2d8afd\x2dba86d3ca6801-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhhbsx.mount: Deactivated successfully.
Mar 17 18:37:57.352588 systemd[1]: var-lib-kubelet-pods-056567e2\x2d5233\x2d401d\x2d8afd\x2dba86d3ca6801-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Mar 17 18:37:57.352646 systemd[1]: var-lib-kubelet-pods-056567e2\x2d5233\x2d401d\x2d8afd\x2dba86d3ca6801-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Mar 17 18:37:57.352693 systemd[1]: var-lib-kubelet-pods-056567e2\x2d5233\x2d401d\x2d8afd\x2dba86d3ca6801-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Mar 17 18:37:57.972995 kubelet[1897]: I0317 18:37:57.972954 1897 scope.go:117] "RemoveContainer" containerID="6f96bc4dc24776e8fbebd68eccd1d30f57c0761e480ca305675e1f1ef75bd46a"
Mar 17 18:37:57.973976 env[1207]: time="2025-03-17T18:37:57.973936896Z" level=info msg="RemoveContainer for \"6f96bc4dc24776e8fbebd68eccd1d30f57c0761e480ca305675e1f1ef75bd46a\""
Mar 17 18:37:57.977283 systemd[1]: Removed slice kubepods-burstable-pod056567e2_5233_401d_8afd_ba86d3ca6801.slice.
Mar 17 18:37:58.082500 env[1207]: time="2025-03-17T18:37:58.082445752Z" level=info msg="RemoveContainer for \"6f96bc4dc24776e8fbebd68eccd1d30f57c0761e480ca305675e1f1ef75bd46a\" returns successfully"
Mar 17 18:37:58.270839 kubelet[1897]: E0317 18:37:58.270787 1897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-9d44j" podUID="b3a73e71-0212-4061-93e8-aafb3c90e375"
Mar 17 18:37:58.499852 kubelet[1897]: E0317 18:37:58.499740 1897 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="056567e2-5233-401d-8afd-ba86d3ca6801" containerName="mount-cgroup"
Mar 17 18:37:58.499852 kubelet[1897]: I0317 18:37:58.499775 1897 memory_manager.go:354] "RemoveStaleState removing state" podUID="056567e2-5233-401d-8afd-ba86d3ca6801" containerName="mount-cgroup"
Mar 17 18:37:58.504353 systemd[1]: Created slice kubepods-burstable-pod33ac5115_59ba_47fb_9071_54164b0055b1.slice.
Mar 17 18:37:58.657143 kubelet[1897]: I0317 18:37:58.657007 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/33ac5115-59ba-47fb-9071-54164b0055b1-xtables-lock\") pod \"cilium-nkcjz\" (UID: \"33ac5115-59ba-47fb-9071-54164b0055b1\") " pod="kube-system/cilium-nkcjz"
Mar 17 18:37:58.657143 kubelet[1897]: I0317 18:37:58.657040 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc2sm\" (UniqueName: \"kubernetes.io/projected/33ac5115-59ba-47fb-9071-54164b0055b1-kube-api-access-jc2sm\") pod \"cilium-nkcjz\" (UID: \"33ac5115-59ba-47fb-9071-54164b0055b1\") " pod="kube-system/cilium-nkcjz"
Mar 17 18:37:58.657143 kubelet[1897]: I0317 18:37:58.657083 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/33ac5115-59ba-47fb-9071-54164b0055b1-clustermesh-secrets\") pod \"cilium-nkcjz\" (UID: \"33ac5115-59ba-47fb-9071-54164b0055b1\") " pod="kube-system/cilium-nkcjz"
Mar 17 18:37:58.657143 kubelet[1897]: I0317 18:37:58.657098 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/33ac5115-59ba-47fb-9071-54164b0055b1-hubble-tls\") pod \"cilium-nkcjz\" (UID: \"33ac5115-59ba-47fb-9071-54164b0055b1\") " pod="kube-system/cilium-nkcjz"
Mar 17 18:37:58.657143 kubelet[1897]: I0317 18:37:58.657111 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/33ac5115-59ba-47fb-9071-54164b0055b1-bpf-maps\") pod \"cilium-nkcjz\" (UID: \"33ac5115-59ba-47fb-9071-54164b0055b1\") " pod="kube-system/cilium-nkcjz"
Mar 17 18:37:58.657143 kubelet[1897]: I0317 18:37:58.657123 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/33ac5115-59ba-47fb-9071-54164b0055b1-cilium-run\") pod \"cilium-nkcjz\" (UID: \"33ac5115-59ba-47fb-9071-54164b0055b1\") " pod="kube-system/cilium-nkcjz"
Mar 17 18:37:58.657143 kubelet[1897]: I0317 18:37:58.657135 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/33ac5115-59ba-47fb-9071-54164b0055b1-lib-modules\") pod \"cilium-nkcjz\" (UID: \"33ac5115-59ba-47fb-9071-54164b0055b1\") " pod="kube-system/cilium-nkcjz"
Mar 17 18:37:58.657143 kubelet[1897]: I0317 18:37:58.657147 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/33ac5115-59ba-47fb-9071-54164b0055b1-host-proc-sys-net\") pod \"cilium-nkcjz\" (UID: \"33ac5115-59ba-47fb-9071-54164b0055b1\") " pod="kube-system/cilium-nkcjz"
Mar 17 18:37:58.657574 kubelet[1897]: I0317 18:37:58.657161 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/33ac5115-59ba-47fb-9071-54164b0055b1-cni-path\") pod \"cilium-nkcjz\" (UID: \"33ac5115-59ba-47fb-9071-54164b0055b1\") " pod="kube-system/cilium-nkcjz"
Mar 17 18:37:58.657574 kubelet[1897]: I0317 18:37:58.657172 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/33ac5115-59ba-47fb-9071-54164b0055b1-cilium-cgroup\") pod \"cilium-nkcjz\" (UID: \"33ac5115-59ba-47fb-9071-54164b0055b1\") " pod="kube-system/cilium-nkcjz"
Mar 17 18:37:58.657574 kubelet[1897]: I0317 18:37:58.657184 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/33ac5115-59ba-47fb-9071-54164b0055b1-etc-cni-netd\") pod \"cilium-nkcjz\" (UID: \"33ac5115-59ba-47fb-9071-54164b0055b1\") " pod="kube-system/cilium-nkcjz"
Mar 17 18:37:58.657574 kubelet[1897]: I0317 18:37:58.657196 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/33ac5115-59ba-47fb-9071-54164b0055b1-cilium-config-path\") pod \"cilium-nkcjz\" (UID: \"33ac5115-59ba-47fb-9071-54164b0055b1\") " pod="kube-system/cilium-nkcjz"
Mar 17 18:37:58.657574 kubelet[1897]: I0317 18:37:58.657208 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/33ac5115-59ba-47fb-9071-54164b0055b1-cilium-ipsec-secrets\") pod \"cilium-nkcjz\" (UID: \"33ac5115-59ba-47fb-9071-54164b0055b1\") " pod="kube-system/cilium-nkcjz"
Mar 17 18:37:58.657574 kubelet[1897]: I0317 18:37:58.657225 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/33ac5115-59ba-47fb-9071-54164b0055b1-host-proc-sys-kernel\") pod \"cilium-nkcjz\" (UID: \"33ac5115-59ba-47fb-9071-54164b0055b1\") " pod="kube-system/cilium-nkcjz"
Mar 17 18:37:58.657574 kubelet[1897]: I0317 18:37:58.657238 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/33ac5115-59ba-47fb-9071-54164b0055b1-hostproc\") pod \"cilium-nkcjz\" (UID: \"33ac5115-59ba-47fb-9071-54164b0055b1\") " pod="kube-system/cilium-nkcjz"
Mar 17 18:37:58.808586 kubelet[1897]: E0317 18:37:58.808546 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:37:58.811544 env[1207]: time="2025-03-17T18:37:58.809527339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nkcjz,Uid:33ac5115-59ba-47fb-9071-54164b0055b1,Namespace:kube-system,Attempt:0,}"
Mar 17 18:37:58.824670 env[1207]: time="2025-03-17T18:37:58.824590439Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:37:58.824670 env[1207]: time="2025-03-17T18:37:58.824631858Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:37:58.824670 env[1207]: time="2025-03-17T18:37:58.824643281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:37:58.824886 env[1207]: time="2025-03-17T18:37:58.824777098Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/90625ae639d5b83df9bc4a5820c13848848e01b814ab008fd90574ba0554ec9d pid=3924 runtime=io.containerd.runc.v2
Mar 17 18:37:58.835362 systemd[1]: Started cri-containerd-90625ae639d5b83df9bc4a5820c13848848e01b814ab008fd90574ba0554ec9d.scope.
Mar 17 18:37:58.853245 env[1207]: time="2025-03-17T18:37:58.853189571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nkcjz,Uid:33ac5115-59ba-47fb-9071-54164b0055b1,Namespace:kube-system,Attempt:0,} returns sandbox id \"90625ae639d5b83df9bc4a5820c13848848e01b814ab008fd90574ba0554ec9d\""
Mar 17 18:37:58.854147 kubelet[1897]: E0317 18:37:58.854051 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:37:58.856422 env[1207]: time="2025-03-17T18:37:58.855941241Z" level=info msg="CreateContainer within sandbox \"90625ae639d5b83df9bc4a5820c13848848e01b814ab008fd90574ba0554ec9d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 17 18:37:58.868463 env[1207]: time="2025-03-17T18:37:58.868414794Z" level=info msg="CreateContainer within sandbox \"90625ae639d5b83df9bc4a5820c13848848e01b814ab008fd90574ba0554ec9d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2d54540d23269a7db6d2486f06646133d8bf3a50e60a3db342d3a71c320b2874\""
Mar 17 18:37:58.869632 env[1207]: time="2025-03-17T18:37:58.868846227Z" level=info msg="StartContainer for \"2d54540d23269a7db6d2486f06646133d8bf3a50e60a3db342d3a71c320b2874\""
Mar 17 18:37:58.880949 systemd[1]: Started cri-containerd-2d54540d23269a7db6d2486f06646133d8bf3a50e60a3db342d3a71c320b2874.scope.
Mar 17 18:37:58.900424 env[1207]: time="2025-03-17T18:37:58.900356948Z" level=info msg="StartContainer for \"2d54540d23269a7db6d2486f06646133d8bf3a50e60a3db342d3a71c320b2874\" returns successfully"
Mar 17 18:37:58.908698 systemd[1]: cri-containerd-2d54540d23269a7db6d2486f06646133d8bf3a50e60a3db342d3a71c320b2874.scope: Deactivated successfully.
Mar 17 18:37:58.934859 env[1207]: time="2025-03-17T18:37:58.934810248Z" level=info msg="shim disconnected" id=2d54540d23269a7db6d2486f06646133d8bf3a50e60a3db342d3a71c320b2874
Mar 17 18:37:58.934859 env[1207]: time="2025-03-17T18:37:58.934855246Z" level=warning msg="cleaning up after shim disconnected" id=2d54540d23269a7db6d2486f06646133d8bf3a50e60a3db342d3a71c320b2874 namespace=k8s.io
Mar 17 18:37:58.934859 env[1207]: time="2025-03-17T18:37:58.934863331Z" level=info msg="cleaning up dead shim"
Mar 17 18:37:58.941464 env[1207]: time="2025-03-17T18:37:58.941371461Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:37:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4007 runtime=io.containerd.runc.v2\n"
Mar 17 18:37:58.977900 kubelet[1897]: E0317 18:37:58.977864 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:37:58.979508 env[1207]: time="2025-03-17T18:37:58.979468042Z" level=info msg="CreateContainer within sandbox \"90625ae639d5b83df9bc4a5820c13848848e01b814ab008fd90574ba0554ec9d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 17 18:37:58.999068 env[1207]: time="2025-03-17T18:37:58.998990058Z" level=info msg="CreateContainer within sandbox \"90625ae639d5b83df9bc4a5820c13848848e01b814ab008fd90574ba0554ec9d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d937ad6ca2cea1ba88c299ef9d5e090e14d73da19b2c70b5610b60891d539d83\""
Mar 17 18:37:59.000958 env[1207]: time="2025-03-17T18:37:58.999558695Z" level=info msg="StartContainer for \"d937ad6ca2cea1ba88c299ef9d5e090e14d73da19b2c70b5610b60891d539d83\""
Mar 17 18:37:59.012989 systemd[1]: Started cri-containerd-d937ad6ca2cea1ba88c299ef9d5e090e14d73da19b2c70b5610b60891d539d83.scope.
Mar 17 18:37:59.044624 env[1207]: time="2025-03-17T18:37:59.044554891Z" level=info msg="StartContainer for \"d937ad6ca2cea1ba88c299ef9d5e090e14d73da19b2c70b5610b60891d539d83\" returns successfully"
Mar 17 18:37:59.049763 systemd[1]: cri-containerd-d937ad6ca2cea1ba88c299ef9d5e090e14d73da19b2c70b5610b60891d539d83.scope: Deactivated successfully.
Mar 17 18:37:59.256656 env[1207]: time="2025-03-17T18:37:59.256534685Z" level=info msg="shim disconnected" id=d937ad6ca2cea1ba88c299ef9d5e090e14d73da19b2c70b5610b60891d539d83
Mar 17 18:37:59.256656 env[1207]: time="2025-03-17T18:37:59.256583429Z" level=warning msg="cleaning up after shim disconnected" id=d937ad6ca2cea1ba88c299ef9d5e090e14d73da19b2c70b5610b60891d539d83 namespace=k8s.io
Mar 17 18:37:59.256656 env[1207]: time="2025-03-17T18:37:59.256596014Z" level=info msg="cleaning up dead shim"
Mar 17 18:37:59.262359 env[1207]: time="2025-03-17T18:37:59.262302222Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:37:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4069 runtime=io.containerd.runc.v2\n"
Mar 17 18:37:59.631464 kubelet[1897]: W0317 18:37:59.631402 1897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod056567e2_5233_401d_8afd_ba86d3ca6801.slice/cri-containerd-6f96bc4dc24776e8fbebd68eccd1d30f57c0761e480ca305675e1f1ef75bd46a.scope WatchSource:0}: container "6f96bc4dc24776e8fbebd68eccd1d30f57c0761e480ca305675e1f1ef75bd46a" in namespace "k8s.io": not found
Mar 17 18:37:59.981143 kubelet[1897]: E0317 18:37:59.981023 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:37:59.982936 env[1207]: time="2025-03-17T18:37:59.982882317Z" level=info msg="CreateContainer within sandbox \"90625ae639d5b83df9bc4a5820c13848848e01b814ab008fd90574ba0554ec9d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 17 18:37:59.997021 env[1207]: time="2025-03-17T18:37:59.996970581Z" level=info msg="CreateContainer within sandbox \"90625ae639d5b83df9bc4a5820c13848848e01b814ab008fd90574ba0554ec9d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"305d8ab759bcafc4053b3dceaa2e7f57d9e5e8a0c3b29b76667ecc2c81c8fb06\""
Mar 17 18:37:59.997870 env[1207]: time="2025-03-17T18:37:59.997842515Z" level=info msg="StartContainer for \"305d8ab759bcafc4053b3dceaa2e7f57d9e5e8a0c3b29b76667ecc2c81c8fb06\""
Mar 17 18:38:00.014488 systemd[1]: Started cri-containerd-305d8ab759bcafc4053b3dceaa2e7f57d9e5e8a0c3b29b76667ecc2c81c8fb06.scope.
Mar 17 18:38:00.041602 env[1207]: time="2025-03-17T18:38:00.041553166Z" level=info msg="StartContainer for \"305d8ab759bcafc4053b3dceaa2e7f57d9e5e8a0c3b29b76667ecc2c81c8fb06\" returns successfully"
Mar 17 18:38:00.048856 systemd[1]: cri-containerd-305d8ab759bcafc4053b3dceaa2e7f57d9e5e8a0c3b29b76667ecc2c81c8fb06.scope: Deactivated successfully.
Mar 17 18:38:00.071640 env[1207]: time="2025-03-17T18:38:00.071595581Z" level=info msg="shim disconnected" id=305d8ab759bcafc4053b3dceaa2e7f57d9e5e8a0c3b29b76667ecc2c81c8fb06
Mar 17 18:38:00.071640 env[1207]: time="2025-03-17T18:38:00.071637682Z" level=warning msg="cleaning up after shim disconnected" id=305d8ab759bcafc4053b3dceaa2e7f57d9e5e8a0c3b29b76667ecc2c81c8fb06 namespace=k8s.io
Mar 17 18:38:00.071640 env[1207]: time="2025-03-17T18:38:00.071646139Z" level=info msg="cleaning up dead shim"
Mar 17 18:38:00.078666 env[1207]: time="2025-03-17T18:38:00.078627377Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:38:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4123 runtime=io.containerd.runc.v2\n"
Mar 17 18:38:00.271329 kubelet[1897]: E0317 18:38:00.271292 1897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-9d44j" podUID="b3a73e71-0212-4061-93e8-aafb3c90e375"
Mar 17 18:38:00.273322 kubelet[1897]: I0317 18:38:00.273284 1897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="056567e2-5233-401d-8afd-ba86d3ca6801" path="/var/lib/kubelet/pods/056567e2-5233-401d-8afd-ba86d3ca6801/volumes"
Mar 17 18:38:00.513567 kubelet[1897]: E0317 18:38:00.513520 1897 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 17 18:38:00.761965 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-305d8ab759bcafc4053b3dceaa2e7f57d9e5e8a0c3b29b76667ecc2c81c8fb06-rootfs.mount: Deactivated successfully.
Mar 17 18:38:00.984698 kubelet[1897]: E0317 18:38:00.984663 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:38:00.985995 env[1207]: time="2025-03-17T18:38:00.985960207Z" level=info msg="CreateContainer within sandbox \"90625ae639d5b83df9bc4a5820c13848848e01b814ab008fd90574ba0554ec9d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 17 18:38:01.012037 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2833560969.mount: Deactivated successfully.
Mar 17 18:38:01.023666 env[1207]: time="2025-03-17T18:38:01.023615977Z" level=info msg="CreateContainer within sandbox \"90625ae639d5b83df9bc4a5820c13848848e01b814ab008fd90574ba0554ec9d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3e4d8dccab47bd0a5a0936fe3ab78f9c7405c4e282b79afad7a67f3f4147fe52\""
Mar 17 18:38:01.025460 env[1207]: time="2025-03-17T18:38:01.025424283Z" level=info msg="StartContainer for \"3e4d8dccab47bd0a5a0936fe3ab78f9c7405c4e282b79afad7a67f3f4147fe52\""
Mar 17 18:38:01.046761 systemd[1]: Started cri-containerd-3e4d8dccab47bd0a5a0936fe3ab78f9c7405c4e282b79afad7a67f3f4147fe52.scope.
Mar 17 18:38:01.065637 systemd[1]: cri-containerd-3e4d8dccab47bd0a5a0936fe3ab78f9c7405c4e282b79afad7a67f3f4147fe52.scope: Deactivated successfully.
Mar 17 18:38:01.069558 env[1207]: time="2025-03-17T18:38:01.069516981Z" level=info msg="StartContainer for \"3e4d8dccab47bd0a5a0936fe3ab78f9c7405c4e282b79afad7a67f3f4147fe52\" returns successfully"
Mar 17 18:38:01.096863 env[1207]: time="2025-03-17T18:38:01.096810875Z" level=info msg="shim disconnected" id=3e4d8dccab47bd0a5a0936fe3ab78f9c7405c4e282b79afad7a67f3f4147fe52
Mar 17 18:38:01.096863 env[1207]: time="2025-03-17T18:38:01.096860702Z" level=warning msg="cleaning up after shim disconnected" id=3e4d8dccab47bd0a5a0936fe3ab78f9c7405c4e282b79afad7a67f3f4147fe52 namespace=k8s.io
Mar 17 18:38:01.096863 env[1207]: time="2025-03-17T18:38:01.096869167Z" level=info msg="cleaning up dead shim"
Mar 17 18:38:01.102659 env[1207]: time="2025-03-17T18:38:01.102628271Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:38:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4177 runtime=io.containerd.runc.v2\n"
Mar 17 18:38:01.762589 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3e4d8dccab47bd0a5a0936fe3ab78f9c7405c4e282b79afad7a67f3f4147fe52-rootfs.mount: Deactivated successfully.
Mar 17 18:38:01.989763 kubelet[1897]: E0317 18:38:01.989714 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:38:01.991480 env[1207]: time="2025-03-17T18:38:01.991427154Z" level=info msg="CreateContainer within sandbox \"90625ae639d5b83df9bc4a5820c13848848e01b814ab008fd90574ba0554ec9d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 17 18:38:02.006361 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2526552908.mount: Deactivated successfully.
Mar 17 18:38:02.010291 env[1207]: time="2025-03-17T18:38:02.010233139Z" level=info msg="CreateContainer within sandbox \"90625ae639d5b83df9bc4a5820c13848848e01b814ab008fd90574ba0554ec9d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ff4459ca6c8ab521ac2c5e88b4d2062829f2756f70baa47a84b42ee392263bc3\""
Mar 17 18:38:02.011024 env[1207]: time="2025-03-17T18:38:02.010961789Z" level=info msg="StartContainer for \"ff4459ca6c8ab521ac2c5e88b4d2062829f2756f70baa47a84b42ee392263bc3\""
Mar 17 18:38:02.032775 systemd[1]: Started cri-containerd-ff4459ca6c8ab521ac2c5e88b4d2062829f2756f70baa47a84b42ee392263bc3.scope.
Mar 17 18:38:02.068235 env[1207]: time="2025-03-17T18:38:02.068163810Z" level=info msg="StartContainer for \"ff4459ca6c8ab521ac2c5e88b4d2062829f2756f70baa47a84b42ee392263bc3\" returns successfully"
Mar 17 18:38:02.271399 kubelet[1897]: E0317 18:38:02.271313 1897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-9d44j" podUID="b3a73e71-0212-4061-93e8-aafb3c90e375"
Mar 17 18:38:02.390444 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Mar 17 18:38:02.570867 kubelet[1897]: I0317 18:38:02.570793 1897 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-17T18:38:02Z","lastTransitionTime":"2025-03-17T18:38:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 17 18:38:02.740443 kubelet[1897]: W0317 18:38:02.740346 1897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33ac5115_59ba_47fb_9071_54164b0055b1.slice/cri-containerd-2d54540d23269a7db6d2486f06646133d8bf3a50e60a3db342d3a71c320b2874.scope WatchSource:0}: task 2d54540d23269a7db6d2486f06646133d8bf3a50e60a3db342d3a71c320b2874 not found: not found
Mar 17 18:38:02.999467 kubelet[1897]: E0317 18:38:02.999331 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:38:03.018068 kubelet[1897]: I0317 18:38:03.017999 1897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-nkcjz" podStartSLOduration=5.017979587 podStartE2EDuration="5.017979587s" podCreationTimestamp="2025-03-17 18:37:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:38:03.017137447 +0000 UTC m=+112.818638762" watchObservedRunningTime="2025-03-17 18:38:03.017979587 +0000 UTC m=+112.819480892"
Mar 17 18:38:04.270783 kubelet[1897]: E0317 18:38:04.270699 1897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-9d44j" podUID="b3a73e71-0212-4061-93e8-aafb3c90e375"
Mar 17 18:38:04.776589 systemd[1]: run-containerd-runc-k8s.io-ff4459ca6c8ab521ac2c5e88b4d2062829f2756f70baa47a84b42ee392263bc3-runc.8LY6Ho.mount: Deactivated successfully.
Mar 17 18:38:04.810075 kubelet[1897]: E0317 18:38:04.809727 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:38:05.265000 systemd-networkd[1021]: lxc_health: Link UP
Mar 17 18:38:05.320501 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Mar 17 18:38:05.320947 systemd-networkd[1021]: lxc_health: Gained carrier
Mar 17 18:38:05.848168 kubelet[1897]: W0317 18:38:05.848000 1897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33ac5115_59ba_47fb_9071_54164b0055b1.slice/cri-containerd-d937ad6ca2cea1ba88c299ef9d5e090e14d73da19b2c70b5610b60891d539d83.scope WatchSource:0}: task d937ad6ca2cea1ba88c299ef9d5e090e14d73da19b2c70b5610b60891d539d83 not found: not found
Mar 17 18:38:06.271538 kubelet[1897]: E0317 18:38:06.271515 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:38:06.271758 kubelet[1897]: E0317 18:38:06.271720 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:38:06.728557 systemd-networkd[1021]: lxc_health: Gained IPv6LL
Mar 17 18:38:06.810034 kubelet[1897]: E0317 18:38:06.809960 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:38:06.994813 systemd[1]: run-containerd-runc-k8s.io-ff4459ca6c8ab521ac2c5e88b4d2062829f2756f70baa47a84b42ee392263bc3-runc.gOGgFJ.mount: Deactivated successfully.
Mar 17 18:38:07.006556 kubelet[1897]: E0317 18:38:07.006359 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:38:08.007534 kubelet[1897]: E0317 18:38:08.007474 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:38:08.957516 kubelet[1897]: W0317 18:38:08.957461 1897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33ac5115_59ba_47fb_9071_54164b0055b1.slice/cri-containerd-305d8ab759bcafc4053b3dceaa2e7f57d9e5e8a0c3b29b76667ecc2c81c8fb06.scope WatchSource:0}: task 305d8ab759bcafc4053b3dceaa2e7f57d9e5e8a0c3b29b76667ecc2c81c8fb06 not found: not found
Mar 17 18:38:10.269667 env[1207]: time="2025-03-17T18:38:10.269609423Z" level=info msg="StopPodSandbox for \"50896bdaabc4cb22934797d8a0ba42832d070b5f3b76c1a842db29d82c64900e\""
Mar 17 18:38:10.270057 env[1207]: time="2025-03-17T18:38:10.269690381Z" level=info msg="TearDown network for sandbox \"50896bdaabc4cb22934797d8a0ba42832d070b5f3b76c1a842db29d82c64900e\" successfully"
Mar 17 18:38:10.270057 env[1207]: time="2025-03-17T18:38:10.269724416Z" level=info msg="StopPodSandbox for \"50896bdaabc4cb22934797d8a0ba42832d070b5f3b76c1a842db29d82c64900e\" returns successfully"
Mar 17 18:38:10.270057 env[1207]: time="2025-03-17T18:38:10.269992467Z" level=info msg="RemovePodSandbox for \"50896bdaabc4cb22934797d8a0ba42832d070b5f3b76c1a842db29d82c64900e\""
Mar 17 18:38:10.270057 env[1207]: time="2025-03-17T18:38:10.270011594Z" level=info msg="Forcibly stopping sandbox \"50896bdaabc4cb22934797d8a0ba42832d070b5f3b76c1a842db29d82c64900e\""
Mar 17 18:38:10.270159 env[1207]: time="2025-03-17T18:38:10.270061741Z" level=info msg="TearDown network for sandbox \"50896bdaabc4cb22934797d8a0ba42832d070b5f3b76c1a842db29d82c64900e\" successfully"
Mar 17 18:38:10.388298 env[1207]: time="2025-03-17T18:38:10.388227182Z" level=info msg="RemovePodSandbox \"50896bdaabc4cb22934797d8a0ba42832d070b5f3b76c1a842db29d82c64900e\" returns successfully"
Mar 17 18:38:10.388855 env[1207]: time="2025-03-17T18:38:10.388821586Z" level=info msg="StopPodSandbox for \"3d589d9f91b699ea2a980a280e60e70f55581811c1d601d13bd13759b1d958b5\""
Mar 17 18:38:10.389049 env[1207]: time="2025-03-17T18:38:10.388898286Z" level=info msg="TearDown network for sandbox \"3d589d9f91b699ea2a980a280e60e70f55581811c1d601d13bd13759b1d958b5\" successfully"
Mar 17 18:38:10.389049 env[1207]: time="2025-03-17T18:38:10.388930207Z" level=info msg="StopPodSandbox for \"3d589d9f91b699ea2a980a280e60e70f55581811c1d601d13bd13759b1d958b5\" returns successfully"
Mar 17 18:38:10.389194 env[1207]: time="2025-03-17T18:38:10.389169492Z" level=info msg="RemovePodSandbox for \"3d589d9f91b699ea2a980a280e60e70f55581811c1d601d13bd13759b1d958b5\""
Mar 17 18:38:10.389272 env[1207]: time="2025-03-17T18:38:10.389195492Z" level=info msg="Forcibly stopping sandbox \"3d589d9f91b699ea2a980a280e60e70f55581811c1d601d13bd13759b1d958b5\""
Mar 17 18:38:10.389272 env[1207]: time="2025-03-17T18:38:10.389261050Z" level=info msg="TearDown network for sandbox \"3d589d9f91b699ea2a980a280e60e70f55581811c1d601d13bd13759b1d958b5\" successfully"
Mar 17 18:38:10.426969 env[1207]: time="2025-03-17T18:38:10.426866354Z" level=info msg="RemovePodSandbox \"3d589d9f91b699ea2a980a280e60e70f55581811c1d601d13bd13759b1d958b5\" returns successfully"
Mar 17 18:38:10.427499 env[1207]: time="2025-03-17T18:38:10.427452261Z" level=info msg="StopPodSandbox for \"ec0d73ffaae6b574cc375735e1aac3e8625213f96732cbc8e452a1cdcffe68ed\""
Mar 17 18:38:10.427645 env[1207]: time="2025-03-17T18:38:10.427595270Z" level=info msg="TearDown network for sandbox \"ec0d73ffaae6b574cc375735e1aac3e8625213f96732cbc8e452a1cdcffe68ed\" successfully"
Mar 17 18:38:10.427712 env[1207]: time="2025-03-17T18:38:10.427642651Z" level=info msg="StopPodSandbox for \"ec0d73ffaae6b574cc375735e1aac3e8625213f96732cbc8e452a1cdcffe68ed\" returns successfully"
Mar 17 18:38:10.427942 env[1207]: time="2025-03-17T18:38:10.427917525Z" level=info msg="RemovePodSandbox for \"ec0d73ffaae6b574cc375735e1aac3e8625213f96732cbc8e452a1cdcffe68ed\""
Mar 17 18:38:10.428013 env[1207]: time="2025-03-17T18:38:10.427947142Z" level=info msg="Forcibly stopping sandbox \"ec0d73ffaae6b574cc375735e1aac3e8625213f96732cbc8e452a1cdcffe68ed\""
Mar 17 18:38:10.428057 env[1207]: time="2025-03-17T18:38:10.428015435Z" level=info msg="TearDown network for sandbox \"ec0d73ffaae6b574cc375735e1aac3e8625213f96732cbc8e452a1cdcffe68ed\" successfully"
Mar 17 18:38:10.462242 env[1207]: time="2025-03-17T18:38:10.462169730Z" level=info msg="RemovePodSandbox \"ec0d73ffaae6b574cc375735e1aac3e8625213f96732cbc8e452a1cdcffe68ed\" returns successfully"
Mar 17 18:38:12.064450 kubelet[1897]: W0317 18:38:12.064369 1897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33ac5115_59ba_47fb_9071_54164b0055b1.slice/cri-containerd-3e4d8dccab47bd0a5a0936fe3ab78f9c7405c4e282b79afad7a67f3f4147fe52.scope WatchSource:0}: task 3e4d8dccab47bd0a5a0936fe3ab78f9c7405c4e282b79afad7a67f3f4147fe52 not found: not found
Mar 17 18:38:13.332062 sshd[3786]: pam_unix(sshd:session): session closed for user core
Mar 17 18:38:13.334551 systemd[1]: sshd@28-10.0.0.22:22-10.0.0.1:56654.service: Deactivated successfully.
Mar 17 18:38:13.335183 systemd[1]: session-29.scope: Deactivated successfully.
Mar 17 18:38:13.336231 systemd-logind[1189]: Session 29 logged out. Waiting for processes to exit.
Mar 17 18:38:13.336996 systemd-logind[1189]: Removed session 29.