Mar 17 18:41:13.008250 kernel: Linux version 5.15.179-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Mar 17 17:12:34 -00 2025
Mar 17 18:41:13.008272 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=249ccd113f901380672c0d31e18f792e8e0344094c0e39eedc449f039418b31a
Mar 17 18:41:13.008282 kernel: BIOS-provided physical RAM map:
Mar 17 18:41:13.008288 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 17 18:41:13.008293 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Mar 17 18:41:13.008299 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Mar 17 18:41:13.008305 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Mar 17 18:41:13.008311 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Mar 17 18:41:13.008317 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Mar 17 18:41:13.008324 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Mar 17 18:41:13.008329 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Mar 17 18:41:13.008335 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved
Mar 17 18:41:13.008340 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Mar 17 18:41:13.008346 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Mar 17 18:41:13.008353 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Mar 17 18:41:13.008361 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Mar 17 18:41:13.008367 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Mar 17 18:41:13.008372 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 17 18:41:13.008378 kernel: NX (Execute Disable) protection: active
Mar 17 18:41:13.008384 kernel: e820: update [mem 0x9b475018-0x9b47ec57] usable ==> usable
Mar 17 18:41:13.008390 kernel: e820: update [mem 0x9b475018-0x9b47ec57] usable ==> usable
Mar 17 18:41:13.008396 kernel: e820: update [mem 0x9b438018-0x9b474e57] usable ==> usable
Mar 17 18:41:13.008402 kernel: e820: update [mem 0x9b438018-0x9b474e57] usable ==> usable
Mar 17 18:41:13.008408 kernel: extended physical RAM map:
Mar 17 18:41:13.008413 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 17 18:41:13.008421 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Mar 17 18:41:13.008427 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Mar 17 18:41:13.008433 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Mar 17 18:41:13.008439 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Mar 17 18:41:13.008452 kernel: reserve setup_data: [mem 0x000000000080c000-0x000000000080ffff] usable
Mar 17 18:41:13.008461 kernel: reserve setup_data: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Mar 17 18:41:13.008467 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b438017] usable
Mar 17 18:41:13.008473 kernel: reserve setup_data: [mem 0x000000009b438018-0x000000009b474e57] usable
Mar 17 18:41:13.008479 kernel: reserve setup_data: [mem 0x000000009b474e58-0x000000009b475017] usable
Mar 17 18:41:13.008485 kernel: reserve setup_data: [mem 0x000000009b475018-0x000000009b47ec57] usable
Mar 17 18:41:13.008491 kernel: reserve setup_data: [mem 0x000000009b47ec58-0x000000009c8eefff] usable
Mar 17 18:41:13.008498 kernel: reserve setup_data: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved
Mar 17 18:41:13.008505 kernel: reserve setup_data: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Mar 17 18:41:13.008512 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Mar 17 18:41:13.008519 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Mar 17 18:41:13.008530 kernel: reserve setup_data: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Mar 17 18:41:13.008537 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Mar 17 18:41:13.008543 kernel: reserve setup_data: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 17 18:41:13.008551 kernel: efi: EFI v2.70 by EDK II
Mar 17 18:41:13.008557 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b673018 RNG=0x9cb73018
Mar 17 18:41:13.008564 kernel: random: crng init done
Mar 17 18:41:13.008574 kernel: SMBIOS 2.8 present.
Mar 17 18:41:13.008581 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Mar 17 18:41:13.008587 kernel: Hypervisor detected: KVM
Mar 17 18:41:13.008594 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 17 18:41:13.008600 kernel: kvm-clock: cpu 0, msr 5f19a001, primary cpu clock
Mar 17 18:41:13.008606 kernel: kvm-clock: using sched offset of 6055038822 cycles
Mar 17 18:41:13.008617 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 17 18:41:13.008624 kernel: tsc: Detected 2794.748 MHz processor
Mar 17 18:41:13.008631 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 17 18:41:13.008638 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 17 18:41:13.008645 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Mar 17 18:41:13.008651 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 17 18:41:13.008658 kernel: Using GB pages for direct mapping
Mar 17 18:41:13.008664 kernel: Secure boot disabled
Mar 17 18:41:13.008671 kernel: ACPI: Early table checksum verification disabled
Mar 17 18:41:13.008678 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Mar 17 18:41:13.008685 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Mar 17 18:41:13.008692 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:41:13.008698 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:41:13.008705 kernel: ACPI: FACS 0x000000009CBDD000 000040
Mar 17 18:41:13.008711 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:41:13.008718 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:41:13.008725 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:41:13.008732 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:41:13.008740 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Mar 17 18:41:13.008746 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Mar 17 18:41:13.008753 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
Mar 17 18:41:13.008759 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Mar 17 18:41:13.008766 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Mar 17 18:41:13.008772 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Mar 17 18:41:13.008781 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Mar 17 18:41:13.008787 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Mar 17 18:41:13.008794 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Mar 17 18:41:13.008802 kernel: No NUMA configuration found
Mar 17 18:41:13.008809 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Mar 17 18:41:13.008815 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Mar 17 18:41:13.008822 kernel: Zone ranges:
Mar 17 18:41:13.008829 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 17 18:41:13.008835 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Mar 17 18:41:13.008842 kernel: Normal empty
Mar 17 18:41:13.008848 kernel: Movable zone start for each node
Mar 17 18:41:13.008855 kernel: Early memory node ranges
Mar 17 18:41:13.008863 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Mar 17 18:41:13.008869 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Mar 17 18:41:13.008876 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Mar 17 18:41:13.008882 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Mar 17 18:41:13.008889 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Mar 17 18:41:13.008895 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Mar 17 18:41:13.008902 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Mar 17 18:41:13.008911 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 17 18:41:13.008935 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Mar 17 18:41:13.008944 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Mar 17 18:41:13.008953 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 17 18:41:13.008960 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Mar 17 18:41:13.008966 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Mar 17 18:41:13.008973 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Mar 17 18:41:13.008979 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 17 18:41:13.008986 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 17 18:41:13.008992 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 17 18:41:13.008999 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 17 18:41:13.009006 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 17 18:41:13.009013 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 17 18:41:13.009020 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 17 18:41:13.009027 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 17 18:41:13.009036 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 17 18:41:13.009043 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 17 18:41:13.009049 kernel: TSC deadline timer available
Mar 17 18:41:13.009056 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Mar 17 18:41:13.009062 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 17 18:41:13.009069 kernel: kvm-guest: setup PV sched yield
Mar 17 18:41:13.009077 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Mar 17 18:41:13.009093 kernel: Booting paravirtualized kernel on KVM
Mar 17 18:41:13.009108 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 17 18:41:13.009119 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
Mar 17 18:41:13.009128 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288
Mar 17 18:41:13.009137 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152
Mar 17 18:41:13.009145 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 17 18:41:13.009157 kernel: kvm-guest: setup async PF for cpu 0
Mar 17 18:41:13.009166 kernel: kvm-guest: stealtime: cpu 0, msr 9b21c0c0
Mar 17 18:41:13.009175 kernel: kvm-guest: PV spinlocks enabled
Mar 17 18:41:13.009183 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 17 18:41:13.009194 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Mar 17 18:41:13.009201 kernel: Policy zone: DMA32
Mar 17 18:41:13.009209 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=249ccd113f901380672c0d31e18f792e8e0344094c0e39eedc449f039418b31a
Mar 17 18:41:13.009216 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 17 18:41:13.009223 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 17 18:41:13.009231 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 17 18:41:13.009238 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 17 18:41:13.009246 kernel: Memory: 2397432K/2567000K available (12294K kernel code, 2278K rwdata, 13724K rodata, 47472K init, 4108K bss, 169308K reserved, 0K cma-reserved)
Mar 17 18:41:13.009253 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 17 18:41:13.009260 kernel: ftrace: allocating 34580 entries in 136 pages
Mar 17 18:41:13.009266 kernel: ftrace: allocated 136 pages with 2 groups
Mar 17 18:41:13.009278 kernel: rcu: Hierarchical RCU implementation.
Mar 17 18:41:13.009286 kernel: rcu: RCU event tracing is enabled.
Mar 17 18:41:13.009294 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 17 18:41:13.009301 kernel: Rude variant of Tasks RCU enabled.
Mar 17 18:41:13.009308 kernel: Tracing variant of Tasks RCU enabled.
Mar 17 18:41:13.009315 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 17 18:41:13.009322 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 17 18:41:13.009329 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 17 18:41:13.009336 kernel: Console: colour dummy device 80x25
Mar 17 18:41:13.009343 kernel: printk: console [ttyS0] enabled
Mar 17 18:41:13.009350 kernel: ACPI: Core revision 20210730
Mar 17 18:41:13.009357 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 17 18:41:13.009365 kernel: APIC: Switch to symmetric I/O mode setup
Mar 17 18:41:13.009372 kernel: x2apic enabled
Mar 17 18:41:13.009379 kernel: Switched APIC routing to physical x2apic.
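The BIOS-e820 entries above are the firmware's view of physical memory, snapshotted by the kernel at boot. The same map is exported at runtime through sysfs, so it can be cross-checked from userspace. A minimal sketch, assuming a Linux guest built with CONFIG_FIRMWARE_MEMMAP and read access to /sys/firmware/memmap; the output mirrors the "[mem 0x...-0x...] <type>" lines in this log:

# Minimal sketch: reconstruct the firmware memory map that the kernel
# printed as "BIOS-e820:" lines, using /sys/firmware/memmap (assumed
# available; entries are numbered directories with start/end/type files).
from pathlib import Path

def read_memmap(root: Path = Path("/sys/firmware/memmap")):
    entries = []
    for d in sorted(root.iterdir(), key=lambda p: int(p.name)):
        start = int((d / "start").read_text(), 16)
        end = int((d / "end").read_text(), 16)
        kind = (d / "type").read_text().strip()
        entries.append((start, end, kind))
    return sorted(entries)

if __name__ == "__main__":
    for start, end, kind in read_memmap():
        # Mirrors the dmesg format: [mem 0x...-0x...] <type>
        print(f"[mem {start:#018x}-{end:#018x}] {kind}")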
Mar 17 18:41:13.009386 kernel: kvm-guest: setup PV IPIs
Mar 17 18:41:13.009393 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 17 18:41:13.009403 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 17 18:41:13.009411 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Mar 17 18:41:13.009418 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 17 18:41:13.009424 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 17 18:41:13.009433 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 17 18:41:13.009440 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 17 18:41:13.009446 kernel: Spectre V2 : Mitigation: Retpolines
Mar 17 18:41:13.009453 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Mar 17 18:41:13.009460 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Mar 17 18:41:13.009467 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Mar 17 18:41:13.009474 kernel: RETBleed: Mitigation: untrained return thunk
Mar 17 18:41:13.009484 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar 17 18:41:13.009492 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Mar 17 18:41:13.009499 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 17 18:41:13.009508 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 17 18:41:13.009515 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 17 18:41:13.009522 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 17 18:41:13.009529 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Mar 17 18:41:13.009536 kernel: Freeing SMP alternatives memory: 32K
Mar 17 18:41:13.009551 kernel: pid_max: default: 32768 minimum: 301
Mar 17 18:41:13.009558 kernel: LSM: Security Framework initializing
Mar 17 18:41:13.009567 kernel: SELinux: Initializing.
Mar 17 18:41:13.009574 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 18:41:13.009581 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 18:41:13.009588 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Mar 17 18:41:13.009595 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Mar 17 18:41:13.009602 kernel: ... version: 0
Mar 17 18:41:13.009609 kernel: ... bit width: 48
Mar 17 18:41:13.009615 kernel: ... generic registers: 6
Mar 17 18:41:13.009622 kernel: ... value mask: 0000ffffffffffff
Mar 17 18:41:13.009630 kernel: ... max period: 00007fffffffffff
Mar 17 18:41:13.009637 kernel: ... fixed-purpose events: 0
Mar 17 18:41:13.009644 kernel: ... event mask: 000000000000003f
Mar 17 18:41:13.009651 kernel: signal: max sigframe size: 1776
Mar 17 18:41:13.009657 kernel: rcu: Hierarchical SRCU implementation.
Mar 17 18:41:13.009664 kernel: smp: Bringing up secondary CPUs ...
Mar 17 18:41:13.009671 kernel: x86: Booting SMP configuration:
Mar 17 18:41:13.009678 kernel: .... node #0, CPUs: #1
Mar 17 18:41:13.009685 kernel: kvm-clock: cpu 1, msr 5f19a041, secondary cpu clock
Mar 17 18:41:13.009691 kernel: kvm-guest: setup async PF for cpu 1
Mar 17 18:41:13.009700 kernel: kvm-guest: stealtime: cpu 1, msr 9b29c0c0
Mar 17 18:41:13.009706 kernel: #2
Mar 17 18:41:13.009713 kernel: kvm-clock: cpu 2, msr 5f19a081, secondary cpu clock
Mar 17 18:41:13.009720 kernel: kvm-guest: setup async PF for cpu 2
Mar 17 18:41:13.009727 kernel: kvm-guest: stealtime: cpu 2, msr 9b31c0c0
Mar 17 18:41:13.009734 kernel: #3
Mar 17 18:41:13.009740 kernel: kvm-clock: cpu 3, msr 5f19a0c1, secondary cpu clock
Mar 17 18:41:13.009747 kernel: kvm-guest: setup async PF for cpu 3
Mar 17 18:41:13.009754 kernel: kvm-guest: stealtime: cpu 3, msr 9b39c0c0
Mar 17 18:41:13.009762 kernel: smp: Brought up 1 node, 4 CPUs
Mar 17 18:41:13.009769 kernel: smpboot: Max logical packages: 1
Mar 17 18:41:13.009776 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Mar 17 18:41:13.009783 kernel: devtmpfs: initialized
Mar 17 18:41:13.009790 kernel: x86/mm: Memory block size: 128MB
Mar 17 18:41:13.009797 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Mar 17 18:41:13.009804 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Mar 17 18:41:13.009811 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Mar 17 18:41:13.009818 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Mar 17 18:41:13.009826 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Mar 17 18:41:13.009841 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 17 18:41:13.009858 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 17 18:41:13.009874 kernel: pinctrl core: initialized pinctrl subsystem
Mar 17 18:41:13.009898 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 17 18:41:13.009927 kernel: audit: initializing netlink subsys (disabled)
Mar 17 18:41:13.009934 kernel: audit: type=2000 audit(1742236872.451:1): state=initialized audit_enabled=0 res=1
Mar 17 18:41:13.009941 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 17 18:41:13.009948 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 17 18:41:13.009958 kernel: cpuidle: using governor menu
Mar 17 18:41:13.009964 kernel: ACPI: bus type PCI registered
Mar 17 18:41:13.009972 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 17 18:41:13.009979 kernel: dca service started, version 1.12.1
Mar 17 18:41:13.009986 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 17 18:41:13.009993 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820
Mar 17 18:41:13.010000 kernel: PCI: Using configuration type 1 for base access
Mar 17 18:41:13.010007 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
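The calibration line above explains the SMP total: each vCPU reports roughly 5589.49 BogoMIPS, and "Total of 4 processors activated (22357.98 BogoMIPS)" is simply the sum over the four CPUs. A minimal sketch that redoes the arithmetic from /proc/cpuinfo (a standard Linux interface; the figures are specific to this guest):

# Minimal sketch: sum the per-CPU bogomips values from /proc/cpuinfo to
# reproduce the "Total of 4 processors activated (...)" figure above.
def total_bogomips(path="/proc/cpuinfo"):
    total, cpus = 0.0, 0
    with open(path) as f:
        for line in f:
            if line.lower().startswith("bogomips"):
                total += float(line.split(":")[1])
                cpus += 1
    return cpus, total

cpus, total = total_bogomips()
print(f"{cpus} CPUs, {total:.2f} BogoMIPS total")  # here: 4 CPUs, ~22357.98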
Mar 17 18:41:13.010014 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Mar 17 18:41:13.010022 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Mar 17 18:41:13.010029 kernel: ACPI: Added _OSI(Module Device)
Mar 17 18:41:13.010036 kernel: ACPI: Added _OSI(Processor Device)
Mar 17 18:41:13.010043 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 17 18:41:13.010050 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 17 18:41:13.010057 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Mar 17 18:41:13.010064 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Mar 17 18:41:13.010070 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Mar 17 18:41:13.010077 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 17 18:41:13.010095 kernel: ACPI: Interpreter enabled
Mar 17 18:41:13.010103 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 17 18:41:13.010112 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 17 18:41:13.010140 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 17 18:41:13.010151 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 17 18:41:13.010159 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 17 18:41:13.010318 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 17 18:41:13.010400 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 17 18:41:13.010478 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 17 18:41:13.010487 kernel: PCI host bridge to bus 0000:00
Mar 17 18:41:13.010583 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 17 18:41:13.010654 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 17 18:41:13.010720 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 17 18:41:13.010787 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Mar 17 18:41:13.010853 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 17 18:41:13.010960 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Mar 17 18:41:13.011244 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 17 18:41:13.011371 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 17 18:41:13.011463 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Mar 17 18:41:13.011544 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Mar 17 18:41:13.011618 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Mar 17 18:41:13.011695 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Mar 17 18:41:13.011769 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Mar 17 18:41:13.011841 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 17 18:41:13.011948 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Mar 17 18:41:13.012029 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Mar 17 18:41:13.012126 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Mar 17 18:41:13.012215 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Mar 17 18:41:13.012309 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Mar 17 18:41:13.012384 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Mar 17 18:41:13.012457 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Mar 17 18:41:13.012529 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Mar 17 18:41:13.012618 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 17 18:41:13.012694 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Mar 17 18:41:13.012770 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Mar 17 18:41:13.012843 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Mar 17 18:41:13.013997 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Mar 17 18:41:13.014128 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 17 18:41:13.014214 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 17 18:41:13.014310 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 17 18:41:13.014386 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Mar 17 18:41:13.014466 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Mar 17 18:41:13.014558 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 17 18:41:13.014634 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Mar 17 18:41:13.014643 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 17 18:41:13.014651 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 17 18:41:13.014658 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 17 18:41:13.014665 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 17 18:41:13.014671 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 17 18:41:13.014681 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 17 18:41:13.014688 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 17 18:41:13.014694 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 17 18:41:13.014701 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 17 18:41:13.014708 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 17 18:41:13.014715 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 17 18:41:13.014722 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 17 18:41:13.014729 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 17 18:41:13.014736 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 17 18:41:13.014744 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 17 18:41:13.014751 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 17 18:41:13.014758 kernel: iommu: Default domain type: Translated
Mar 17 18:41:13.014765 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 17 18:41:13.014838 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 17 18:41:13.016380 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 17 18:41:13.016463 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 17 18:41:13.016473 kernel: vgaarb: loaded
Mar 17 18:41:13.016480 kernel: pps_core: LinuxPPS API ver. 1 registered
Mar 17 18:41:13.016669 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Mar 17 18:41:13.016678 kernel: PTP clock support registered
Mar 17 18:41:13.016685 kernel: Registered efivars operations
Mar 17 18:41:13.016692 kernel: PCI: Using ACPI for IRQ routing
Mar 17 18:41:13.016699 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 17 18:41:13.016705 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Mar 17 18:41:13.016712 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Mar 17 18:41:13.016719 kernel: e820: reserve RAM buffer [mem 0x9b438018-0x9bffffff]
Mar 17 18:41:13.016726 kernel: e820: reserve RAM buffer [mem 0x9b475018-0x9bffffff]
Mar 17 18:41:13.016735 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Mar 17 18:41:13.016742 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Mar 17 18:41:13.016749 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 17 18:41:13.016756 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 17 18:41:13.016763 kernel: clocksource: Switched to clocksource kvm-clock
Mar 17 18:41:13.016770 kernel: VFS: Disk quotas dquot_6.6.0
Mar 17 18:41:13.016777 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 17 18:41:13.016784 kernel: pnp: PnP ACPI init
Mar 17 18:41:13.021441 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 17 18:41:13.021460 kernel: pnp: PnP ACPI: found 6 devices
Mar 17 18:41:13.021468 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 17 18:41:13.021475 kernel: NET: Registered PF_INET protocol family
Mar 17 18:41:13.021482 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 17 18:41:13.021489 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 17 18:41:13.021497 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 17 18:41:13.021504 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 17 18:41:13.021511 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Mar 17 18:41:13.021519 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 17 18:41:13.021527 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 18:41:13.021536 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 18:41:13.021543 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 17 18:41:13.021551 kernel: NET: Registered PF_XDP protocol family
Mar 17 18:41:13.021634 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Mar 17 18:41:13.021708 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Mar 17 18:41:13.021775 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 17 18:41:13.021842 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 17 18:41:13.021907 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 17 18:41:13.022004 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Mar 17 18:41:13.022104 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 17 18:41:13.022214 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Mar 17 18:41:13.022226 kernel: PCI: CLS 0 bytes, default 64
Mar 17 18:41:13.022233 kernel: Initialise system trusted keyrings
Mar 17 18:41:13.022241 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 17 18:41:13.022252 kernel: Key type asymmetric registered
Mar 17 18:41:13.022271 kernel: Asymmetric key parser 'x509' registered
Mar 17 18:41:13.022279 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Mar 17 18:41:13.022299 kernel: io scheduler mq-deadline registered
Mar 17 18:41:13.022307 kernel: io scheduler kyber registered
Mar 17 18:41:13.022315 kernel: io scheduler bfq registered
Mar 17 18:41:13.022322 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 17 18:41:13.022330 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 17 18:41:13.022337 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 17 18:41:13.022345 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 17 18:41:13.022353 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 17 18:41:13.022360 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 17 18:41:13.022367 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 17 18:41:13.022384 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 17 18:41:13.022395 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 17 18:41:13.022501 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 17 18:41:13.022514 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 17 18:41:13.022585 kernel: rtc_cmos 00:04: registered as rtc0
Mar 17 18:41:13.022676 kernel: rtc_cmos 00:04: setting system clock to 2025-03-17T18:41:12 UTC (1742236872)
Mar 17 18:41:13.022762 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 17 18:41:13.022773 kernel: efifb: probing for efifb
Mar 17 18:41:13.022781 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Mar 17 18:41:13.022801 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Mar 17 18:41:13.022809 kernel: efifb: scrolling: redraw
Mar 17 18:41:13.022816 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 17 18:41:13.022823 kernel: Console: switching to colour frame buffer device 160x50
Mar 17 18:41:13.022834 kernel: fb0: EFI VGA frame buffer device
Mar 17 18:41:13.022841 kernel: pstore: Registered efi as persistent store backend
Mar 17 18:41:13.022848 kernel: NET: Registered PF_INET6 protocol family
Mar 17 18:41:13.022856 kernel: Segment Routing with IPv6
Mar 17 18:41:13.022876 kernel: In-situ OAM (IOAM) with IPv6
Mar 17 18:41:13.022885 kernel: NET: Registered PF_PACKET protocol family
Mar 17 18:41:13.022895 kernel: Key type dns_resolver registered
Mar 17 18:41:13.022902 kernel: IPI shorthand broadcast: enabled
Mar 17 18:41:13.022909 kernel: sched_clock: Marking stable (542001674, 127288904)->(696404447, -27113869)
Mar 17 18:41:13.027031 kernel: registered taskstats version 1
Mar 17 18:41:13.027056 kernel: Loading compiled-in X.509 certificates
Mar 17 18:41:13.027066 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.179-flatcar: d5b956bbabb2d386c0246a969032c0de9eaa8220'
Mar 17 18:41:13.027074 kernel: Key type .fscrypt registered
Mar 17 18:41:13.027092 kernel: Key type fscrypt-provisioning registered
Mar 17 18:41:13.027100 kernel: pstore: Using crash dump compression: deflate
Mar 17 18:41:13.027113 kernel: ima: No TPM chip found, activating TPM-bypass!
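Each "pci 0000:00:XX.X: [vvvv:dddd]" line in the enumeration above is a vendor:device ID pair read from PCI config space: 1af4 is the virtio vendor (1af4:1001 is the block device that becomes /dev/vda, 1af4:1000 the network device), and 8086:2922 is the Q35 AHCI controller. A minimal sketch that re-lists the same IDs from sysfs, assuming the standard Linux /sys/bus/pci layout:

# Minimal sketch: list PCI vendor:device IDs as the kernel enumerated
# them above, using standard sysfs attribute files.
from pathlib import Path

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    vendor = (dev / "vendor").read_text().strip()  # e.g. "0x1af4"
    device = (dev / "device").read_text().strip()  # e.g. "0x1001"
    klass = (dev / "class").read_text().strip()    # e.g. "0x010000"
    print(f"{dev.name} [{vendor[2:]}:{device[2:]}] class {klass}")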
Mar 17 18:41:13.027121 kernel: ima: Allocated hash algorithm: sha1
Mar 17 18:41:13.027129 kernel: ima: No architecture policies found
Mar 17 18:41:13.027136 kernel: clk: Disabling unused clocks
Mar 17 18:41:13.027144 kernel: Freeing unused kernel image (initmem) memory: 47472K
Mar 17 18:41:13.027153 kernel: Write protecting the kernel read-only data: 28672k
Mar 17 18:41:13.027160 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Mar 17 18:41:13.027168 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K
Mar 17 18:41:13.027176 kernel: Run /init as init process
Mar 17 18:41:13.027185 kernel: with arguments:
Mar 17 18:41:13.027194 kernel: /init
Mar 17 18:41:13.027201 kernel: with environment:
Mar 17 18:41:13.027208 kernel: HOME=/
Mar 17 18:41:13.027215 kernel: TERM=linux
Mar 17 18:41:13.027223 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 17 18:41:13.027235 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Mar 17 18:41:13.027248 systemd[1]: Detected virtualization kvm.
Mar 17 18:41:13.027258 systemd[1]: Detected architecture x86-64.
Mar 17 18:41:13.027265 systemd[1]: Running in initrd.
Mar 17 18:41:13.027273 systemd[1]: No hostname configured, using default hostname.
Mar 17 18:41:13.027281 systemd[1]: Hostname set to <localhost>.
Mar 17 18:41:13.027289 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 18:41:13.027297 systemd[1]: Queued start job for default target initrd.target.
Mar 17 18:41:13.027305 systemd[1]: Started systemd-ask-password-console.path.
Mar 17 18:41:13.027312 systemd[1]: Reached target cryptsetup.target.
Mar 17 18:41:13.027322 systemd[1]: Reached target paths.target.
Mar 17 18:41:13.027330 systemd[1]: Reached target slices.target.
Mar 17 18:41:13.027338 systemd[1]: Reached target swap.target.
Mar 17 18:41:13.027348 systemd[1]: Reached target timers.target.
Mar 17 18:41:13.027356 systemd[1]: Listening on iscsid.socket.
Mar 17 18:41:13.027364 systemd[1]: Listening on iscsiuio.socket.
Mar 17 18:41:13.027372 systemd[1]: Listening on systemd-journald-audit.socket.
Mar 17 18:41:13.027382 systemd[1]: Listening on systemd-journald-dev-log.socket.
Mar 17 18:41:13.027390 systemd[1]: Listening on systemd-journald.socket.
Mar 17 18:41:13.027398 systemd[1]: Listening on systemd-networkd.socket.
Mar 17 18:41:13.027407 systemd[1]: Listening on systemd-udevd-control.socket.
Mar 17 18:41:13.027415 systemd[1]: Listening on systemd-udevd-kernel.socket.
Mar 17 18:41:13.027423 systemd[1]: Reached target sockets.target.
Mar 17 18:41:13.027431 systemd[1]: Starting kmod-static-nodes.service...
Mar 17 18:41:13.027438 systemd[1]: Finished network-cleanup.service.
Mar 17 18:41:13.027446 systemd[1]: Starting systemd-fsck-usr.service...
Mar 17 18:41:13.027456 systemd[1]: Starting systemd-journald.service...
Mar 17 18:41:13.027464 systemd[1]: Starting systemd-modules-load.service...
Mar 17 18:41:13.027472 systemd[1]: Starting systemd-resolved.service...
Mar 17 18:41:13.027480 systemd[1]: Starting systemd-vconsole-setup.service...
Mar 17 18:41:13.027487 systemd[1]: Finished kmod-static-nodes.service.
Mar 17 18:41:13.027495 systemd[1]: Finished systemd-fsck-usr.service.
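Note the earlier kernel warning that BOOT_IMAGE= is unknown and "will be passed to user space": that is exactly why /init receives BOOT_IMAGE in its environment above, while recognized parameters stay on the command line. A minimal sketch of splitting /proc/cmdline into the key=value parameters shown in this log (the specific keys, such as verity.usrhash, are Flatcar's):

# Minimal sketch: parse /proc/cmdline into a dict of the parameters
# printed in the "Kernel command line:" entry above.
def parse_cmdline(path="/proc/cmdline"):
    params = {}
    with open(path) as f:
        for tok in f.read().split():
            key, _, value = tok.partition("=")  # flag-only tokens get ""
            params[key] = value
    return params

params = parse_cmdline()
print(params.get("root"))            # "LABEL=ROOT" in this boot
print(params.get("verity.usrhash"))  # hash pinning the /usr verity tree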
Mar 17 18:41:13.027504 kernel: audit: type=1130 audit(1742236873.006:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:13.027512 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Mar 17 18:41:13.027520 systemd[1]: Finished systemd-vconsole-setup.service.
Mar 17 18:41:13.027530 kernel: audit: type=1130 audit(1742236873.016:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:13.027538 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Mar 17 18:41:13.027548 kernel: audit: type=1130 audit(1742236873.022:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:13.027557 systemd[1]: Starting dracut-cmdline-ask.service...
Mar 17 18:41:13.027572 systemd-journald[197]: Journal started
Mar 17 18:41:13.027639 systemd-journald[197]: Runtime Journal (/run/log/journal/fd93f6f0d41442589f7072a21cb31917) is 6.0M, max 48.4M, 42.4M free.
Mar 17 18:41:13.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:13.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:13.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:13.010195 systemd-modules-load[198]: Inserted module 'overlay'
Mar 17 18:41:13.029553 systemd[1]: Started systemd-journald.service.
Mar 17 18:41:13.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:13.032701 kernel: audit: type=1130 audit(1742236873.027:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:13.040213 systemd-resolved[199]: Positive Trust Anchors:
Mar 17 18:41:13.040893 systemd-resolved[199]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 18:41:13.040943 systemd-resolved[199]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Mar 17 18:41:13.052345 kernel: audit: type=1130 audit(1742236873.047:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:13.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:13.043530 systemd-resolved[199]: Defaulting to hostname 'linux'.
Mar 17 18:41:13.044340 systemd[1]: Started systemd-resolved.service.
Mar 17 18:41:13.048549 systemd[1]: Reached target nss-lookup.target.
Mar 17 18:41:13.057765 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 17 18:41:13.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:13.054365 systemd[1]: Finished dracut-cmdline-ask.service.
Mar 17 18:41:13.061443 kernel: audit: type=1130 audit(1742236873.056:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:13.061419 systemd[1]: Starting dracut-cmdline.service...
Mar 17 18:41:13.063564 kernel: Bridge firewalling registered
Mar 17 18:41:13.062758 systemd-modules-load[198]: Inserted module 'br_netfilter'
Mar 17 18:41:13.069480 dracut-cmdline[216]: dracut-dracut-053
Mar 17 18:41:13.071530 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=249ccd113f901380672c0d31e18f792e8e0344094c0e39eedc449f039418b31a
Mar 17 18:41:13.081961 kernel: SCSI subsystem initialized
Mar 17 18:41:13.093241 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 17 18:41:13.093308 kernel: device-mapper: uevent: version 1.0.3
Mar 17 18:41:13.093327 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Mar 17 18:41:13.098190 systemd-modules-load[198]: Inserted module 'dm_multipath'
Mar 17 18:41:13.100145 systemd[1]: Finished systemd-modules-load.service.
Mar 17 18:41:13.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:13.101969 systemd[1]: Starting systemd-sysctl.service...
Mar 17 18:41:13.105939 kernel: audit: type=1130 audit(1742236873.101:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:13.111320 systemd[1]: Finished systemd-sysctl.service.
Mar 17 18:41:13.115704 kernel: audit: type=1130 audit(1742236873.110:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:13.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:13.142944 kernel: Loading iSCSI transport class v2.0-870.
Mar 17 18:41:13.158949 kernel: iscsi: registered transport (tcp)
Mar 17 18:41:13.180967 kernel: iscsi: registered transport (qla4xxx)
Mar 17 18:41:13.181045 kernel: QLogic iSCSI HBA Driver
Mar 17 18:41:13.211539 systemd[1]: Finished dracut-cmdline.service.
Mar 17 18:41:13.213000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:13.214202 systemd[1]: Starting dracut-pre-udev.service...
Mar 17 18:41:13.217518 kernel: audit: type=1130 audit(1742236873.213:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:13.262966 kernel: raid6: avx2x4 gen() 28669 MB/s
Mar 17 18:41:13.279969 kernel: raid6: avx2x4 xor() 7761 MB/s
Mar 17 18:41:13.296961 kernel: raid6: avx2x2 gen() 31683 MB/s
Mar 17 18:41:13.314417 kernel: raid6: avx2x2 xor() 15952 MB/s
Mar 17 18:41:13.330963 kernel: raid6: avx2x1 gen() 25016 MB/s
Mar 17 18:41:13.347951 kernel: raid6: avx2x1 xor() 14874 MB/s
Mar 17 18:41:13.364952 kernel: raid6: sse2x4 gen() 10850 MB/s
Mar 17 18:41:13.381954 kernel: raid6: sse2x4 xor() 6371 MB/s
Mar 17 18:41:13.398938 kernel: raid6: sse2x2 gen() 16293 MB/s
Mar 17 18:41:13.415944 kernel: raid6: sse2x2 xor() 9787 MB/s
Mar 17 18:41:13.432946 kernel: raid6: sse2x1 gen() 12520 MB/s
Mar 17 18:41:13.450360 kernel: raid6: sse2x1 xor() 7798 MB/s
Mar 17 18:41:13.450371 kernel: raid6: using algorithm avx2x2 gen() 31683 MB/s
Mar 17 18:41:13.450392 kernel: raid6: .... xor() 15952 MB/s, rmw enabled
Mar 17 18:41:13.451089 kernel: raid6: using avx2x2 recovery algorithm
Mar 17 18:41:13.462942 kernel: xor: automatically using best checksumming function avx
Mar 17 18:41:13.559965 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Mar 17 18:41:13.569718 systemd[1]: Finished dracut-pre-udev.service.
Mar 17 18:41:13.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:13.571000 audit: BPF prog-id=7 op=LOAD
Mar 17 18:41:13.571000 audit: BPF prog-id=8 op=LOAD
Mar 17 18:41:13.572814 systemd[1]: Starting systemd-udevd.service...
Mar 17 18:41:13.585688 systemd-udevd[400]: Using default interface naming scheme 'v252'.
Mar 17 18:41:13.589721 systemd[1]: Started systemd-udevd.service.
Mar 17 18:41:13.590000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:13.593452 systemd[1]: Starting dracut-pre-trigger.service...
Mar 17 18:41:13.603975 dracut-pre-trigger[412]: rd.md=0: removing MD RAID activation
Mar 17 18:41:13.631426 systemd[1]: Finished dracut-pre-trigger.service.
Mar 17 18:41:13.631000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:13.633406 systemd[1]: Starting systemd-udev-trigger.service...
Mar 17 18:41:13.671474 systemd[1]: Finished systemd-udev-trigger.service.
Mar 17 18:41:13.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:13.703273 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 17 18:41:13.709368 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 17 18:41:13.709385 kernel: GPT:9289727 != 19775487
Mar 17 18:41:13.709397 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 17 18:41:13.709408 kernel: GPT:9289727 != 19775487
Mar 17 18:41:13.709418 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 17 18:41:13.709432 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 18:41:13.712941 kernel: cryptd: max_cpu_qlen set to 1000
Mar 17 18:41:13.724212 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 17 18:41:13.724238 kernel: AES CTR mode by8 optimization enabled
Mar 17 18:41:13.724945 kernel: libata version 3.00 loaded.
Mar 17 18:41:13.733001 kernel: ahci 0000:00:1f.2: version 3.0
Mar 17 18:41:13.793704 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 17 18:41:13.793723 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 17 18:41:13.793832 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 17 18:41:13.793941 kernel: scsi host0: ahci
Mar 17 18:41:13.794052 kernel: scsi host1: ahci
Mar 17 18:41:13.794174 kernel: scsi host2: ahci
Mar 17 18:41:13.794281 kernel: scsi host3: ahci
Mar 17 18:41:13.794384 kernel: scsi host4: ahci
Mar 17 18:41:13.794485 kernel: scsi host5: ahci
Mar 17 18:41:13.794595 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Mar 17 18:41:13.794605 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Mar 17 18:41:13.794614 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Mar 17 18:41:13.794623 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Mar 17 18:41:13.794632 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (456)
Mar 17 18:41:13.794641 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Mar 17 18:41:13.794649 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Mar 17 18:41:13.786659 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Mar 17 18:41:13.796731 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Mar 17 18:41:13.797958 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Mar 17 18:41:13.805458 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Mar 17 18:41:13.809526 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Mar 17 18:41:13.811285 systemd[1]: Starting disk-uuid.service...
Mar 17 18:41:13.818256 disk-uuid[530]: Primary Header is updated.
Mar 17 18:41:13.818256 disk-uuid[530]: Secondary Entries is updated.
Mar 17 18:41:13.818256 disk-uuid[530]: Secondary Header is updated.
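The GPT warnings above ("GPT:9289727 != 19775487") mean the primary GPT header's alternate-LBA field still points at the last sector of the original, smaller disk image, while the device has since been grown to 19775488 sectors. A minimal sketch of the check the kernel performs, assuming 512-byte sectors, root access, and the /dev/vda device from this log (field offsets per the UEFI GPT header layout):

# Minimal sketch of the mismatch behind "GPT:9289727 != 19775487":
# compare the primary header's alternate-LBA field against the real
# last sector of the (grown) disk. Assumes 512-byte sectors and root.
import struct

def gpt_alternate_mismatch(dev="/dev/vda", sector=512):
    with open(dev, "rb") as f:
        f.seek(0, 2)
        last_lba = f.tell() // sector - 1   # 19775487 on this guest
        f.seek(1 * sector)                   # primary GPT header at LBA 1
        hdr = f.read(92)
    assert hdr[:8] == b"EFI PART", "no GPT signature"
    alternate_lba = struct.unpack_from("<Q", hdr, 32)[0]  # 9289727 here
    return alternate_lba, last_lba

alt, last = gpt_alternate_mismatch()
print(f"GPT:{alt} != {last}" if alt != last else "backup header at disk end")

Moving the backup header to the disk's true last sector (with GNU Parted, as the log suggests, or a tool such as sgdisk) makes the two values agree; the disk-uuid entries that follow show the headers being rewritten.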
Mar 17 18:41:13.821962 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 18:41:13.824952 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 18:41:13.827941 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 18:41:14.100429 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 17 18:41:14.100512 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 17 18:41:14.100522 kernel: ata3.00: applying bridge limits
Mar 17 18:41:14.103694 kernel: ata3.00: configured for UDMA/100
Mar 17 18:41:14.103783 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 17 18:41:14.103793 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 17 18:41:14.105948 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 17 18:41:14.109944 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 17 18:41:14.109967 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 17 18:41:14.110939 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 17 18:41:14.144054 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 17 18:41:14.161640 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 17 18:41:14.161653 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Mar 17 18:41:14.827115 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 18:41:14.827177 disk-uuid[531]: The operation has completed successfully.
Mar 17 18:41:14.847737 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 17 18:41:14.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:14.849000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:14.847852 systemd[1]: Finished disk-uuid.service.
Mar 17 18:41:14.859400 systemd[1]: Starting verity-setup.service...
Mar 17 18:41:14.870936 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Mar 17 18:41:14.889788 systemd[1]: Found device dev-mapper-usr.device.
Mar 17 18:41:14.892222 systemd[1]: Mounting sysusr-usr.mount...
Mar 17 18:41:14.895345 systemd[1]: Finished verity-setup.service.
Mar 17 18:41:14.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:14.950695 systemd[1]: Mounted sysusr-usr.mount.
Mar 17 18:41:14.952044 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Mar 17 18:41:14.951054 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Mar 17 18:41:14.952097 systemd[1]: Starting ignition-setup.service...
Mar 17 18:41:14.954600 systemd[1]: Starting parse-ip-for-networkd.service...
Mar 17 18:41:14.965687 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 17 18:41:14.965725 kernel: BTRFS info (device vda6): using free space tree
Mar 17 18:41:14.965735 kernel: BTRFS info (device vda6): has skinny extents
Mar 17 18:41:14.972675 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 17 18:41:14.980119 systemd[1]: Finished ignition-setup.service.
Mar 17 18:41:14.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:14.981269 systemd[1]: Starting ignition-fetch-offline.service...
Mar 17 18:41:15.018992 ignition[649]: Ignition 2.14.0
Mar 17 18:41:15.019031 ignition[649]: Stage: fetch-offline
Mar 17 18:41:15.019185 ignition[649]: no configs at "/usr/lib/ignition/base.d"
Mar 17 18:41:15.019198 ignition[649]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 18:41:15.019329 ignition[649]: parsed url from cmdline: ""
Mar 17 18:41:15.022999 systemd[1]: Finished parse-ip-for-networkd.service.
Mar 17 18:41:15.019333 ignition[649]: no config URL provided
Mar 17 18:41:15.019339 ignition[649]: reading system config file "/usr/lib/ignition/user.ign"
Mar 17 18:41:15.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:15.027000 audit: BPF prog-id=9 op=LOAD
Mar 17 18:41:15.019348 ignition[649]: no config at "/usr/lib/ignition/user.ign"
Mar 17 18:41:15.020230 ignition[649]: op(1): [started] loading QEMU firmware config module
Mar 17 18:41:15.028238 systemd[1]: Starting systemd-networkd.service...
Mar 17 18:41:15.020237 ignition[649]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 17 18:41:15.025819 ignition[649]: op(1): [finished] loading QEMU firmware config module
Mar 17 18:41:15.025834 ignition[649]: QEMU firmware config was not found. Ignoring...
Mar 17 18:41:15.068659 ignition[649]: parsing config with SHA512: 428d9170e8378db103e32c4969e81b633eb95d132f6cce8c348e1be00caac0fac4ea80cb449c9cda5c40acdebcca2bd174eb58840dc36e394b266e8364135bea
Mar 17 18:41:15.076488 unknown[649]: fetched base config from "system"
Mar 17 18:41:15.076511 unknown[649]: fetched user config from "qemu"
Mar 17 18:41:15.079198 ignition[649]: fetch-offline: fetch-offline passed
Mar 17 18:41:15.079293 ignition[649]: Ignition finished successfully
Mar 17 18:41:15.081538 systemd[1]: Finished ignition-fetch-offline.service.
Mar 17 18:41:15.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:15.088378 systemd-networkd[723]: lo: Link UP
Mar 17 18:41:15.088387 systemd-networkd[723]: lo: Gained carrier
Mar 17 18:41:15.088908 systemd-networkd[723]: Enumeration completed
Mar 17 18:41:15.089196 systemd-networkd[723]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 18:41:15.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:15.089396 systemd[1]: Started systemd-networkd.service.
Mar 17 18:41:15.090608 systemd-networkd[723]: eth0: Link UP
Mar 17 18:41:15.090612 systemd-networkd[723]: eth0: Gained carrier
Mar 17 18:41:15.094437 systemd[1]: Reached target network.target.
Mar 17 18:41:15.097565 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 17 18:41:15.099842 systemd[1]: Starting ignition-kargs.service...
Mar 17 18:41:15.101933 systemd[1]: Starting iscsiuio.service...
Mar 17 18:41:15.103297 systemd-networkd[723]: eth0: DHCPv4 address 10.0.0.88/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 17 18:41:15.106166 systemd[1]: Started iscsiuio.service.
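Ignition logs the SHA512 of whatever config it ends up parsing ("parsing config with SHA512: 428d..."). In this boot no on-disk user config existed, so the digest covers the merged built-in base config rather than a file; the same computation over any config file looks like this minimal sketch (the path is the one Ignition reports probing above, not a file actually present on this host):

# Minimal sketch: SHA512 of an Ignition config file, matching the
# digest form Ignition prints in the log above.
import hashlib

def config_sha512(path="/usr/lib/ignition/user.ign"):
    with open(path, "rb") as f:
        return hashlib.sha512(f.read()).hexdigest()

print(config_sha512())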
Mar 17 18:41:15.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:15.108172 systemd[1]: Starting iscsid.service... Mar 17 18:41:15.111202 iscsid[735]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Mar 17 18:41:15.111202 iscsid[735]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Mar 17 18:41:15.111202 iscsid[735]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Mar 17 18:41:15.111202 iscsid[735]: If using hardware iscsi like qla4xxx this message can be ignored. Mar 17 18:41:15.111202 iscsid[735]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Mar 17 18:41:15.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:15.122683 ignition[726]: Ignition 2.14.0 Mar 17 18:41:15.112392 systemd[1]: Started iscsid.service. Mar 17 18:41:15.126279 iscsid[735]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Mar 17 18:41:15.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:15.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:15.122688 ignition[726]: Stage: kargs Mar 17 18:41:15.114121 systemd[1]: Starting dracut-initqueue.service... Mar 17 18:41:15.122791 ignition[726]: no configs at "/usr/lib/ignition/base.d" Mar 17 18:41:15.124869 systemd[1]: Finished ignition-kargs.service. Mar 17 18:41:15.122800 ignition[726]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 18:41:15.126483 systemd[1]: Finished dracut-initqueue.service. Mar 17 18:41:15.123604 ignition[726]: kargs: kargs passed Mar 17 18:41:15.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:15.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:15.128487 systemd[1]: Reached target remote-fs-pre.target. Mar 17 18:41:15.123636 ignition[726]: Ignition finished successfully Mar 17 18:41:15.128734 systemd[1]: Reached target remote-cryptsetup.target. Mar 17 18:41:15.138036 ignition[746]: Ignition 2.14.0 Mar 17 18:41:15.128910 systemd[1]: Reached target remote-fs.target. Mar 17 18:41:15.138041 ignition[746]: Stage: disks Mar 17 18:41:15.129888 systemd[1]: Starting dracut-pre-mount.service... Mar 17 18:41:15.138122 ignition[746]: no configs at "/usr/lib/ignition/base.d" Mar 17 18:41:15.130808 systemd[1]: Starting ignition-disks.service...
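The iscsid warning above describes its own remedy: create an InitiatorName file containing a well-formed IQN. A minimal /etc/iscsi/initiatorname.iscsi following the format the message gives (the IQN below is a made-up example; open-iscsi's iscsi-iname utility can generate a unique one):

    # /etc/iscsi/initiatorname.iscsi
    InitiatorName=iqn.2004-10.com.example.host01:boot

On an image that never logs into iSCSI targets, as here, the warning is harmless, which is why boot continues normally.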
Mar 17 18:41:15.138131 ignition[746]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 18:41:15.139108 systemd[1]: Finished dracut-pre-mount.service. Mar 17 18:41:15.139108 ignition[746]: disks: disks passed Mar 17 18:41:15.140488 systemd[1]: Finished ignition-disks.service. Mar 17 18:41:15.139140 ignition[746]: Ignition finished successfully Mar 17 18:41:15.141975 systemd[1]: Reached target initrd-root-device.target. Mar 17 18:41:15.142992 systemd[1]: Reached target local-fs-pre.target. Mar 17 18:41:15.144738 systemd[1]: Reached target local-fs.target. Mar 17 18:41:15.163305 systemd-fsck[760]: ROOT: clean, 623/553520 files, 56022/553472 blocks Mar 17 18:41:15.145648 systemd[1]: Reached target sysinit.target. Mar 17 18:41:15.147323 systemd[1]: Reached target basic.target. Mar 17 18:41:15.149174 systemd[1]: Starting systemd-fsck-root.service... Mar 17 18:41:15.168222 systemd[1]: Finished systemd-fsck-root.service. Mar 17 18:41:15.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:15.169605 systemd[1]: Mounting sysroot.mount... Mar 17 18:41:15.175946 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Mar 17 18:41:15.176074 systemd[1]: Mounted sysroot.mount. Mar 17 18:41:15.177454 systemd[1]: Reached target initrd-root-fs.target. Mar 17 18:41:15.179812 systemd[1]: Mounting sysroot-usr.mount... Mar 17 18:41:15.181513 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Mar 17 18:41:15.181549 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 17 18:41:15.181568 systemd[1]: Reached target ignition-diskful.target. Mar 17 18:41:15.186884 systemd[1]: Mounted sysroot-usr.mount. Mar 17 18:41:15.188885 systemd[1]: Starting initrd-setup-root.service... Mar 17 18:41:15.192889 initrd-setup-root[770]: cut: /sysroot/etc/passwd: No such file or directory Mar 17 18:41:15.196335 initrd-setup-root[778]: cut: /sysroot/etc/group: No such file or directory Mar 17 18:41:15.200373 initrd-setup-root[786]: cut: /sysroot/etc/shadow: No such file or directory Mar 17 18:41:15.204175 initrd-setup-root[794]: cut: /sysroot/etc/gshadow: No such file or directory Mar 17 18:41:15.232411 systemd[1]: Finished initrd-setup-root.service. Mar 17 18:41:15.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:15.233805 systemd[1]: Starting ignition-mount.service... Mar 17 18:41:15.236195 systemd[1]: Starting sysroot-boot.service... Mar 17 18:41:15.238640 bash[811]: umount: /sysroot/usr/share/oem: not mounted. 
Mar 17 18:41:15.245633 ignition[812]: INFO : Ignition 2.14.0 Mar 17 18:41:15.245633 ignition[812]: INFO : Stage: mount Mar 17 18:41:15.247410 ignition[812]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 18:41:15.247410 ignition[812]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 18:41:15.247410 ignition[812]: INFO : mount: mount passed Mar 17 18:41:15.247410 ignition[812]: INFO : Ignition finished successfully Mar 17 18:41:15.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:15.248112 systemd[1]: Finished ignition-mount.service. Mar 17 18:41:15.258355 systemd[1]: Finished sysroot-boot.service. Mar 17 18:41:15.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:15.900684 systemd[1]: Mounting sysroot-usr-share-oem.mount... Mar 17 18:41:15.908741 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (821) Mar 17 18:41:15.908770 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 17 18:41:15.908780 kernel: BTRFS info (device vda6): using free space tree Mar 17 18:41:15.909542 kernel: BTRFS info (device vda6): has skinny extents Mar 17 18:41:15.913653 systemd[1]: Mounted sysroot-usr-share-oem.mount. Mar 17 18:41:15.915097 systemd[1]: Starting ignition-files.service... Mar 17 18:41:15.927666 ignition[841]: INFO : Ignition 2.14.0 Mar 17 18:41:15.927666 ignition[841]: INFO : Stage: files Mar 17 18:41:15.929336 ignition[841]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 18:41:15.929336 ignition[841]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 18:41:15.929336 ignition[841]: DEBUG : files: compiled without relabeling support, skipping Mar 17 18:41:15.932845 ignition[841]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 17 18:41:15.932845 ignition[841]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 17 18:41:15.935460 ignition[841]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 17 18:41:15.936786 ignition[841]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 17 18:41:15.938497 unknown[841]: wrote ssh authorized keys file for user: core Mar 17 18:41:15.939552 ignition[841]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 17 18:41:15.940981 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Mar 17 18:41:15.940981 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Mar 17 18:41:15.995386 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 17 18:41:16.098362 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Mar 17 18:41:16.100591 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 17 18:41:16.100591 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Mar 17 18:41:16.616710 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Mar 17 18:41:16.730674 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 17 18:41:16.730674 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Mar 17 18:41:16.734316 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Mar 17 18:41:16.734316 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 17 18:41:16.737676 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 17 18:41:16.739368 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 17 18:41:16.741108 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 17 18:41:16.742783 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 17 18:41:16.744539 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 17 18:41:16.746329 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 18:41:16.748083 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 18:41:16.749794 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Mar 17 18:41:16.752229 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Mar 17 18:41:16.752229 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Mar 17 18:41:16.756677 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 Mar 17 18:41:16.837228 systemd-networkd[723]: eth0: Gained IPv6LL Mar 17 18:41:17.184342 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Mar 17 18:41:17.792606 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Mar 17 18:41:17.792606 ignition[841]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Mar 17 18:41:17.796858 ignition[841]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 17 18:41:17.796858 ignition[841]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 17 
18:41:17.796858 ignition[841]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Mar 17 18:41:17.796858 ignition[841]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Mar 17 18:41:17.796858 ignition[841]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 17 18:41:17.796858 ignition[841]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 17 18:41:17.796858 ignition[841]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Mar 17 18:41:17.796858 ignition[841]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Mar 17 18:41:17.796858 ignition[841]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Mar 17 18:41:17.796858 ignition[841]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service" Mar 17 18:41:17.796858 ignition[841]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service" Mar 17 18:41:17.843191 ignition[841]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service" Mar 17 18:41:17.844834 ignition[841]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service" Mar 17 18:41:17.844834 ignition[841]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 17 18:41:17.844834 ignition[841]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 17 18:41:17.844834 ignition[841]: INFO : files: files passed Mar 17 18:41:17.844834 ignition[841]: INFO : Ignition finished successfully Mar 17 18:41:17.851975 systemd[1]: Finished ignition-files.service. Mar 17 18:41:17.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:17.853161 systemd[1]: Starting initrd-setup-root-after-ignition.service... Mar 17 18:41:17.859684 kernel: kauditd_printk_skb: 24 callbacks suppressed Mar 17 18:41:17.859704 kernel: audit: type=1130 audit(1742236877.851:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:17.858732 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Mar 17 18:41:17.859724 systemd[1]: Starting ignition-quench.service... Mar 17 18:41:17.867103 kernel: audit: type=1130 audit(1742236877.862:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:17.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:17.867143 initrd-setup-root-after-ignition[864]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Mar 17 18:41:17.860679 systemd[1]: Finished initrd-setup-root-after-ignition.service. 
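The "setting preset to enabled/disabled" operations above correspond to an ordinary systemd preset file, one directive per unit; Ignition conventionally writes it under /etc/systemd/system-preset/ (the exact path below is an assumption, not shown in the log):

    # /sysroot/etc/systemd/system-preset/20-ignition.preset (assumed path)
    enable prepare-helm.service
    disable coreos-metadata.service

Applying the preset is what creates or removes the enablement symlinks, which is the op(11)/op(12) step logged for coreos-metadata.service.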
Mar 17 18:41:17.907353 initrd-setup-root-after-ignition[866]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 17 18:41:17.862722 systemd[1]: Reached target ignition-complete.target. Mar 17 18:41:17.867998 systemd[1]: Starting initrd-parse-etc.service... Mar 17 18:41:17.913073 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 17 18:41:17.913163 systemd[1]: Finished ignition-quench.service. Mar 17 18:41:17.921370 kernel: audit: type=1130 audit(1742236877.914:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:17.921390 kernel: audit: type=1131 audit(1742236877.914:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:17.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:17.914000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:17.915152 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 17 18:41:17.928988 kernel: audit: type=1130 audit(1742236877.921:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:17.929006 kernel: audit: type=1131 audit(1742236877.921:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:17.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:17.921000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:17.915214 systemd[1]: Finished initrd-parse-etc.service. Mar 17 18:41:17.922001 systemd[1]: Reached target initrd-fs.target. Mar 17 18:41:17.929376 systemd[1]: Reached target initrd.target. Mar 17 18:41:17.929715 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Mar 17 18:41:17.930347 systemd[1]: Starting dracut-pre-pivot.service... Mar 17 18:41:17.941001 systemd[1]: Finished dracut-pre-pivot.service. Mar 17 18:41:17.945483 kernel: audit: type=1130 audit(1742236877.940:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:17.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:17.945548 systemd[1]: Starting initrd-cleanup.service... Mar 17 18:41:17.954207 systemd[1]: Stopped target nss-lookup.target. Mar 17 18:41:17.967338 systemd[1]: Stopped target remote-cryptsetup.target. 
Mar 17 18:41:17.967650 systemd[1]: Stopped target timers.target. Mar 17 18:41:17.970297 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 17 18:41:17.976573 kernel: audit: type=1131 audit(1742236877.971:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:17.971000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:17.970387 systemd[1]: Stopped dracut-pre-pivot.service. Mar 17 18:41:17.971751 systemd[1]: Stopped target initrd.target. Mar 17 18:41:17.976618 systemd[1]: Stopped target basic.target. Mar 17 18:41:17.977444 systemd[1]: Stopped target ignition-complete.target. Mar 17 18:41:17.979068 systemd[1]: Stopped target ignition-diskful.target. Mar 17 18:41:17.980566 systemd[1]: Stopped target initrd-root-device.target. Mar 17 18:41:17.982113 systemd[1]: Stopped target remote-fs.target. Mar 17 18:41:17.983689 systemd[1]: Stopped target remote-fs-pre.target. Mar 17 18:41:17.985338 systemd[1]: Stopped target sysinit.target. Mar 17 18:41:17.986946 systemd[1]: Stopped target local-fs.target. Mar 17 18:41:17.988395 systemd[1]: Stopped target local-fs-pre.target. Mar 17 18:41:17.997499 kernel: audit: type=1131 audit(1742236877.992:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:17.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:17.989970 systemd[1]: Stopped target swap.target. Mar 17 18:41:17.991363 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 17 18:41:18.003632 kernel: audit: type=1131 audit(1742236877.999:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:17.999000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:17.991451 systemd[1]: Stopped dracut-pre-mount.service. Mar 17 18:41:18.003000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:17.993177 systemd[1]: Stopped target cryptsetup.target. Mar 17 18:41:17.997563 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 17 18:41:17.997690 systemd[1]: Stopped dracut-initqueue.service. Mar 17 18:41:17.999177 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 17 18:41:17.999301 systemd[1]: Stopped ignition-fetch-offline.service. Mar 17 18:41:18.003771 systemd[1]: Stopped target paths.target. Mar 17 18:41:18.005181 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 17 18:41:18.010011 systemd[1]: Stopped systemd-ask-password-console.path. Mar 17 18:41:18.011291 systemd[1]: Stopped target slices.target. Mar 17 18:41:18.013016 systemd[1]: Stopped target sockets.target. 
Mar 17 18:41:18.014610 systemd[1]: iscsid.socket: Deactivated successfully. Mar 17 18:41:18.014686 systemd[1]: Closed iscsid.socket. Mar 17 18:41:18.019000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:18.016015 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 17 18:41:18.020000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:18.016085 systemd[1]: Closed iscsiuio.socket. Mar 17 18:41:18.017405 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 17 18:41:18.022000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:18.017504 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Mar 17 18:41:18.019205 systemd[1]: ignition-files.service: Deactivated successfully. Mar 17 18:41:18.027000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:18.019293 systemd[1]: Stopped ignition-files.service. Mar 17 18:41:18.028000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:18.030435 ignition[881]: INFO : Ignition 2.14.0 Mar 17 18:41:18.030435 ignition[881]: INFO : Stage: umount Mar 17 18:41:18.030435 ignition[881]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 18:41:18.030435 ignition[881]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 18:41:18.030435 ignition[881]: INFO : umount: umount passed Mar 17 18:41:18.030435 ignition[881]: INFO : Ignition finished successfully Mar 17 18:41:18.031000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:18.035000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:18.021401 systemd[1]: Stopping ignition-mount.service... Mar 17 18:41:18.037000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:18.022131 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 17 18:41:18.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:18.022233 systemd[1]: Stopped kmod-static-nodes.service. Mar 17 18:41:18.024311 systemd[1]: Stopping sysroot-boot.service... Mar 17 18:41:18.026429 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 17 18:41:18.026571 systemd[1]: Stopped systemd-udev-trigger.service. 
Mar 17 18:41:18.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:18.044000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:18.028229 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 17 18:41:18.028369 systemd[1]: Stopped dracut-pre-trigger.service. Mar 17 18:41:18.031429 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 17 18:41:18.031514 systemd[1]: Stopped ignition-mount.service. Mar 17 18:41:18.032208 systemd[1]: Stopped target network.target. Mar 17 18:41:18.035319 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 17 18:41:18.035358 systemd[1]: Stopped ignition-disks.service. Mar 17 18:41:18.050000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:18.035849 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 17 18:41:18.035880 systemd[1]: Stopped ignition-kargs.service. Mar 17 18:41:18.037883 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 17 18:41:18.037939 systemd[1]: Stopped ignition-setup.service. Mar 17 18:41:18.039419 systemd[1]: Stopping systemd-networkd.service... Mar 17 18:41:18.056000 audit: BPF prog-id=6 op=UNLOAD Mar 17 18:41:18.042049 systemd[1]: Stopping systemd-resolved.service... Mar 17 18:41:18.042751 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 17 18:41:18.042821 systemd[1]: Finished initrd-cleanup.service. Mar 17 18:41:18.050273 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 17 18:41:18.050357 systemd[1]: Stopped systemd-resolved.service. Mar 17 18:41:18.053777 systemd-networkd[723]: eth0: DHCPv6 lease lost Mar 17 18:41:18.062596 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 17 18:41:18.063094 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 17 18:41:18.063000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:18.063196 systemd[1]: Stopped systemd-networkd.service. Mar 17 18:41:18.065050 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 17 18:41:18.065081 systemd[1]: Closed systemd-networkd.socket. Mar 17 18:41:18.098276 systemd[1]: Stopping network-cleanup.service... Mar 17 18:41:18.098000 audit: BPF prog-id=9 op=UNLOAD Mar 17 18:41:18.098682 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 17 18:41:18.100000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:18.098729 systemd[1]: Stopped parse-ip-for-networkd.service. Mar 17 18:41:18.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:18.100735 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Mar 17 18:41:18.103000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:18.100773 systemd[1]: Stopped systemd-sysctl.service. Mar 17 18:41:18.103592 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 17 18:41:18.103628 systemd[1]: Stopped systemd-modules-load.service. Mar 17 18:41:18.104270 systemd[1]: Stopping systemd-udevd.service... Mar 17 18:41:18.107476 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 17 18:41:18.110520 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 17 18:41:18.110000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:18.110607 systemd[1]: Stopped network-cleanup.service. Mar 17 18:41:18.114640 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 17 18:41:18.114754 systemd[1]: Stopped systemd-udevd.service. Mar 17 18:41:18.116000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:18.117192 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 17 18:41:18.117230 systemd[1]: Closed systemd-udevd-control.socket. Mar 17 18:41:18.118965 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 17 18:41:18.118994 systemd[1]: Closed systemd-udevd-kernel.socket. Mar 17 18:41:18.128000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:18.127466 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 17 18:41:18.128000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:18.127498 systemd[1]: Stopped dracut-pre-udev.service. Mar 17 18:41:18.131000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:18.129022 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 17 18:41:18.129055 systemd[1]: Stopped dracut-cmdline.service. Mar 17 18:41:18.129337 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 17 18:41:18.134000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:18.129365 systemd[1]: Stopped dracut-cmdline-ask.service. Mar 17 18:41:18.133210 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Mar 17 18:41:18.134822 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 18:41:18.134868 systemd[1]: Stopped systemd-vconsole-setup.service. Mar 17 18:41:18.145270 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 17 18:41:18.145347 systemd[1]: Finished initrd-udevadm-cleanup-db.service. 
Mar 17 18:41:18.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:18.144000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:18.222755 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 17 18:41:18.222841 systemd[1]: Stopped sysroot-boot.service. Mar 17 18:41:18.223000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:18.224553 systemd[1]: Reached target initrd-switch-root.target. Mar 17 18:41:18.226784 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 17 18:41:18.226827 systemd[1]: Stopped initrd-setup-root.service. Mar 17 18:41:18.227000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:18.229630 systemd[1]: Starting initrd-switch-root.service... Mar 17 18:41:18.247309 systemd[1]: Switching root. Mar 17 18:41:18.266756 iscsid[735]: iscsid shutting down. Mar 17 18:41:18.267539 systemd-journald[197]: Received SIGTERM from PID 1 (systemd). Mar 17 18:41:18.267570 systemd-journald[197]: Journal stopped Mar 17 18:41:20.941428 kernel: SELinux: Class mctp_socket not defined in policy. Mar 17 18:41:20.941496 kernel: SELinux: Class anon_inode not defined in policy. Mar 17 18:41:20.941513 kernel: SELinux: the above unknown classes and permissions will be allowed Mar 17 18:41:20.941528 kernel: SELinux: policy capability network_peer_controls=1 Mar 17 18:41:20.941548 kernel: SELinux: policy capability open_perms=1 Mar 17 18:41:20.941568 kernel: SELinux: policy capability extended_socket_class=1 Mar 17 18:41:20.941584 kernel: SELinux: policy capability always_check_network=0 Mar 17 18:41:20.941599 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 17 18:41:20.941613 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 17 18:41:20.941631 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 17 18:41:20.941645 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 17 18:41:20.941661 systemd[1]: Successfully loaded SELinux policy in 42.971ms. Mar 17 18:41:20.941692 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.634ms. Mar 17 18:41:20.941715 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Mar 17 18:41:20.941732 systemd[1]: Detected virtualization kvm. Mar 17 18:41:20.941748 systemd[1]: Detected architecture x86-64. Mar 17 18:41:20.941766 systemd[1]: Detected first boot. Mar 17 18:41:20.941786 systemd[1]: Initializing machine ID from VM UUID. Mar 17 18:41:20.941802 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Mar 17 18:41:20.941817 systemd[1]: Populated /etc with preset unit settings. 
Mar 17 18:41:20.941842 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 18:41:20.941861 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 18:41:20.941879 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 18:41:20.941896 systemd[1]: iscsiuio.service: Deactivated successfully. Mar 17 18:41:20.941928 systemd[1]: Stopped iscsiuio.service. Mar 17 18:41:20.941946 systemd[1]: iscsid.service: Deactivated successfully. Mar 17 18:41:20.941966 systemd[1]: Stopped iscsid.service. Mar 17 18:41:20.941986 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 17 18:41:20.942002 systemd[1]: Stopped initrd-switch-root.service. Mar 17 18:41:20.942018 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 17 18:41:20.942036 systemd[1]: Created slice system-addon\x2dconfig.slice. Mar 17 18:41:20.942052 systemd[1]: Created slice system-addon\x2drun.slice. Mar 17 18:41:20.942069 systemd[1]: Created slice system-getty.slice. Mar 17 18:41:20.942086 systemd[1]: Created slice system-modprobe.slice. Mar 17 18:41:20.942102 systemd[1]: Created slice system-serial\x2dgetty.slice. Mar 17 18:41:20.942118 systemd[1]: Created slice system-system\x2dcloudinit.slice. Mar 17 18:41:20.942133 systemd[1]: Created slice system-systemd\x2dfsck.slice. Mar 17 18:41:20.942149 systemd[1]: Created slice user.slice. Mar 17 18:41:20.942164 systemd[1]: Started systemd-ask-password-console.path. Mar 17 18:41:20.942183 systemd[1]: Started systemd-ask-password-wall.path. Mar 17 18:41:20.942200 systemd[1]: Set up automount boot.automount. Mar 17 18:41:20.942215 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Mar 17 18:41:20.942231 systemd[1]: Stopped target initrd-switch-root.target. Mar 17 18:41:20.942249 systemd[1]: Stopped target initrd-fs.target. Mar 17 18:41:20.942264 systemd[1]: Stopped target initrd-root-fs.target. Mar 17 18:41:20.942280 systemd[1]: Reached target integritysetup.target. Mar 17 18:41:20.942295 systemd[1]: Reached target remote-cryptsetup.target. Mar 17 18:41:20.942314 systemd[1]: Reached target remote-fs.target. Mar 17 18:41:20.942329 systemd[1]: Reached target slices.target. Mar 17 18:41:20.942343 systemd[1]: Reached target swap.target. Mar 17 18:41:20.942359 systemd[1]: Reached target torcx.target. Mar 17 18:41:20.942375 systemd[1]: Reached target veritysetup.target. Mar 17 18:41:20.942390 systemd[1]: Listening on systemd-coredump.socket. Mar 17 18:41:20.942406 systemd[1]: Listening on systemd-initctl.socket. Mar 17 18:41:20.942421 systemd[1]: Listening on systemd-networkd.socket. Mar 17 18:41:20.942436 systemd[1]: Listening on systemd-udevd-control.socket. Mar 17 18:41:20.942455 systemd[1]: Listening on systemd-udevd-kernel.socket. Mar 17 18:41:20.942473 systemd[1]: Listening on systemd-userdbd.socket. Mar 17 18:41:20.942491 systemd[1]: Mounting dev-hugepages.mount... Mar 17 18:41:20.942506 systemd[1]: Mounting dev-mqueue.mount... Mar 17 18:41:20.942521 systemd[1]: Mounting media.mount... Mar 17 18:41:20.942537 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). 
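The two locksmithd.service warnings above flag cgroup v1 directives that this systemd still translates but plans to drop. A hypothetical fix is to move the unit (or a drop-in) to the v2 equivalents; the values below are illustrative, noting that CPUShares= defaulted to 1024 on the old scale while CPUWeight= defaults to 100, and MemoryLimit= carries over to MemoryMax= unchanged:

    # /etc/systemd/system/locksmithd.service.d/10-cgroupv2.conf (hypothetical)
    [Service]
    CPUWeight=100
    MemoryMax=1G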
Mar 17 18:41:20.942553 systemd[1]: Mounting sys-kernel-debug.mount... Mar 17 18:41:20.942568 systemd[1]: Mounting sys-kernel-tracing.mount... Mar 17 18:41:20.942585 systemd[1]: Mounting tmp.mount... Mar 17 18:41:20.942604 systemd[1]: Starting flatcar-tmpfiles.service... Mar 17 18:41:20.942620 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:41:20.942636 systemd[1]: Starting kmod-static-nodes.service... Mar 17 18:41:20.942651 systemd[1]: Starting modprobe@configfs.service... Mar 17 18:41:20.942667 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:41:20.942682 systemd[1]: Starting modprobe@drm.service... Mar 17 18:41:20.942697 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:41:20.942713 systemd[1]: Starting modprobe@fuse.service... Mar 17 18:41:20.942728 systemd[1]: Starting modprobe@loop.service... Mar 17 18:41:20.942747 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 17 18:41:20.942763 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 17 18:41:20.942780 systemd[1]: Stopped systemd-fsck-root.service. Mar 17 18:41:20.942795 kernel: loop: module loaded Mar 17 18:41:20.942810 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 17 18:41:20.942826 systemd[1]: Stopped systemd-fsck-usr.service. Mar 17 18:41:20.942850 kernel: fuse: init (API version 7.34) Mar 17 18:41:20.942867 systemd[1]: Stopped systemd-journald.service. Mar 17 18:41:20.942882 systemd[1]: Starting systemd-journald.service... Mar 17 18:41:20.942901 systemd[1]: Starting systemd-modules-load.service... Mar 17 18:41:20.942931 systemd[1]: Starting systemd-network-generator.service... Mar 17 18:41:20.942951 systemd[1]: Starting systemd-remount-fs.service... Mar 17 18:41:20.942967 systemd[1]: Starting systemd-udev-trigger.service... Mar 17 18:41:20.942983 systemd[1]: verity-setup.service: Deactivated successfully. Mar 17 18:41:20.942999 systemd[1]: Stopped verity-setup.service. Mar 17 18:41:20.943016 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 18:41:20.943032 systemd[1]: Mounted dev-hugepages.mount. Mar 17 18:41:20.943050 systemd-journald[1003]: Journal started Mar 17 18:41:20.943108 systemd-journald[1003]: Runtime Journal (/run/log/journal/fd93f6f0d41442589f7072a21cb31917) is 6.0M, max 48.4M, 42.4M free. 
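The sizing line above describes the volatile journal under /run/log/journal, which journald by default caps at a fraction (10%) of the backing filesystem. Both the runtime and the persistent caps are tunable in journald.conf; an illustrative snippet with example values, not this system's settings:

    # /etc/systemd/journald.conf
    [Journal]
    RuntimeMaxUse=48M
    SystemMaxUse=200M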
Mar 17 18:41:18.329000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 17 18:41:18.687000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Mar 17 18:41:18.687000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Mar 17 18:41:18.687000 audit: BPF prog-id=10 op=LOAD Mar 17 18:41:18.687000 audit: BPF prog-id=10 op=UNLOAD Mar 17 18:41:18.687000 audit: BPF prog-id=11 op=LOAD Mar 17 18:41:18.687000 audit: BPF prog-id=11 op=UNLOAD Mar 17 18:41:18.715000 audit[914]: AVC avc: denied { associate } for pid=914 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Mar 17 18:41:18.715000 audit[914]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00018f8dc a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=897 pid=914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:41:18.715000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Mar 17 18:41:18.717000 audit[914]: AVC avc: denied { associate } for pid=914 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Mar 17 18:41:18.717000 audit[914]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00018f9b5 a2=1ed a3=0 items=2 ppid=897 pid=914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:41:18.717000 audit: CWD cwd="/" Mar 17 18:41:18.717000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:18.717000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:18.717000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Mar 17 18:41:20.800000 audit: BPF prog-id=12 op=LOAD Mar 17 18:41:20.800000 audit: BPF prog-id=3 op=UNLOAD Mar 17 18:41:20.800000 audit: BPF prog-id=13 op=LOAD Mar 17 18:41:20.800000 audit: BPF prog-id=14 op=LOAD Mar 17 18:41:20.800000 audit: BPF prog-id=4 op=UNLOAD Mar 17 18:41:20.800000 audit: BPF prog-id=5 op=UNLOAD Mar 17 18:41:20.801000 audit: BPF prog-id=15 op=LOAD Mar 17 18:41:20.801000 audit: BPF prog-id=12 op=UNLOAD Mar 17 
18:41:20.801000 audit: BPF prog-id=16 op=LOAD Mar 17 18:41:20.801000 audit: BPF prog-id=17 op=LOAD Mar 17 18:41:20.801000 audit: BPF prog-id=13 op=UNLOAD Mar 17 18:41:20.801000 audit: BPF prog-id=14 op=UNLOAD Mar 17 18:41:20.802000 audit: BPF prog-id=18 op=LOAD Mar 17 18:41:20.802000 audit: BPF prog-id=15 op=UNLOAD Mar 17 18:41:20.802000 audit: BPF prog-id=19 op=LOAD Mar 17 18:41:20.802000 audit: BPF prog-id=20 op=LOAD Mar 17 18:41:20.802000 audit: BPF prog-id=16 op=UNLOAD Mar 17 18:41:20.802000 audit: BPF prog-id=17 op=UNLOAD Mar 17 18:41:20.803000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:20.806000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:20.810000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:20.811000 audit: BPF prog-id=18 op=UNLOAD Mar 17 18:41:20.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:20.813000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:20.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:20.919000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:20.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:20.921000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:20.921000 audit: BPF prog-id=21 op=LOAD Mar 17 18:41:20.921000 audit: BPF prog-id=22 op=LOAD Mar 17 18:41:20.921000 audit: BPF prog-id=23 op=LOAD Mar 17 18:41:20.921000 audit: BPF prog-id=19 op=UNLOAD Mar 17 18:41:20.921000 audit: BPF prog-id=20 op=UNLOAD Mar 17 18:41:20.939000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:41:20.939000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Mar 17 18:41:20.939000 audit[1003]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffd7698fd10 a2=4000 a3=7ffd7698fdac items=0 ppid=1 pid=1003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:41:20.939000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Mar 17 18:41:20.799580 systemd[1]: Queued start job for default target multi-user.target. Mar 17 18:41:18.714974 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2025-03-17T18:41:18Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 18:41:20.799590 systemd[1]: Unnecessary job was removed for dev-vda6.device. Mar 17 18:41:18.715201 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2025-03-17T18:41:18Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Mar 17 18:41:20.804240 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 17 18:41:18.715223 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2025-03-17T18:41:18Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Mar 17 18:41:18.715260 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2025-03-17T18:41:18Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Mar 17 18:41:18.715273 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2025-03-17T18:41:18Z" level=debug msg="skipped missing lower profile" missing profile=oem Mar 17 18:41:18.715308 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2025-03-17T18:41:18Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Mar 17 18:41:18.715324 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2025-03-17T18:41:18Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Mar 17 18:41:18.715563 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2025-03-17T18:41:18Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Mar 17 18:41:20.945944 systemd[1]: Started systemd-journald.service. Mar 17 18:41:20.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:41:18.715597 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2025-03-17T18:41:18Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Mar 17 18:41:18.715608 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2025-03-17T18:41:18Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Mar 17 18:41:18.715878 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2025-03-17T18:41:18Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Mar 17 18:41:20.946202 systemd[1]: Mounted dev-mqueue.mount. Mar 17 18:41:18.715933 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2025-03-17T18:41:18Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Mar 17 18:41:18.715956 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2025-03-17T18:41:18Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7 Mar 17 18:41:18.715969 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2025-03-17T18:41:18Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Mar 17 18:41:18.715983 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2025-03-17T18:41:18Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7 Mar 17 18:41:18.715996 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2025-03-17T18:41:18Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Mar 17 18:41:20.524616 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2025-03-17T18:41:20Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 18:41:20.524899 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2025-03-17T18:41:20Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 18:41:20.525016 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2025-03-17T18:41:20Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 18:41:20.525174 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2025-03-17T18:41:20Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 18:41:20.525223 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2025-03-17T18:41:20Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Mar 17 18:41:20.525276 
/usr/lib/systemd/system-generators/torcx-generator[914]: time="2025-03-17T18:41:20Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Mar 17 18:41:20.947185 systemd[1]: Mounted media.mount. Mar 17 18:41:20.947946 systemd[1]: Mounted sys-kernel-debug.mount. Mar 17 18:41:20.948807 systemd[1]: Mounted sys-kernel-tracing.mount. Mar 17 18:41:20.949689 systemd[1]: Mounted tmp.mount. Mar 17 18:41:20.950660 systemd[1]: Finished flatcar-tmpfiles.service. Mar 17 18:41:20.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:20.951753 systemd[1]: Finished kmod-static-nodes.service. Mar 17 18:41:20.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:20.952804 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 17 18:41:20.953114 systemd[1]: Finished modprobe@configfs.service. Mar 17 18:41:20.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:20.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:20.954206 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:41:20.954405 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:41:20.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:20.954000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:20.955449 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 18:41:20.955620 systemd[1]: Finished modprobe@drm.service. Mar 17 18:41:20.955000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:20.955000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:20.956619 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:41:20.956823 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:41:20.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:41:20.956000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:20.957957 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 17 18:41:20.958121 systemd[1]: Finished modprobe@fuse.service. Mar 17 18:41:20.957000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:20.958000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:20.959132 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:41:20.959292 systemd[1]: Finished modprobe@loop.service. Mar 17 18:41:20.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:20.959000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:20.960387 systemd[1]: Finished systemd-modules-load.service. Mar 17 18:41:20.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:20.961708 systemd[1]: Finished systemd-network-generator.service. Mar 17 18:41:20.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:20.962960 systemd[1]: Finished systemd-remount-fs.service. Mar 17 18:41:20.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:20.964372 systemd[1]: Reached target network-pre.target. Mar 17 18:41:20.966447 systemd[1]: Mounting sys-fs-fuse-connections.mount... Mar 17 18:41:20.968679 systemd[1]: Mounting sys-kernel-config.mount... Mar 17 18:41:20.969794 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 17 18:41:20.971253 systemd[1]: Starting systemd-hwdb-update.service... Mar 17 18:41:20.973072 systemd[1]: Starting systemd-journal-flush.service... Mar 17 18:41:20.974137 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:41:20.975107 systemd[1]: Starting systemd-random-seed.service... Mar 17 18:41:20.976103 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 18:41:20.977145 systemd[1]: Starting systemd-sysctl.service... Mar 17 18:41:20.979526 systemd-journald[1003]: Time spent on flushing to /var/log/journal/fd93f6f0d41442589f7072a21cb31917 is 14.153ms for 1171 entries. 
Mar 17 18:41:20.979526 systemd-journald[1003]: System Journal (/var/log/journal/fd93f6f0d41442589f7072a21cb31917) is 8.0M, max 195.6M, 187.6M free. Mar 17 18:41:21.015719 systemd-journald[1003]: Received client request to flush runtime journal. Mar 17 18:41:20.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:20.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:20.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:20.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:20.979318 systemd[1]: Starting systemd-sysusers.service... Mar 17 18:41:20.983318 systemd[1]: Mounted sys-fs-fuse-connections.mount. Mar 17 18:41:21.016720 udevadm[1018]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Mar 17 18:41:20.984513 systemd[1]: Mounted sys-kernel-config.mount. Mar 17 18:41:20.986337 systemd[1]: Finished systemd-udev-trigger.service. Mar 17 18:41:20.988061 systemd[1]: Finished systemd-random-seed.service. Mar 17 18:41:20.989268 systemd[1]: Reached target first-boot-complete.target. Mar 17 18:41:20.991445 systemd[1]: Starting systemd-udev-settle.service... Mar 17 18:41:20.992485 systemd[1]: Finished systemd-sysctl.service. Mar 17 18:41:20.998201 systemd[1]: Finished systemd-sysusers.service. Mar 17 18:41:21.016432 systemd[1]: Finished systemd-journal-flush.service. Mar 17 18:41:21.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:21.423302 systemd[1]: Finished systemd-hwdb-update.service. Mar 17 18:41:21.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:21.424000 audit: BPF prog-id=24 op=LOAD Mar 17 18:41:21.424000 audit: BPF prog-id=25 op=LOAD Mar 17 18:41:21.424000 audit: BPF prog-id=7 op=UNLOAD Mar 17 18:41:21.424000 audit: BPF prog-id=8 op=UNLOAD Mar 17 18:41:21.425871 systemd[1]: Starting systemd-udevd.service... Mar 17 18:41:21.441706 systemd-udevd[1020]: Using default interface naming scheme 'v252'. Mar 17 18:41:21.456587 systemd[1]: Started systemd-udevd.service. Mar 17 18:41:21.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:21.459000 audit: BPF prog-id=26 op=LOAD Mar 17 18:41:21.461520 systemd[1]: Starting systemd-networkd.service... 
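[Annotation] The torcx-generator records above walk through profile resolution: the vendor profile is selected, the docker archive is unpacked into /run/torcx, binaries and unit files are propagated, and the sealed state is written to /run/metadata/torcx (TORCX_LOWER_PROFILES, TORCX_PROFILE_PATH, TORCX_BINDIR, TORCX_UNPACKDIR). For orientation, a torcx profile is a small JSON manifest naming image/reference pairs; a minimal sketch of what /usr/share/torcx/profiles/vendor.json plausibly contains, assuming the upstream profile-manifest-v0 format rather than anything read from this host:

    {
      "kind": "profile-manifest-v0",
      "value": {
        "images": [
          { "name": "docker", "reference": "com.coreos.cl" }
        ]
      }
    }

The "reference" here would match the store archive the generator cached above (/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz).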
Mar 17 18:41:21.466000 audit: BPF prog-id=27 op=LOAD Mar 17 18:41:21.466000 audit: BPF prog-id=28 op=LOAD Mar 17 18:41:21.466000 audit: BPF prog-id=29 op=LOAD Mar 17 18:41:21.468058 systemd[1]: Starting systemd-userdbd.service... Mar 17 18:41:21.493512 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Mar 17 18:41:21.497005 systemd[1]: Started systemd-userdbd.service. Mar 17 18:41:21.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:21.518492 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Mar 17 18:41:21.540946 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Mar 17 18:41:21.543633 systemd-networkd[1031]: lo: Link UP Mar 17 18:41:21.543643 systemd-networkd[1031]: lo: Gained carrier Mar 17 18:41:21.544064 systemd-networkd[1031]: Enumeration completed Mar 17 18:41:21.544158 systemd[1]: Started systemd-networkd.service. Mar 17 18:41:21.544172 systemd-networkd[1031]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 18:41:21.544000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:21.545442 systemd-networkd[1031]: eth0: Link UP Mar 17 18:41:21.545449 systemd-networkd[1031]: eth0: Gained carrier Mar 17 18:41:21.546943 kernel: ACPI: button: Power Button [PWRF] Mar 17 18:41:21.549000 audit[1029]: AVC avc: denied { confidentiality } for pid=1029 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Mar 17 18:41:21.549000 audit[1029]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55eaab596040 a1=338ac a2=7f1d16589bc5 a3=5 items=110 ppid=1020 pid=1029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:41:21.549000 audit: CWD cwd="/" Mar 17 18:41:21.549000 audit: PATH item=0 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=1 name=(null) inode=15488 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=2 name=(null) inode=15488 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=3 name=(null) inode=15489 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=4 name=(null) inode=15488 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=5 name=(null) inode=15490 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=6 name=(null) inode=15488 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=7 name=(null) inode=15491 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=8 name=(null) inode=15491 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=9 name=(null) inode=15492 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=10 name=(null) inode=15491 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=11 name=(null) inode=15493 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=12 name=(null) inode=15491 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=13 name=(null) inode=15494 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=14 name=(null) inode=15491 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=15 name=(null) inode=15495 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=16 name=(null) inode=15491 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=17 name=(null) inode=15496 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=18 name=(null) inode=15488 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=19 name=(null) inode=15497 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=20 name=(null) inode=15497 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=21 name=(null) inode=15498 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=22 name=(null) inode=15497 
dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=23 name=(null) inode=15499 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=24 name=(null) inode=15497 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=25 name=(null) inode=15500 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=26 name=(null) inode=15497 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=27 name=(null) inode=15501 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=28 name=(null) inode=15497 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=29 name=(null) inode=15502 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=30 name=(null) inode=15488 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=31 name=(null) inode=15503 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=32 name=(null) inode=15503 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=33 name=(null) inode=15504 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=34 name=(null) inode=15503 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=35 name=(null) inode=15505 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=36 name=(null) inode=15503 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=37 name=(null) inode=15506 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=38 name=(null) inode=15503 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=39 name=(null) inode=15507 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=40 name=(null) inode=15503 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=41 name=(null) inode=15508 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=42 name=(null) inode=15488 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=43 name=(null) inode=15509 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=44 name=(null) inode=15509 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=45 name=(null) inode=15510 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=46 name=(null) inode=15509 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=47 name=(null) inode=15511 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=48 name=(null) inode=15509 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=49 name=(null) inode=15512 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=50 name=(null) inode=15509 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=51 name=(null) inode=15513 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=52 name=(null) inode=15509 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=53 name=(null) inode=15514 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=54 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=55 name=(null) inode=15515 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=56 name=(null) inode=15515 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=57 name=(null) inode=15516 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=58 name=(null) inode=15515 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=59 name=(null) inode=15517 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=60 name=(null) inode=15515 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=61 name=(null) inode=15518 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=62 name=(null) inode=15518 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=63 name=(null) inode=15519 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=64 name=(null) inode=15518 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=65 name=(null) inode=15520 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=66 name=(null) inode=15518 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=67 name=(null) inode=15521 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=68 name=(null) inode=15518 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=69 name=(null) inode=15522 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=70 name=(null) inode=15518 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=71 
name=(null) inode=15523 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=72 name=(null) inode=15515 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=73 name=(null) inode=15524 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=74 name=(null) inode=15524 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=75 name=(null) inode=15525 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=76 name=(null) inode=15524 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=77 name=(null) inode=15526 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=78 name=(null) inode=15524 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=79 name=(null) inode=15527 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=80 name=(null) inode=15524 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=81 name=(null) inode=15528 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=82 name=(null) inode=15524 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=83 name=(null) inode=15529 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=84 name=(null) inode=15515 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=85 name=(null) inode=15530 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=86 name=(null) inode=15530 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=87 name=(null) inode=15531 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=88 name=(null) inode=15530 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=89 name=(null) inode=15532 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=90 name=(null) inode=15530 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=91 name=(null) inode=15533 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=92 name=(null) inode=15530 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=93 name=(null) inode=15534 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=94 name=(null) inode=15530 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=95 name=(null) inode=15535 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=96 name=(null) inode=15515 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=97 name=(null) inode=15536 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=98 name=(null) inode=15536 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=99 name=(null) inode=15537 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=100 name=(null) inode=15536 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=101 name=(null) inode=15538 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=102 name=(null) inode=15536 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=103 name=(null) inode=15539 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=104 name=(null) inode=15536 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=105 name=(null) inode=15540 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=106 name=(null) inode=15536 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=107 name=(null) inode=15541 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PATH item=109 name=(null) inode=15542 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:21.549000 audit: PROCTITLE proctitle="(udev-worker)" Mar 17 18:41:21.560870 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Mar 17 18:41:21.564619 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 17 18:41:21.564744 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Mar 17 18:41:21.564766 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Mar 17 18:41:21.565014 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 17 18:41:21.562085 systemd-networkd[1031]: eth0: DHCPv4 address 10.0.0.88/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 17 18:41:21.594957 kernel: mousedev: PS/2 mouse device common for all mice Mar 17 18:41:21.637447 kernel: kvm: Nested Virtualization enabled Mar 17 18:41:21.637533 kernel: SVM: kvm: Nested Paging enabled Mar 17 18:41:21.638280 kernel: SVM: Virtual VMLOAD VMSAVE supported Mar 17 18:41:21.638318 kernel: SVM: Virtual GIF supported Mar 17 18:41:21.652946 kernel: EDAC MC: Ver: 3.0.0 Mar 17 18:41:21.683261 systemd[1]: Finished systemd-udev-settle.service. Mar 17 18:41:21.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:21.685265 systemd[1]: Starting lvm2-activation-early.service... Mar 17 18:41:21.691886 lvm[1055]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 18:41:21.713525 systemd[1]: Finished lvm2-activation-early.service. Mar 17 18:41:21.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:21.714612 systemd[1]: Reached target cryptsetup.target. Mar 17 18:41:21.716399 systemd[1]: Starting lvm2-activation.service... Mar 17 18:41:21.719346 lvm[1056]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 18:41:21.745594 systemd[1]: Finished lvm2-activation.service. 
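[Annotation] The networkd records above show eth0 being matched by /usr/lib/systemd/network/zz-default.network and then acquiring 10.0.0.88/16 from the 10.0.0.1 DHCP server. Flatcar's catch-all default is essentially a match-everything DHCP unit; a minimal sketch of such a file (contents assumed, not dumped from this image):

    [Match]
    Name=*

    [Network]
    DHCP=yes

Because networkd applies the first matching .network file in lexical order, the zz- prefix deliberately sorts this fallback last, so any more specific unit installed on the system wins for a given interface.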
Mar 17 18:41:21.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:21.746613 systemd[1]: Reached target local-fs-pre.target. Mar 17 18:41:21.747518 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 17 18:41:21.747540 systemd[1]: Reached target local-fs.target. Mar 17 18:41:21.748380 systemd[1]: Reached target machines.target. Mar 17 18:41:21.750164 systemd[1]: Starting ldconfig.service... Mar 17 18:41:21.751249 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:41:21.751298 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:41:21.752228 systemd[1]: Starting systemd-boot-update.service... Mar 17 18:41:21.754432 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Mar 17 18:41:21.757161 systemd[1]: Starting systemd-machine-id-commit.service... Mar 17 18:41:21.759631 systemd[1]: Starting systemd-sysext.service... Mar 17 18:41:21.761223 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1058 (bootctl) Mar 17 18:41:21.761624 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Mar 17 18:41:21.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:21.766361 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Mar 17 18:41:21.773201 systemd[1]: Unmounting usr-share-oem.mount... Mar 17 18:41:21.776905 systemd[1]: usr-share-oem.mount: Deactivated successfully. Mar 17 18:41:21.777176 systemd[1]: Unmounted usr-share-oem.mount. Mar 17 18:41:21.787938 kernel: loop0: detected capacity change from 0 to 218376 Mar 17 18:41:21.798411 systemd-fsck[1066]: fsck.fat 4.2 (2021-01-31) Mar 17 18:41:21.798411 systemd-fsck[1066]: /dev/vda1: 790 files, 119319/258078 clusters Mar 17 18:41:21.799795 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Mar 17 18:41:21.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:21.802438 systemd[1]: Mounting boot.mount... Mar 17 18:41:22.017434 systemd[1]: Mounted boot.mount. Mar 17 18:41:22.023961 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 17 18:41:22.027879 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 17 18:41:22.028510 systemd[1]: Finished systemd-machine-id-commit.service. Mar 17 18:41:22.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:22.032375 systemd[1]: Finished systemd-boot-update.service. 
Mar 17 18:41:22.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:22.039942 kernel: loop1: detected capacity change from 0 to 218376 Mar 17 18:41:22.043759 (sd-sysext)[1071]: Using extensions 'kubernetes'. Mar 17 18:41:22.044159 (sd-sysext)[1071]: Merged extensions into '/usr'. Mar 17 18:41:22.059644 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 18:41:22.061041 systemd[1]: Mounting usr-share-oem.mount... Mar 17 18:41:22.062097 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:41:22.063330 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:41:22.065448 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:41:22.067322 systemd[1]: Starting modprobe@loop.service... Mar 17 18:41:22.068187 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:41:22.068297 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:41:22.068399 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 18:41:22.070898 systemd[1]: Mounted usr-share-oem.mount. Mar 17 18:41:22.072103 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:41:22.072221 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:41:22.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:22.072000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:22.073595 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:41:22.073704 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:41:22.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:22.073000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:22.075088 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:41:22.075193 systemd[1]: Finished modprobe@loop.service. Mar 17 18:41:22.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:22.075000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:41:22.076606 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:41:22.076774 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 18:41:22.077623 systemd[1]: Finished systemd-sysext.service. Mar 17 18:41:22.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:22.079810 systemd[1]: Starting ensure-sysext.service... Mar 17 18:41:22.081588 systemd[1]: Starting systemd-tmpfiles-setup.service... Mar 17 18:41:22.086247 systemd[1]: Reloading. Mar 17 18:41:22.091891 ldconfig[1057]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 17 18:41:22.093338 systemd-tmpfiles[1078]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Mar 17 18:41:22.096206 systemd-tmpfiles[1078]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 17 18:41:22.099696 systemd-tmpfiles[1078]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 17 18:41:22.134342 /usr/lib/systemd/system-generators/torcx-generator[1098]: time="2025-03-17T18:41:22Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 18:41:22.134376 /usr/lib/systemd/system-generators/torcx-generator[1098]: time="2025-03-17T18:41:22Z" level=info msg="torcx already run" Mar 17 18:41:22.194188 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 18:41:22.194207 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 18:41:22.210955 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 18:41:22.261000 audit: BPF prog-id=30 op=LOAD Mar 17 18:41:22.261000 audit: BPF prog-id=21 op=UNLOAD Mar 17 18:41:22.261000 audit: BPF prog-id=31 op=LOAD Mar 17 18:41:22.262000 audit: BPF prog-id=32 op=LOAD Mar 17 18:41:22.262000 audit: BPF prog-id=22 op=UNLOAD Mar 17 18:41:22.262000 audit: BPF prog-id=23 op=UNLOAD Mar 17 18:41:22.263000 audit: BPF prog-id=33 op=LOAD Mar 17 18:41:22.263000 audit: BPF prog-id=34 op=LOAD Mar 17 18:41:22.263000 audit: BPF prog-id=24 op=UNLOAD Mar 17 18:41:22.263000 audit: BPF prog-id=25 op=UNLOAD Mar 17 18:41:22.264000 audit: BPF prog-id=35 op=LOAD Mar 17 18:41:22.264000 audit: BPF prog-id=26 op=UNLOAD Mar 17 18:41:22.265000 audit: BPF prog-id=36 op=LOAD Mar 17 18:41:22.265000 audit: BPF prog-id=27 op=UNLOAD Mar 17 18:41:22.265000 audit: BPF prog-id=37 op=LOAD Mar 17 18:41:22.265000 audit: BPF prog-id=38 op=LOAD Mar 17 18:41:22.265000 audit: BPF prog-id=28 op=UNLOAD Mar 17 18:41:22.265000 audit: BPF prog-id=29 op=UNLOAD Mar 17 18:41:22.268691 systemd[1]: Finished ldconfig.service. 
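[Annotation] The (sd-sysext) records above show systemd-sysext finding a 'kubernetes' system extension and overlay-merging it into /usr; the loop0/loop1 capacity changes and the squashfs banner are the extension images being attached. As a sketch of the on-disk layout this implies (the search paths are the stock ones for systemd 252; the file name is an assumption):

    /etc/extensions/kubernetes.raw    # or /run/extensions, /var/lib/extensions;
                                      # a squashfs or GPT image carrying a /usr tree

    $ systemd-sysext status           # lists merged hierarchies and the images backing them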
Mar 17 18:41:22.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:22.270536 systemd[1]: Finished systemd-tmpfiles-setup.service. Mar 17 18:41:22.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:22.273992 systemd[1]: Starting audit-rules.service... Mar 17 18:41:22.275667 systemd[1]: Starting clean-ca-certificates.service... Mar 17 18:41:22.277885 systemd[1]: Starting systemd-journal-catalog-update.service... Mar 17 18:41:22.278000 audit: BPF prog-id=39 op=LOAD Mar 17 18:41:22.281000 audit: BPF prog-id=40 op=LOAD Mar 17 18:41:22.280423 systemd[1]: Starting systemd-resolved.service... Mar 17 18:41:22.282802 systemd[1]: Starting systemd-timesyncd.service... Mar 17 18:41:22.284552 systemd[1]: Starting systemd-update-utmp.service... Mar 17 18:41:22.286386 systemd[1]: Finished clean-ca-certificates.service. Mar 17 18:41:22.286000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:22.289190 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 18:41:22.290000 audit[1151]: SYSTEM_BOOT pid=1151 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Mar 17 18:41:22.293243 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 18:41:22.293461 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:41:22.294859 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:41:22.296709 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:41:22.298979 systemd[1]: Starting modprobe@loop.service... Mar 17 18:41:22.300464 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:41:22.300632 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:41:22.300775 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 18:41:22.300909 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 18:41:22.302328 systemd[1]: Finished systemd-journal-catalog-update.service. Mar 17 18:41:22.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:22.304007 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:41:22.304116 systemd[1]: Finished modprobe@dm_mod.service. 
Mar 17 18:41:22.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:22.304000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:22.305510 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:41:22.305606 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:41:22.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:22.305000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:22.307207 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:41:22.307308 systemd[1]: Finished modprobe@loop.service. Mar 17 18:41:22.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:22.307000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:22.309338 systemd[1]: Finished systemd-update-utmp.service. Mar 17 18:41:22.309000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Mar 17 18:41:22.309000 audit[1163]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff21d7f870 a2=420 a3=0 items=0 ppid=1140 pid=1163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:41:22.309000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Mar 17 18:41:22.310217 augenrules[1163]: No rules Mar 17 18:41:22.310710 systemd[1]: Finished audit-rules.service. Mar 17 18:41:22.312518 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:41:22.312609 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 18:41:22.313775 systemd[1]: Starting systemd-update-done.service... Mar 17 18:41:22.316283 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 18:41:22.316455 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:41:22.317480 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:41:22.319212 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:41:22.320957 systemd[1]: Starting modprobe@loop.service... Mar 17 18:41:22.321737 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
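[Annotation] The auditctl record above carries its command line as hex because PROCTITLE encodes the NUL-separated argv; decoded, it reads /sbin/auditctl -R /etc/audit/audit.rules, which squares with augenrules reporting "No rules" (an empty rule set was loaded before audit-rules.service finished). One way to decode such a field, as a sketch:

    $ echo 2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 \
        | xxd -r -p | tr '\0' ' '
    /sbin/auditctl -R /etc/audit/audit.rules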
Mar 17 18:41:22.321844 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:41:22.321934 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 18:41:22.321996 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 18:41:22.322903 systemd[1]: Finished systemd-update-done.service. Mar 17 18:41:22.324419 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:41:22.324559 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:41:22.326040 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:41:22.326209 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:41:22.327628 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:41:22.327744 systemd[1]: Finished modprobe@loop.service. Mar 17 18:41:22.328975 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:41:22.329056 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 18:41:22.331454 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 18:41:22.331652 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:41:22.332749 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:41:22.334582 systemd[1]: Starting modprobe@drm.service... Mar 17 18:41:22.336244 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:41:22.337907 systemd[1]: Starting modprobe@loop.service... Mar 17 18:41:22.338984 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:41:22.802284 systemd-timesyncd[1150]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 17 18:41:22.802303 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:41:22.802330 systemd-timesyncd[1150]: Initial clock synchronization to Mon 2025-03-17 18:41:22.802217 UTC. Mar 17 18:41:22.803314 systemd[1]: Starting systemd-networkd-wait-online.service... Mar 17 18:41:22.803481 systemd-resolved[1147]: Positive Trust Anchors: Mar 17 18:41:22.803489 systemd-resolved[1147]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 18:41:22.803531 systemd-resolved[1147]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Mar 17 18:41:22.804523 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 18:41:22.804615 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Mar 17 18:41:22.805443 systemd[1]: Started systemd-timesyncd.service. Mar 17 18:41:22.807216 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:41:22.807324 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:41:22.808612 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 18:41:22.808710 systemd[1]: Finished modprobe@drm.service. Mar 17 18:41:22.809936 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:41:22.810046 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:41:22.811318 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:41:22.811413 systemd[1]: Finished modprobe@loop.service. Mar 17 18:41:22.811419 systemd-resolved[1147]: Defaulting to hostname 'linux'. Mar 17 18:41:22.812790 systemd[1]: Reached target time-set.target. Mar 17 18:41:22.814123 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:41:22.814165 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 18:41:22.814263 systemd[1]: Started systemd-resolved.service. Mar 17 18:41:22.815341 systemd[1]: Finished ensure-sysext.service. Mar 17 18:41:22.816940 systemd[1]: Reached target network.target. Mar 17 18:41:22.817763 systemd[1]: Reached target nss-lookup.target. Mar 17 18:41:22.818651 systemd[1]: Reached target sysinit.target. Mar 17 18:41:22.819584 systemd[1]: Started motdgen.path. Mar 17 18:41:22.820337 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Mar 17 18:41:22.821586 systemd[1]: Started logrotate.timer. Mar 17 18:41:22.822431 systemd[1]: Started mdadm.timer. Mar 17 18:41:22.823159 systemd[1]: Started systemd-tmpfiles-clean.timer. Mar 17 18:41:22.824029 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 17 18:41:22.824050 systemd[1]: Reached target paths.target. Mar 17 18:41:22.824866 systemd[1]: Reached target timers.target. Mar 17 18:41:22.825913 systemd[1]: Listening on dbus.socket. Mar 17 18:41:22.827536 systemd[1]: Starting docker.socket... Mar 17 18:41:22.830002 systemd[1]: Listening on sshd.socket. Mar 17 18:41:22.830902 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:41:22.831239 systemd[1]: Listening on docker.socket. Mar 17 18:41:22.832090 systemd[1]: Reached target sockets.target. Mar 17 18:41:22.832942 systemd[1]: Reached target basic.target. Mar 17 18:41:22.833762 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Mar 17 18:41:22.833786 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Mar 17 18:41:22.834543 systemd[1]: Starting containerd.service... Mar 17 18:41:22.836035 systemd[1]: Starting dbus.service... Mar 17 18:41:22.837630 systemd[1]: Starting enable-oem-cloudinit.service... Mar 17 18:41:22.839411 systemd[1]: Starting extend-filesystems.service... Mar 17 18:41:22.840474 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Mar 17 18:41:22.847966 jq[1182]: false Mar 17 18:41:22.841382 systemd[1]: Starting motdgen.service... Mar 17 18:41:22.842904 systemd[1]: Starting prepare-helm.service... 
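The path unit name user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path above illustrates systemd's unit-name escaping: '/' separators become '-', and a literal '-' inside a path component is hex-escaped as \x2d. The mapping can be reproduced with systemd-escape, which is part of systemd itself:

    $ systemd-escape --path /var/lib/flatcar-install/user_data
    var-lib-flatcar\x2dinstall-user_data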
Mar 17 18:41:22.844638 systemd[1]: Starting ssh-key-proc-cmdline.service... Mar 17 18:41:22.846486 systemd[1]: Starting sshd-keygen.service... Mar 17 18:41:22.849536 systemd[1]: Starting systemd-logind.service... Mar 17 18:41:22.860237 extend-filesystems[1183]: Found loop1 Mar 17 18:41:22.860237 extend-filesystems[1183]: Found sr0 Mar 17 18:41:22.860237 extend-filesystems[1183]: Found vda Mar 17 18:41:22.860237 extend-filesystems[1183]: Found vda1 Mar 17 18:41:22.860237 extend-filesystems[1183]: Found vda2 Mar 17 18:41:22.860237 extend-filesystems[1183]: Found vda3 Mar 17 18:41:22.860237 extend-filesystems[1183]: Found usr Mar 17 18:41:22.860237 extend-filesystems[1183]: Found vda4 Mar 17 18:41:22.860237 extend-filesystems[1183]: Found vda6 Mar 17 18:41:22.860237 extend-filesystems[1183]: Found vda7 Mar 17 18:41:22.860237 extend-filesystems[1183]: Found vda9 Mar 17 18:41:22.860237 extend-filesystems[1183]: Checking size of /dev/vda9 Mar 17 18:41:22.904665 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 17 18:41:22.851056 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:41:22.904909 extend-filesystems[1183]: Resized partition /dev/vda9 Mar 17 18:41:22.872278 dbus-daemon[1181]: [system] SELinux support is enabled Mar 17 18:41:22.909744 update_engine[1196]: I0317 18:41:22.885424 1196 main.cc:92] Flatcar Update Engine starting Mar 17 18:41:22.909744 update_engine[1196]: I0317 18:41:22.893537 1196 update_check_scheduler.cc:74] Next update check in 7m33s Mar 17 18:41:22.851117 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 17 18:41:22.910096 extend-filesystems[1221]: resize2fs 1.46.5 (30-Dec-2021) Mar 17 18:41:22.913962 jq[1198]: true Mar 17 18:41:22.851487 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 17 18:41:22.914258 tar[1204]: linux-amd64/LICENSE Mar 17 18:41:22.914258 tar[1204]: linux-amd64/helm Mar 17 18:41:22.852212 systemd[1]: Starting update-engine.service... Mar 17 18:41:22.914524 jq[1207]: true Mar 17 18:41:22.853960 systemd[1]: Starting update-ssh-keys-after-ignition.service... Mar 17 18:41:22.858696 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 17 18:41:22.915046 env[1208]: time="2025-03-17T18:41:22.905105068Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Mar 17 18:41:22.858846 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Mar 17 18:41:22.859659 systemd[1]: motdgen.service: Deactivated successfully. Mar 17 18:41:22.859800 systemd[1]: Finished motdgen.service. Mar 17 18:41:22.860910 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 17 18:41:22.861045 systemd[1]: Finished ssh-key-proc-cmdline.service. Mar 17 18:41:22.872419 systemd[1]: Started dbus.service. Mar 17 18:41:22.874915 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 17 18:41:22.874933 systemd[1]: Reached target system-config.target. Mar 17 18:41:22.875852 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
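The EXT4 resize the kernel reports here grows the root filesystem from 553472 to 1864699 blocks of 4 KiB, i.e. from roughly 2.1 GiB (553472 * 4096 bytes) to roughly 7.1 GiB (1864699 * 4096 bytes), filling the /dev/vda9 partition. extend-filesystems.service is automating the same online grow (with the resize2fs 1.46.5 seen above) that could be done by hand, as a sketch:

    # online resize of the mounted root filesystem to fill its partition
    $ resize2fs /dev/vda9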
Mar 17 18:41:22.875869 systemd[1]: Reached target user-config.target. Mar 17 18:41:22.895058 systemd[1]: Started update-engine.service. Mar 17 18:41:22.897667 systemd[1]: Started locksmithd.service. Mar 17 18:41:22.920802 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 17 18:41:22.935067 env[1208]: time="2025-03-17T18:41:22.935001896Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 17 18:41:22.945080 extend-filesystems[1221]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 17 18:41:22.945080 extend-filesystems[1221]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 17 18:41:22.945080 extend-filesystems[1221]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 17 18:41:22.952832 extend-filesystems[1183]: Resized filesystem in /dev/vda9 Mar 17 18:41:22.945159 systemd-logind[1193]: Watching system buttons on /dev/input/event1 (Power Button) Mar 17 18:41:22.954441 bash[1233]: Updated "/home/core/.ssh/authorized_keys" Mar 17 18:41:22.945178 systemd-logind[1193]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 17 18:41:22.945902 systemd-logind[1193]: New seat seat0. Mar 17 18:41:22.946061 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 17 18:41:22.946289 systemd[1]: Finished extend-filesystems.service. Mar 17 18:41:22.947889 systemd[1]: Started systemd-logind.service. Mar 17 18:41:22.954989 systemd[1]: Finished update-ssh-keys-after-ignition.service. Mar 17 18:41:22.955816 env[1208]: time="2025-03-17T18:41:22.944963888Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 17 18:41:22.957496 env[1208]: time="2025-03-17T18:41:22.957467336Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.179-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 17 18:41:22.957575 env[1208]: time="2025-03-17T18:41:22.957556013Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 17 18:41:22.957884 env[1208]: time="2025-03-17T18:41:22.957864150Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 18:41:22.957962 env[1208]: time="2025-03-17T18:41:22.957942808Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 17 18:41:22.958058 env[1208]: time="2025-03-17T18:41:22.958036944Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Mar 17 18:41:22.958176 env[1208]: time="2025-03-17T18:41:22.958121172Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 17 18:41:22.958315 env[1208]: time="2025-03-17T18:41:22.958297403Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 17 18:41:22.958598 env[1208]: time="2025-03-17T18:41:22.958575525Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Mar 17 18:41:22.958818 env[1208]: time="2025-03-17T18:41:22.958791139Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 18:41:22.958908 env[1208]: time="2025-03-17T18:41:22.958884574Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 17 18:41:22.959056 env[1208]: time="2025-03-17T18:41:22.959035337Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Mar 17 18:41:22.959159 env[1208]: time="2025-03-17T18:41:22.959123863Z" level=info msg="metadata content store policy set" policy=shared Mar 17 18:41:22.964693 env[1208]: time="2025-03-17T18:41:22.964658442Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 17 18:41:22.964805 env[1208]: time="2025-03-17T18:41:22.964783527Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 17 18:41:22.964893 env[1208]: time="2025-03-17T18:41:22.964871902Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 17 18:41:22.964996 env[1208]: time="2025-03-17T18:41:22.964976148Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 17 18:41:22.965132 env[1208]: time="2025-03-17T18:41:22.965113626Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 17 18:41:22.965251 env[1208]: time="2025-03-17T18:41:22.965229974Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 17 18:41:22.965343 env[1208]: time="2025-03-17T18:41:22.965322818Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 17 18:41:22.965431 env[1208]: time="2025-03-17T18:41:22.965411775Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 17 18:41:22.965521 env[1208]: time="2025-03-17T18:41:22.965501964Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Mar 17 18:41:22.965609 env[1208]: time="2025-03-17T18:41:22.965587284Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 17 18:41:22.965697 env[1208]: time="2025-03-17T18:41:22.965675680Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 17 18:41:22.965784 env[1208]: time="2025-03-17T18:41:22.965763414Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 17 18:41:22.965951 env[1208]: time="2025-03-17T18:41:22.965933303Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 17 18:41:22.966112 env[1208]: time="2025-03-17T18:41:22.966091510Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 17 18:41:22.966497 env[1208]: time="2025-03-17T18:41:22.966472534Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Mar 17 18:41:22.966608 env[1208]: time="2025-03-17T18:41:22.966586598Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 17 18:41:22.966703 env[1208]: time="2025-03-17T18:41:22.966680124Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 17 18:41:22.966804 locksmithd[1226]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 17 18:41:22.967098 env[1208]: time="2025-03-17T18:41:22.967077208Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 17 18:41:22.967219 env[1208]: time="2025-03-17T18:41:22.967196191Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 17 18:41:22.967325 env[1208]: time="2025-03-17T18:41:22.967303152Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 17 18:41:22.967421 env[1208]: time="2025-03-17T18:41:22.967398270Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 17 18:41:22.967516 env[1208]: time="2025-03-17T18:41:22.967492918Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 17 18:41:22.967611 env[1208]: time="2025-03-17T18:41:22.967587976Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 17 18:41:22.967706 env[1208]: time="2025-03-17T18:41:22.967683866Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 17 18:41:22.967798 env[1208]: time="2025-03-17T18:41:22.967777431Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 17 18:41:22.967902 env[1208]: time="2025-03-17T18:41:22.967881256Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 17 18:41:22.968171 env[1208]: time="2025-03-17T18:41:22.968128049Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 17 18:41:22.968266 env[1208]: time="2025-03-17T18:41:22.968244778Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 17 18:41:22.968354 env[1208]: time="2025-03-17T18:41:22.968335458Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 17 18:41:22.968432 env[1208]: time="2025-03-17T18:41:22.968413905Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 17 18:41:22.968515 env[1208]: time="2025-03-17T18:41:22.968494105Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Mar 17 18:41:22.968591 env[1208]: time="2025-03-17T18:41:22.968569817Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 17 18:41:22.968688 env[1208]: time="2025-03-17T18:41:22.968668993Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Mar 17 18:41:22.968790 env[1208]: time="2025-03-17T18:41:22.968770173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 17 18:41:22.969193 env[1208]: time="2025-03-17T18:41:22.969100142Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 17 18:41:22.969784 env[1208]: time="2025-03-17T18:41:22.969335343Z" level=info msg="Connect containerd service" Mar 17 18:41:22.969784 env[1208]: time="2025-03-17T18:41:22.969402519Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 17 18:41:22.970391 env[1208]: time="2025-03-17T18:41:22.970362379Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 18:41:22.970580 env[1208]: time="2025-03-17T18:41:22.970549841Z" level=info msg="Start subscribing containerd event" Mar 17 18:41:22.970690 env[1208]: time="2025-03-17T18:41:22.970673082Z" level=info msg="Start recovering state" Mar 17 18:41:22.970862 env[1208]: time="2025-03-17T18:41:22.970845235Z" level=info msg="Start event monitor" Mar 17 18:41:22.970940 env[1208]: time="2025-03-17T18:41:22.970924092Z" level=info msg="Start snapshots syncer" Mar 17 18:41:22.971041 env[1208]: time="2025-03-17T18:41:22.971025062Z" level=info msg="Start cni network conf syncer for default" Mar 17 18:41:22.971180 env[1208]: time="2025-03-17T18:41:22.971160576Z" level=info msg="Start streaming server" Mar 17 18:41:22.971515 env[1208]: time="2025-03-17T18:41:22.971498169Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Mar 17 18:41:22.971628 env[1208]: time="2025-03-17T18:41:22.971609758Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 17 18:41:22.971843 systemd[1]: Started containerd.service. Mar 17 18:41:22.981622 env[1208]: time="2025-03-17T18:41:22.981568084Z" level=info msg="containerd successfully booted in 0.072978s" Mar 17 18:41:23.124301 systemd-networkd[1031]: eth0: Gained IPv6LL Mar 17 18:41:23.126522 systemd[1]: Finished systemd-networkd-wait-online.service. Mar 17 18:41:23.128100 systemd[1]: Reached target network-online.target. Mar 17 18:41:23.130790 systemd[1]: Starting kubelet.service... Mar 17 18:41:23.301948 tar[1204]: linux-amd64/README.md Mar 17 18:41:23.306319 systemd[1]: Finished prepare-helm.service. Mar 17 18:41:23.576168 sshd_keygen[1202]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 17 18:41:23.594547 systemd[1]: Finished sshd-keygen.service. Mar 17 18:41:23.596835 systemd[1]: Starting issuegen.service... Mar 17 18:41:23.601980 systemd[1]: issuegen.service: Deactivated successfully. Mar 17 18:41:23.602246 systemd[1]: Finished issuegen.service. Mar 17 18:41:23.604505 systemd[1]: Starting systemd-user-sessions.service... Mar 17 18:41:23.610323 systemd[1]: Finished systemd-user-sessions.service. Mar 17 18:41:23.612772 systemd[1]: Started getty@tty1.service. Mar 17 18:41:23.614654 systemd[1]: Started serial-getty@ttyS0.service. Mar 17 18:41:23.615710 systemd[1]: Reached target getty.target. Mar 17 18:41:23.769649 systemd[1]: Started kubelet.service. Mar 17 18:41:23.771055 systemd[1]: Reached target multi-user.target. Mar 17 18:41:23.773399 systemd[1]: Starting systemd-update-utmp-runlevel.service... Mar 17 18:41:23.781025 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Mar 17 18:41:23.781227 systemd[1]: Finished systemd-update-utmp-runlevel.service. Mar 17 18:41:23.782393 systemd[1]: Startup finished in 882ms (kernel) + 5.425s (initrd) + 5.033s (userspace) = 11.342s. Mar 17 18:41:24.153239 kubelet[1263]: E0317 18:41:24.153181 1263 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:41:24.154706 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:41:24.154830 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:41:26.196064 systemd[1]: Created slice system-sshd.slice. Mar 17 18:41:26.197384 systemd[1]: Started sshd@0-10.0.0.88:22-10.0.0.1:48372.service. Mar 17 18:41:26.232617 sshd[1272]: Accepted publickey for core from 10.0.0.1 port 48372 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64 Mar 17 18:41:26.234465 sshd[1272]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:41:26.242119 systemd[1]: Created slice user-500.slice. Mar 17 18:41:26.243113 systemd[1]: Starting user-runtime-dir@500.service... Mar 17 18:41:26.244672 systemd-logind[1193]: New session 1 of user core. Mar 17 18:41:26.251317 systemd[1]: Finished user-runtime-dir@500.service. Mar 17 18:41:26.252468 systemd[1]: Starting user@500.service... 
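The CRI plugin dump a few entries back pins down the effective containerd configuration: overlayfs snapshotter, runc as the default runtime via io.containerd.runc.v2 with SystemdCgroup:true, sandbox image registry.k8s.io/pause:3.6, and CNI directories /opt/cni/bin and /etc/cni/net.d. An illustrative reconstruction in containerd's v2 config format (a sketch consistent with the dumped values, not the literal file on this host):

    # /etc/containerd/config.toml (illustrative excerpt)
    version = 2
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.6"
      [plugins."io.containerd.grpc.v1.cri".containerd]
        snapshotter = "overlayfs"
        default_runtime_name = "runc"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true
      [plugins."io.containerd.grpc.v1.cri".cni]
        bin_dir = "/opt/cni/bin"
        conf_dir = "/etc/cni/net.d"

The "failed to load cni during init" error is expected at this stage: /etc/cni/net.d is still empty and stays that way until a network plugin installs its config.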
Mar 17 18:41:26.255891 (systemd)[1275]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:41:26.330459 systemd[1275]: Queued start job for default target default.target. Mar 17 18:41:26.330939 systemd[1275]: Reached target paths.target. Mar 17 18:41:26.330960 systemd[1275]: Reached target sockets.target. Mar 17 18:41:26.330973 systemd[1275]: Reached target timers.target. Mar 17 18:41:26.330983 systemd[1275]: Reached target basic.target. Mar 17 18:41:26.331022 systemd[1275]: Reached target default.target. Mar 17 18:41:26.331045 systemd[1275]: Startup finished in 68ms. Mar 17 18:41:26.331149 systemd[1]: Started user@500.service. Mar 17 18:41:26.332089 systemd[1]: Started session-1.scope. Mar 17 18:41:26.383297 systemd[1]: Started sshd@1-10.0.0.88:22-10.0.0.1:48374.service. Mar 17 18:41:26.416460 sshd[1284]: Accepted publickey for core from 10.0.0.1 port 48374 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64 Mar 17 18:41:26.417613 sshd[1284]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:41:26.421565 systemd-logind[1193]: New session 2 of user core. Mar 17 18:41:26.422557 systemd[1]: Started session-2.scope. Mar 17 18:41:26.479495 sshd[1284]: pam_unix(sshd:session): session closed for user core Mar 17 18:41:26.482976 systemd[1]: sshd@1-10.0.0.88:22-10.0.0.1:48374.service: Deactivated successfully. Mar 17 18:41:26.483683 systemd[1]: session-2.scope: Deactivated successfully. Mar 17 18:41:26.484337 systemd-logind[1193]: Session 2 logged out. Waiting for processes to exit. Mar 17 18:41:26.485734 systemd[1]: Started sshd@2-10.0.0.88:22-10.0.0.1:48378.service. Mar 17 18:41:26.486581 systemd-logind[1193]: Removed session 2. Mar 17 18:41:26.515990 sshd[1290]: Accepted publickey for core from 10.0.0.1 port 48378 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64 Mar 17 18:41:26.517057 sshd[1290]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:41:26.520824 systemd-logind[1193]: New session 3 of user core. Mar 17 18:41:26.521638 systemd[1]: Started session-3.scope. Mar 17 18:41:26.572467 sshd[1290]: pam_unix(sshd:session): session closed for user core Mar 17 18:41:26.575698 systemd[1]: sshd@2-10.0.0.88:22-10.0.0.1:48378.service: Deactivated successfully. Mar 17 18:41:26.576362 systemd[1]: session-3.scope: Deactivated successfully. Mar 17 18:41:26.576940 systemd-logind[1193]: Session 3 logged out. Waiting for processes to exit. Mar 17 18:41:26.578249 systemd[1]: Started sshd@3-10.0.0.88:22-10.0.0.1:48384.service. Mar 17 18:41:26.579105 systemd-logind[1193]: Removed session 3. Mar 17 18:41:26.609082 sshd[1296]: Accepted publickey for core from 10.0.0.1 port 48384 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64 Mar 17 18:41:26.610325 sshd[1296]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:41:26.613903 systemd-logind[1193]: New session 4 of user core. Mar 17 18:41:26.614637 systemd[1]: Started session-4.scope. Mar 17 18:41:26.668426 sshd[1296]: pam_unix(sshd:session): session closed for user core Mar 17 18:41:26.671247 systemd[1]: sshd@3-10.0.0.88:22-10.0.0.1:48384.service: Deactivated successfully. Mar 17 18:41:26.671798 systemd[1]: session-4.scope: Deactivated successfully. Mar 17 18:41:26.672313 systemd-logind[1193]: Session 4 logged out. Waiting for processes to exit. Mar 17 18:41:26.673388 systemd[1]: Started sshd@4-10.0.0.88:22-10.0.0.1:48386.service. Mar 17 18:41:26.673987 systemd-logind[1193]: Removed session 4. 
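The per-connection unit names above (sshd@1-10.0.0.88:22-10.0.0.1:48374.service and so on) are the signature of socket-activated sshd: a socket unit with Accept=yes makes systemd fork one service instance per TCP connection and name it after the local and remote endpoints. A minimal sketch of such a socket unit, not necessarily the one shipped on this image:

    # sshd.socket (illustrative)
    [Socket]
    ListenStream=22
    Accept=yes

    [Install]
    WantedBy=sockets.target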
Mar 17 18:41:26.701828 sshd[1302]: Accepted publickey for core from 10.0.0.1 port 48386 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64 Mar 17 18:41:26.702840 sshd[1302]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:41:26.706524 systemd-logind[1193]: New session 5 of user core. Mar 17 18:41:26.707303 systemd[1]: Started session-5.scope. Mar 17 18:41:26.761909 sudo[1305]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 17 18:41:26.762112 sudo[1305]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Mar 17 18:41:26.784839 systemd[1]: Starting docker.service... Mar 17 18:41:26.823593 env[1317]: time="2025-03-17T18:41:26.823532008Z" level=info msg="Starting up" Mar 17 18:41:26.825028 env[1317]: time="2025-03-17T18:41:26.824983189Z" level=info msg="parsed scheme: \"unix\"" module=grpc Mar 17 18:41:26.825028 env[1317]: time="2025-03-17T18:41:26.825010280Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Mar 17 18:41:26.825095 env[1317]: time="2025-03-17T18:41:26.825033634Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Mar 17 18:41:26.825095 env[1317]: time="2025-03-17T18:41:26.825042911Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Mar 17 18:41:26.826768 env[1317]: time="2025-03-17T18:41:26.826732850Z" level=info msg="parsed scheme: \"unix\"" module=grpc Mar 17 18:41:26.826768 env[1317]: time="2025-03-17T18:41:26.826759430Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Mar 17 18:41:26.826838 env[1317]: time="2025-03-17T18:41:26.826779568Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Mar 17 18:41:26.826838 env[1317]: time="2025-03-17T18:41:26.826820835Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Mar 17 18:41:27.535468 env[1317]: time="2025-03-17T18:41:27.535417397Z" level=info msg="Loading containers: start." Mar 17 18:41:27.650180 kernel: Initializing XFRM netlink socket Mar 17 18:41:27.682408 env[1317]: time="2025-03-17T18:41:27.682358353Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Mar 17 18:41:27.731592 systemd-networkd[1031]: docker0: Link UP Mar 17 18:41:27.747329 env[1317]: time="2025-03-17T18:41:27.747283030Z" level=info msg="Loading containers: done." Mar 17 18:41:27.761320 env[1317]: time="2025-03-17T18:41:27.761271364Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 17 18:41:27.761485 env[1317]: time="2025-03-17T18:41:27.761469044Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Mar 17 18:41:27.761580 env[1317]: time="2025-03-17T18:41:27.761558652Z" level=info msg="Daemon has completed initialization" Mar 17 18:41:27.778088 systemd[1]: Started docker.service. 
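The bridge note above points at the --bip daemon option as the way to move docker0 off the default 172.17.0.0/16 range; the persistent equivalent lives in daemon.json. The address below is a hypothetical example, not a value from this host:

    # /etc/docker/daemon.json (illustrative)
    {
      "bip": "172.18.0.1/16"
    }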
Mar 17 18:41:27.786171 env[1317]: time="2025-03-17T18:41:27.785778814Z" level=info msg="API listen on /run/docker.sock" Mar 17 18:41:28.435469 env[1208]: time="2025-03-17T18:41:28.435415845Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.3\"" Mar 17 18:41:28.990016 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1079439550.mount: Deactivated successfully. Mar 17 18:41:30.456888 env[1208]: time="2025-03-17T18:41:30.456781197Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:30.458796 env[1208]: time="2025-03-17T18:41:30.458724471Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f8bdc4cfa0651e2d7edb4678d2b90129aef82a19249b37dc8d4705e8bd604295,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:30.460498 env[1208]: time="2025-03-17T18:41:30.460448694Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:30.462475 env[1208]: time="2025-03-17T18:41:30.462441502Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:279e45cf07e4f56925c3c5237179eb63616788426a96e94df5fedf728b18926e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:30.463537 env[1208]: time="2025-03-17T18:41:30.463440796Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.3\" returns image reference \"sha256:f8bdc4cfa0651e2d7edb4678d2b90129aef82a19249b37dc8d4705e8bd604295\"" Mar 17 18:41:30.464854 env[1208]: time="2025-03-17T18:41:30.464824952Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.3\"" Mar 17 18:41:32.084862 env[1208]: time="2025-03-17T18:41:32.084806655Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:32.086946 env[1208]: time="2025-03-17T18:41:32.086897396Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:085818208a5213f37ef6d103caaf8e1e243816a614eb5b87a98bfffe79c687b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:32.088623 env[1208]: time="2025-03-17T18:41:32.088571796Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:32.090573 env[1208]: time="2025-03-17T18:41:32.090544436Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:54456a96a1bbdc35dcc2e70fcc1355bf655af67694e40b650ac12e83521f6411,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:32.091457 env[1208]: time="2025-03-17T18:41:32.091411211Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.3\" returns image reference \"sha256:085818208a5213f37ef6d103caaf8e1e243816a614eb5b87a98bfffe79c687b5\"" Mar 17 18:41:32.091940 env[1208]: time="2025-03-17T18:41:32.091894708Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.3\"" Mar 17 18:41:34.010839 env[1208]: time="2025-03-17T18:41:34.010771487Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:34.013086 env[1208]: time="2025-03-17T18:41:34.013026706Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b4260bf5078ab1b01dd05fb05015fc436b7100b7b9b5ea738e247a86008b16b8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:34.015037 env[1208]: time="2025-03-17T18:41:34.014993665Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:34.016779 env[1208]: time="2025-03-17T18:41:34.016743116Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:aafae2e3a8d65bc6dc3a0c6095c24bc72b1ff608e1417f0f5e860ce4a61c27df,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:34.017504 env[1208]: time="2025-03-17T18:41:34.017443930Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.3\" returns image reference \"sha256:b4260bf5078ab1b01dd05fb05015fc436b7100b7b9b5ea738e247a86008b16b8\"" Mar 17 18:41:34.017989 env[1208]: time="2025-03-17T18:41:34.017940041Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.3\"" Mar 17 18:41:34.406002 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 17 18:41:34.406324 systemd[1]: Stopped kubelet.service. Mar 17 18:41:34.408595 systemd[1]: Starting kubelet.service... Mar 17 18:41:34.526217 systemd[1]: Started kubelet.service. Mar 17 18:41:34.705530 kubelet[1452]: E0317 18:41:34.705169 1452 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:41:34.708832 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:41:34.708990 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:41:35.804299 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3810833718.mount: Deactivated successfully. 
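The kubelet exit here is the same pre-bootstrap failure as on first boot: /var/lib/kubelet/config.yaml does not exist yet, and on a kubeadm-provisioned node that file is only written during kubeadm init or kubeadm join. The "Scheduled restart job, restart counter is at 1" line is systemd's Restart= policy holding the unit in a retry loop until bootstrap supplies the config; the loop can be watched with standard tooling:

    $ systemctl status kubelet
    $ journalctl -u kubelet --no-pager | tail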
Mar 17 18:41:36.730480 env[1208]: time="2025-03-17T18:41:36.730362591Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:36.734614 env[1208]: time="2025-03-17T18:41:36.734585501Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a1ae78fd2f9d8fc345928378dc947c7f1e95f01c1a552781827071867a95d09c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:36.735880 env[1208]: time="2025-03-17T18:41:36.735840073Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:36.737172 env[1208]: time="2025-03-17T18:41:36.737149599Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:5015269547a0b7dd2c062758e9a64467b58978ff2502cad4c3f5cdf4aa554ad3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:36.737519 env[1208]: time="2025-03-17T18:41:36.737496470Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.3\" returns image reference \"sha256:a1ae78fd2f9d8fc345928378dc947c7f1e95f01c1a552781827071867a95d09c\"" Mar 17 18:41:36.737985 env[1208]: time="2025-03-17T18:41:36.737966712Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Mar 17 18:41:37.222223 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount566822201.mount: Deactivated successfully. Mar 17 18:41:38.244323 env[1208]: time="2025-03-17T18:41:38.244248678Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:38.246251 env[1208]: time="2025-03-17T18:41:38.246198474Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:38.248229 env[1208]: time="2025-03-17T18:41:38.248199036Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:38.249845 env[1208]: time="2025-03-17T18:41:38.249807372Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:38.250691 env[1208]: time="2025-03-17T18:41:38.250656745Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Mar 17 18:41:38.251167 env[1208]: time="2025-03-17T18:41:38.251120816Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 17 18:41:38.702459 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1554687657.mount: Deactivated successfully. 
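The PullImage/ImageCreate pairs in this stretch are CRI-driven pulls executed by containerd, each recorded with its sha256 digest. The same pull can be issued by hand with crictl, assuming it is installed and pointed at containerd's socket; neither assumption is visible in this log:

    $ crictl --runtime-endpoint unix:///run/containerd/containerd.sock pull registry.k8s.io/pause:3.10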
Mar 17 18:41:38.707722 env[1208]: time="2025-03-17T18:41:38.707685267Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:38.709585 env[1208]: time="2025-03-17T18:41:38.709535266Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:38.710974 env[1208]: time="2025-03-17T18:41:38.710940291Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:38.712434 env[1208]: time="2025-03-17T18:41:38.712400740Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:38.712878 env[1208]: time="2025-03-17T18:41:38.712848059Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Mar 17 18:41:38.713346 env[1208]: time="2025-03-17T18:41:38.713317419Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Mar 17 18:41:39.270327 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount962278598.mount: Deactivated successfully. Mar 17 18:41:42.171667 env[1208]: time="2025-03-17T18:41:42.171608560Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:42.175095 env[1208]: time="2025-03-17T18:41:42.175062507Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:42.176835 env[1208]: time="2025-03-17T18:41:42.176796559Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:42.178435 env[1208]: time="2025-03-17T18:41:42.178397842Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:42.179224 env[1208]: time="2025-03-17T18:41:42.179187413Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Mar 17 18:41:44.569213 systemd[1]: Stopped kubelet.service. Mar 17 18:41:44.572084 systemd[1]: Starting kubelet.service... Mar 17 18:41:44.598438 systemd[1]: Reloading. 
Mar 17 18:41:44.667609 /usr/lib/systemd/system-generators/torcx-generator[1507]: time="2025-03-17T18:41:44Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 18:41:44.667953 /usr/lib/systemd/system-generators/torcx-generator[1507]: time="2025-03-17T18:41:44Z" level=info msg="torcx already run" Mar 17 18:41:45.033798 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 18:41:45.033817 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 18:41:45.053303 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 18:41:45.135771 systemd[1]: Started kubelet.service. Mar 17 18:41:45.137151 systemd[1]: Stopping kubelet.service... Mar 17 18:41:45.137404 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 18:41:45.137640 systemd[1]: Stopped kubelet.service. Mar 17 18:41:45.138960 systemd[1]: Starting kubelet.service... Mar 17 18:41:45.220738 systemd[1]: Started kubelet.service. Mar 17 18:41:45.256965 kubelet[1555]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 18:41:45.256965 kubelet[1555]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 17 18:41:45.256965 kubelet[1555]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
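The daemon reload above surfaces three mechanical cleanups: locksmithd.service still sets CPUShares= and MemoryLimit= (superseded by CPUWeight= and MemoryMax=), and docker.socket references the legacy /var/run/ path that systemd rewrites to /run/. A drop-in sketch for the first two, using the modern directive names; the values are placeholders, not taken from the real unit:

    # /etc/systemd/system/locksmithd.service.d/override.conf (illustrative values)
    [Service]
    CPUWeight=100
    MemoryMax=512M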
Mar 17 18:41:45.257406 kubelet[1555]: I0317 18:41:45.256992 1555 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 18:41:45.734064 kubelet[1555]: I0317 18:41:45.734014 1555 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Mar 17 18:41:45.734064 kubelet[1555]: I0317 18:41:45.734053 1555 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 18:41:45.734480 kubelet[1555]: I0317 18:41:45.734457 1555 server.go:954] "Client rotation is on, will bootstrap in background" Mar 17 18:41:45.768776 kubelet[1555]: E0317 18:41:45.768721 1555 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.88:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.88:6443: connect: connection refused" logger="UnhandledError" Mar 17 18:41:45.769981 kubelet[1555]: I0317 18:41:45.769944 1555 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 18:41:45.775758 kubelet[1555]: E0317 18:41:45.775715 1555 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 17 18:41:45.775758 kubelet[1555]: I0317 18:41:45.775738 1555 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 17 18:41:45.779670 kubelet[1555]: I0317 18:41:45.779633 1555 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 18:41:45.780747 kubelet[1555]: I0317 18:41:45.780705 1555 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 18:41:45.780913 kubelet[1555]: I0317 18:41:45.780740 1555 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 17 18:41:45.780913 kubelet[1555]: I0317 18:41:45.780913 1555 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 18:41:45.781078 kubelet[1555]: I0317 18:41:45.780926 1555 container_manager_linux.go:304] "Creating device plugin manager" Mar 17 18:41:45.781078 kubelet[1555]: I0317 18:41:45.781050 1555 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:41:45.783520 kubelet[1555]: I0317 18:41:45.783498 1555 kubelet.go:446] "Attempting to sync node with API server" Mar 17 18:41:45.783520 kubelet[1555]: I0317 18:41:45.783517 1555 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 18:41:45.783599 kubelet[1555]: I0317 18:41:45.783533 1555 kubelet.go:352] "Adding apiserver pod source" Mar 17 18:41:45.783599 kubelet[1555]: I0317 18:41:45.783542 1555 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 18:41:45.794383 kubelet[1555]: I0317 18:41:45.794359 1555 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Mar 17 18:41:45.794975 kubelet[1555]: W0317 18:41:45.794921 1555 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.88:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Mar 17 18:41:45.795054 kubelet[1555]: E0317 18:41:45.795019 1555 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.88:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.88:6443: connect: connection refused" logger="UnhandledError" Mar 17 18:41:45.795793 kubelet[1555]: I0317 18:41:45.795768 1555 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 18:41:45.796319 kubelet[1555]: W0317 18:41:45.796260 1555 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.88:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Mar 17 18:41:45.796390 kubelet[1555]: E0317 18:41:45.796320 1555 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.88:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.88:6443: connect: connection refused" logger="UnhandledError" Mar 17 18:41:45.796390 kubelet[1555]: W0317 18:41:45.796384 1555 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 17 18:41:45.803460 kubelet[1555]: I0317 18:41:45.803430 1555 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 17 18:41:45.803538 kubelet[1555]: I0317 18:41:45.803474 1555 server.go:1287] "Started kubelet" Mar 17 18:41:45.806199 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Mar 17 18:41:45.806349 kubelet[1555]: I0317 18:41:45.806323 1555 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 18:41:45.811981 kubelet[1555]: I0317 18:41:45.811934 1555 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 18:41:45.812863 kubelet[1555]: I0317 18:41:45.812836 1555 server.go:490] "Adding debug handlers to kubelet server" Mar 17 18:41:45.813768 kubelet[1555]: I0317 18:41:45.813704 1555 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 18:41:45.813977 kubelet[1555]: I0317 18:41:45.813950 1555 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 18:41:45.814184 kubelet[1555]: I0317 18:41:45.814165 1555 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 17 18:41:45.816039 kubelet[1555]: I0317 18:41:45.816022 1555 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 17 18:41:45.816283 kubelet[1555]: E0317 18:41:45.816252 1555 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 18:41:45.817125 kubelet[1555]: E0317 18:41:45.816833 1555 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.88:6443: connect: connection refused" interval="200ms" Mar 17 18:41:45.817125 kubelet[1555]: I0317 18:41:45.817017 1555 factory.go:221] Registration of the systemd container factory successfully Mar 17 18:41:45.817125 kubelet[1555]: I0317 18:41:45.817085 1555 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 
18:41:45.818109 kubelet[1555]: E0317 18:41:45.818091 1555 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 18:41:45.818109 kubelet[1555]: E0317 18:41:45.816537 1555 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.88:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.88:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.182dab3c2cfd7f47 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-17 18:41:45.803448135 +0000 UTC m=+0.578988541,LastTimestamp:2025-03-17 18:41:45.803448135 +0000 UTC m=+0.578988541,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 17 18:41:45.818265 kubelet[1555]: I0317 18:41:45.818169 1555 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 18:41:45.818265 kubelet[1555]: I0317 18:41:45.818228 1555 factory.go:221] Registration of the containerd container factory successfully Mar 17 18:41:45.818591 kubelet[1555]: W0317 18:41:45.818550 1555 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.88:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Mar 17 18:41:45.818629 kubelet[1555]: E0317 18:41:45.818593 1555 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.88:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.88:6443: connect: connection refused" logger="UnhandledError" Mar 17 18:41:45.819069 kubelet[1555]: I0317 18:41:45.819048 1555 reconciler.go:26] "Reconciler: start to sync state" Mar 17 18:41:45.830413 kubelet[1555]: I0317 18:41:45.829696 1555 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 18:41:45.830601 kubelet[1555]: I0317 18:41:45.830570 1555 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 17 18:41:45.830601 kubelet[1555]: I0317 18:41:45.830592 1555 status_manager.go:227] "Starting to sync pod status with apiserver" Mar 17 18:41:45.830669 kubelet[1555]: I0317 18:41:45.830610 1555 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 17 18:41:45.830669 kubelet[1555]: I0317 18:41:45.830617 1555 kubelet.go:2388] "Starting kubelet main sync loop" Mar 17 18:41:45.830669 kubelet[1555]: E0317 18:41:45.830652 1555 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 18:41:45.832925 kubelet[1555]: I0317 18:41:45.832893 1555 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 17 18:41:45.832925 kubelet[1555]: I0317 18:41:45.832910 1555 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 17 18:41:45.832925 kubelet[1555]: I0317 18:41:45.832926 1555 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:41:45.833763 kubelet[1555]: W0317 18:41:45.833724 1555 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.88:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Mar 17 18:41:45.833824 kubelet[1555]: E0317 18:41:45.833769 1555 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.88:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.88:6443: connect: connection refused" logger="UnhandledError" Mar 17 18:41:45.916398 kubelet[1555]: E0317 18:41:45.916379 1555 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 18:41:45.930742 kubelet[1555]: E0317 18:41:45.930699 1555 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 17 18:41:46.017212 kubelet[1555]: E0317 18:41:46.017044 1555 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 18:41:46.017490 kubelet[1555]: E0317 18:41:46.017449 1555 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.88:6443: connect: connection refused" interval="400ms" Mar 17 18:41:46.118028 kubelet[1555]: E0317 18:41:46.117982 1555 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 18:41:46.131422 kubelet[1555]: E0317 18:41:46.131395 1555 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 17 18:41:46.168385 kubelet[1555]: I0317 18:41:46.168355 1555 policy_none.go:49] "None policy: Start" Mar 17 18:41:46.168385 kubelet[1555]: I0317 18:41:46.168372 1555 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 17 18:41:46.168385 kubelet[1555]: I0317 18:41:46.168383 1555 state_mem.go:35] "Initializing new in-memory state store" Mar 17 18:41:46.174263 systemd[1]: Created slice kubepods.slice. Mar 17 18:41:46.178568 systemd[1]: Created slice kubepods-burstable.slice. Mar 17 18:41:46.181096 systemd[1]: Created slice kubepods-besteffort.slice. 
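The "Skipping pod synchronization ... PLEG is not healthy: pleg has yet to be successful" messages above come from a health gate on the main sync loop: until the pod lifecycle event generator has completed a relist, syncing is skipped. A hedged sketch of that gate's shape; the field names and the 3-minute staleness threshold are illustrative assumptions, not kubelet's exact code:

```go
// Toy model of the PLEG health check guarding the kubelet sync loop.
package main

import (
	"fmt"
	"time"
)

type pleg struct {
	lastRelist time.Time // zero value until the first successful relist
}

func (p *pleg) Healthy() error {
	if p.lastRelist.IsZero() {
		return fmt.Errorf("pleg has yet to be successful")
	}
	if age := time.Since(p.lastRelist); age > 3*time.Minute {
		return fmt.Errorf("pleg was last seen active %v ago", age)
	}
	return nil
}

func main() {
	p := &pleg{}
	if err := p.Healthy(); err != nil {
		fmt.Println("Skipping pod synchronization:", err) // matches the log
	}
	p.lastRelist = time.Now() // after the first relist the gate opens
	fmt.Println("healthy now:", p.Healthy() == nil)
}
```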
Mar 17 18:41:46.186963 kubelet[1555]: I0317 18:41:46.186923 1555 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 18:41:46.187189 kubelet[1555]: I0317 18:41:46.187109 1555 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 17 18:41:46.187189 kubelet[1555]: I0317 18:41:46.187127 1555 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 18:41:46.187822 kubelet[1555]: I0317 18:41:46.187459 1555 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 18:41:46.188349 kubelet[1555]: E0317 18:41:46.188320 1555 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 17 18:41:46.188417 kubelet[1555]: E0317 18:41:46.188376 1555 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 17 18:41:46.289618 kubelet[1555]: I0317 18:41:46.289287 1555 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Mar 17 18:41:46.290009 kubelet[1555]: E0317 18:41:46.289891 1555 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.88:6443/api/v1/nodes\": dial tcp 10.0.0.88:6443: connect: connection refused" node="localhost" Mar 17 18:41:46.418973 kubelet[1555]: E0317 18:41:46.418924 1555 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.88:6443: connect: connection refused" interval="800ms" Mar 17 18:41:46.491263 kubelet[1555]: I0317 18:41:46.491228 1555 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Mar 17 18:41:46.491687 kubelet[1555]: E0317 18:41:46.491637 1555 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.88:6443/api/v1/nodes\": dial tcp 10.0.0.88:6443: connect: connection refused" node="localhost" Mar 17 18:41:46.541099 systemd[1]: Created slice kubepods-burstable-pod66f97343cb741ccaeaea7bda56ca63fe.slice. Mar 17 18:41:46.555228 kubelet[1555]: E0317 18:41:46.555198 1555 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 17 18:41:46.557573 systemd[1]: Created slice kubepods-burstable-podcbbb394ff48414687df77e1bc213eeb5.slice. Mar 17 18:41:46.569437 kubelet[1555]: E0317 18:41:46.569405 1555 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 17 18:41:46.572094 systemd[1]: Created slice kubepods-burstable-pod3700e556aa2777679a324159272023f1.slice. 
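The "Attempting to register node" / "Unable to register node with API server" pair above is simply a POST to /api/v1/nodes that fails while the apiserver container is still coming up. A minimal client-go sketch of that request; the kubeconfig path is an assumption:

```go
// Sketch: the node self-registration call behind kubelet_node_status.go:76.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	node := &corev1.Node{ObjectMeta: metav1.ObjectMeta{Name: "localhost"}}
	if _, err := cs.CoreV1().Nodes().Create(context.TODO(), node, metav1.CreateOptions{}); err != nil {
		// While the apiserver is down this fails like the log line:
		// Post "https://10.0.0.88:6443/api/v1/nodes": connect: connection refused
		fmt.Println("register failed:", err)
	}
}
```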
Mar 17 18:41:46.573635 kubelet[1555]: E0317 18:41:46.573610 1555 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 17 18:41:46.624195 kubelet[1555]: I0317 18:41:46.624133 1555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3700e556aa2777679a324159272023f1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"3700e556aa2777679a324159272023f1\") " pod="kube-system/kube-scheduler-localhost" Mar 17 18:41:46.624288 kubelet[1555]: I0317 18:41:46.624197 1555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66f97343cb741ccaeaea7bda56ca63fe-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"66f97343cb741ccaeaea7bda56ca63fe\") " pod="kube-system/kube-apiserver-localhost" Mar 17 18:41:46.624288 kubelet[1555]: I0317 18:41:46.624221 1555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66f97343cb741ccaeaea7bda56ca63fe-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"66f97343cb741ccaeaea7bda56ca63fe\") " pod="kube-system/kube-apiserver-localhost" Mar 17 18:41:46.624288 kubelet[1555]: I0317 18:41:46.624243 1555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 18:41:46.624288 kubelet[1555]: I0317 18:41:46.624258 1555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 18:41:46.624288 kubelet[1555]: I0317 18:41:46.624281 1555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66f97343cb741ccaeaea7bda56ca63fe-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"66f97343cb741ccaeaea7bda56ca63fe\") " pod="kube-system/kube-apiserver-localhost" Mar 17 18:41:46.624419 kubelet[1555]: I0317 18:41:46.624352 1555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 18:41:46.624523 kubelet[1555]: I0317 18:41:46.624472 1555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 18:41:46.624523 kubelet[1555]: I0317 18:41:46.624526 1555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 18:41:46.696820 kubelet[1555]: W0317 18:41:46.696716 1555 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.88:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Mar 17 18:41:46.696820 kubelet[1555]: E0317 18:41:46.696793 1555 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.88:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.88:6443: connect: connection refused" logger="UnhandledError" Mar 17 18:41:46.855657 kubelet[1555]: E0317 18:41:46.855608 1555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:41:46.856326 env[1208]: time="2025-03-17T18:41:46.856286413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:66f97343cb741ccaeaea7bda56ca63fe,Namespace:kube-system,Attempt:0,}" Mar 17 18:41:46.870477 kubelet[1555]: E0317 18:41:46.870438 1555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:41:46.870783 env[1208]: time="2025-03-17T18:41:46.870745018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:cbbb394ff48414687df77e1bc213eeb5,Namespace:kube-system,Attempt:0,}" Mar 17 18:41:46.874198 kubelet[1555]: E0317 18:41:46.874154 1555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:41:46.874640 env[1208]: time="2025-03-17T18:41:46.874606680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:3700e556aa2777679a324159272023f1,Namespace:kube-system,Attempt:0,}" Mar 17 18:41:46.892791 kubelet[1555]: I0317 18:41:46.892762 1555 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Mar 17 18:41:46.893100 kubelet[1555]: E0317 18:41:46.893067 1555 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.88:6443/api/v1/nodes\": dial tcp 10.0.0.88:6443: connect: connection refused" node="localhost" Mar 17 18:41:47.051974 kubelet[1555]: W0317 18:41:47.051890 1555 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.88:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Mar 17 18:41:47.052063 kubelet[1555]: E0317 18:41:47.051975 1555 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.88:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.88:6443: connect: connection refused" logger="UnhandledError" Mar 17 18:41:47.106830 kubelet[1555]: W0317 18:41:47.106694 1555 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.88:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Mar 17 18:41:47.106830 kubelet[1555]: E0317 18:41:47.106760 1555 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.88:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.88:6443: connect: connection refused" logger="UnhandledError" Mar 17 18:41:47.219799 kubelet[1555]: E0317 18:41:47.219739 1555 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.88:6443: connect: connection refused" interval="1.6s" Mar 17 18:41:47.340269 kubelet[1555]: W0317 18:41:47.340200 1555 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.88:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Mar 17 18:41:47.340517 kubelet[1555]: E0317 18:41:47.340277 1555 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.88:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.88:6443: connect: connection refused" logger="UnhandledError" Mar 17 18:41:47.694727 kubelet[1555]: I0317 18:41:47.694703 1555 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Mar 17 18:41:47.695041 kubelet[1555]: E0317 18:41:47.695001 1555 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.88:6443/api/v1/nodes\": dial tcp 10.0.0.88:6443: connect: connection refused" node="localhost" Mar 17 18:41:47.790162 kubelet[1555]: E0317 18:41:47.790076 1555 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.88:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.88:6443: connect: connection refused" logger="UnhandledError" Mar 17 18:41:48.135411 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount851867588.mount: Deactivated successfully. 
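Note how the lease controller's "will retry" interval doubles across the log: 200ms, then 400ms, 800ms, and 1.6s. A sketch of that exponential-backoff shape using apimachinery's wait helpers; the step count is an assumption:

```go
// Sketch: doubling retry intervals like the lease controller's
// 200ms -> 400ms -> 800ms -> 1.6s progression seen above.
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	backoff := wait.Backoff{
		Duration: 200 * time.Millisecond, // first retry interval in the log
		Factor:   2.0,                    // doubles on each failed attempt
		Steps:    5,
	}
	attempt := 0
	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		attempt++
		fmt.Printf("attempt %d: ensuring node lease...\n", attempt)
		// Stand-in for the Get/Create of the kube-node-lease Lease object,
		// which keeps failing with "connection refused" above.
		return false, nil // false, nil => retry after the next interval
	})
	if err != nil {
		fmt.Println("gave up:", err) // wait.ErrWaitTimeout once Steps run out
	}
}
```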
Mar 17 18:41:48.140851 env[1208]: time="2025-03-17T18:41:48.140789695Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:48.143484 env[1208]: time="2025-03-17T18:41:48.143440526Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:48.144611 env[1208]: time="2025-03-17T18:41:48.144564173Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:48.146339 env[1208]: time="2025-03-17T18:41:48.146304728Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:48.148110 env[1208]: time="2025-03-17T18:41:48.148072633Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:48.149297 env[1208]: time="2025-03-17T18:41:48.149271371Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:48.150425 env[1208]: time="2025-03-17T18:41:48.150391221Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:48.151769 env[1208]: time="2025-03-17T18:41:48.151746002Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:48.153265 env[1208]: time="2025-03-17T18:41:48.153238120Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:48.155106 env[1208]: time="2025-03-17T18:41:48.155067421Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:48.156412 env[1208]: time="2025-03-17T18:41:48.156385753Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:48.157793 env[1208]: time="2025-03-17T18:41:48.157758968Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:48.174672 env[1208]: time="2025-03-17T18:41:48.174595363Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:41:48.174672 env[1208]: time="2025-03-17T18:41:48.174636309Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:41:48.174672 env[1208]: time="2025-03-17T18:41:48.174645647Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:41:48.174964 env[1208]: time="2025-03-17T18:41:48.174808492Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ee46524393958d36232062dd44cd358c63ce28fbed8ed61dbc1eb47303da6bc4 pid=1598 runtime=io.containerd.runc.v2 Mar 17 18:41:48.191959 systemd[1]: Started cri-containerd-ee46524393958d36232062dd44cd358c63ce28fbed8ed61dbc1eb47303da6bc4.scope. Mar 17 18:41:48.197401 env[1208]: time="2025-03-17T18:41:48.195850924Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:41:48.197401 env[1208]: time="2025-03-17T18:41:48.195872385Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:41:48.197401 env[1208]: time="2025-03-17T18:41:48.195881161Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:41:48.197401 env[1208]: time="2025-03-17T18:41:48.196030531Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/49700ee6817c1bd7c688f6be514d69aa1d637fa693767f75e0d3d3ae53074ec9 pid=1635 runtime=io.containerd.runc.v2 Mar 17 18:41:48.197401 env[1208]: time="2025-03-17T18:41:48.195063488Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:41:48.197401 env[1208]: time="2025-03-17T18:41:48.195111618Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:41:48.197401 env[1208]: time="2025-03-17T18:41:48.195124011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:41:48.197401 env[1208]: time="2025-03-17T18:41:48.195297787Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/629657abb6c67b080c6f1dbf3e3e3a9494c7c9c5c0ee161be4fc7d2507b759b8 pid=1625 runtime=io.containerd.runc.v2 Mar 17 18:41:48.215734 systemd[1]: Started cri-containerd-629657abb6c67b080c6f1dbf3e3e3a9494c7c9c5c0ee161be4fc7d2507b759b8.scope. Mar 17 18:41:48.221491 systemd[1]: Started cri-containerd-49700ee6817c1bd7c688f6be514d69aa1d637fa693767f75e0d3d3ae53074ec9.scope. 
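The shim startups above ("starting signal loop ... runtime=io.containerd.runc.v2") are containerd's side of the RunPodSandbox CRI calls the kubelet issued for the three static pods. A hedged sketch of the same call made directly against the CRI socket; the socket path and metadata values mirror the log but a real caller would populate far more of the sandbox config:

```go
// Sketch: RunPodSandbox over the containerd CRI endpoint.
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := rt.RunPodSandbox(context.TODO(), &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "kube-scheduler-localhost", // values from the log
				Namespace: "kube-system",
				Uid:       "3700e556aa2777679a324159272023f1",
				Attempt:   0,
			},
		},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("sandbox id:", resp.PodSandboxId) // e.g. ee4652439395...
}
```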
Mar 17 18:41:48.238373 env[1208]: time="2025-03-17T18:41:48.238312635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:3700e556aa2777679a324159272023f1,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee46524393958d36232062dd44cd358c63ce28fbed8ed61dbc1eb47303da6bc4\"" Mar 17 18:41:48.239364 kubelet[1555]: E0317 18:41:48.239337 1555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:41:48.241265 env[1208]: time="2025-03-17T18:41:48.241234214Z" level=info msg="CreateContainer within sandbox \"ee46524393958d36232062dd44cd358c63ce28fbed8ed61dbc1eb47303da6bc4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 17 18:41:48.255369 env[1208]: time="2025-03-17T18:41:48.254149936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:66f97343cb741ccaeaea7bda56ca63fe,Namespace:kube-system,Attempt:0,} returns sandbox id \"49700ee6817c1bd7c688f6be514d69aa1d637fa693767f75e0d3d3ae53074ec9\"" Mar 17 18:41:48.255524 kubelet[1555]: E0317 18:41:48.255189 1555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:41:48.256603 env[1208]: time="2025-03-17T18:41:48.256580163Z" level=info msg="CreateContainer within sandbox \"49700ee6817c1bd7c688f6be514d69aa1d637fa693767f75e0d3d3ae53074ec9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 17 18:41:48.261164 env[1208]: time="2025-03-17T18:41:48.261120809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:cbbb394ff48414687df77e1bc213eeb5,Namespace:kube-system,Attempt:0,} returns sandbox id \"629657abb6c67b080c6f1dbf3e3e3a9494c7c9c5c0ee161be4fc7d2507b759b8\"" Mar 17 18:41:48.261902 kubelet[1555]: E0317 18:41:48.261763 1555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:41:48.263206 env[1208]: time="2025-03-17T18:41:48.263180151Z" level=info msg="CreateContainer within sandbox \"629657abb6c67b080c6f1dbf3e3e3a9494c7c9c5c0ee161be4fc7d2507b759b8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 17 18:41:48.263787 env[1208]: time="2025-03-17T18:41:48.263755610Z" level=info msg="CreateContainer within sandbox \"ee46524393958d36232062dd44cd358c63ce28fbed8ed61dbc1eb47303da6bc4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"fb7a0a67fab9753c352da208fd7ed548011cac113ae7b2c03f0148e7c9a735ad\"" Mar 17 18:41:48.264165 env[1208]: time="2025-03-17T18:41:48.264132386Z" level=info msg="StartContainer for \"fb7a0a67fab9753c352da208fd7ed548011cac113ae7b2c03f0148e7c9a735ad\"" Mar 17 18:41:48.278112 systemd[1]: Started cri-containerd-fb7a0a67fab9753c352da208fd7ed548011cac113ae7b2c03f0148e7c9a735ad.scope. 
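Once a sandbox id comes back, the log shows the CreateContainer → StartContainer pair for each control-plane container. A hedged continuation of the CRI sketch above; the image tag is a plausible placeholder, and real callers also pass the SandboxConfig:

```go
// Sketch: create and start a container inside an existing sandbox via CRI.
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func startInSandbox(rt runtimeapi.RuntimeServiceClient, sandboxID string) (string, error) {
	created, err := rt.CreateContainer(context.TODO(), &runtimeapi.CreateContainerRequest{
		PodSandboxId: sandboxID,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-scheduler", Attempt: 0},
			Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-scheduler:v1.32.0"}, // assumed tag
		},
	})
	if err != nil {
		return "", err
	}
	// "StartContainer for \"...\" returns successfully" corresponds to this call.
	_, err = rt.StartContainer(context.TODO(), &runtimeapi.StartContainerRequest{
		ContainerId: created.ContainerId,
	})
	return created.ContainerId, err
}

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	id, err := startInSandbox(rt, "ee46524393958d36232062dd44cd358c63ce28fbed8ed61dbc1eb47303da6bc4")
	fmt.Println(id, err)
}
```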
Mar 17 18:41:48.279924 env[1208]: time="2025-03-17T18:41:48.279888645Z" level=info msg="CreateContainer within sandbox \"49700ee6817c1bd7c688f6be514d69aa1d637fa693767f75e0d3d3ae53074ec9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"be75fda21c8c1ea37bd6dc1ea1f0fa3de01b594c9105c8e9fdf66271d4ea8811\"" Mar 17 18:41:48.280751 env[1208]: time="2025-03-17T18:41:48.280716668Z" level=info msg="StartContainer for \"be75fda21c8c1ea37bd6dc1ea1f0fa3de01b594c9105c8e9fdf66271d4ea8811\"" Mar 17 18:41:48.288619 env[1208]: time="2025-03-17T18:41:48.288560758Z" level=info msg="CreateContainer within sandbox \"629657abb6c67b080c6f1dbf3e3e3a9494c7c9c5c0ee161be4fc7d2507b759b8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b131f8a9a01af599e7d240b0f45e4a64f0a2043059a7a7df0a69143ece0ec30a\"" Mar 17 18:41:48.289244 env[1208]: time="2025-03-17T18:41:48.289205277Z" level=info msg="StartContainer for \"b131f8a9a01af599e7d240b0f45e4a64f0a2043059a7a7df0a69143ece0ec30a\"" Mar 17 18:41:48.297975 systemd[1]: Started cri-containerd-be75fda21c8c1ea37bd6dc1ea1f0fa3de01b594c9105c8e9fdf66271d4ea8811.scope. Mar 17 18:41:48.307684 systemd[1]: Started cri-containerd-b131f8a9a01af599e7d240b0f45e4a64f0a2043059a7a7df0a69143ece0ec30a.scope. Mar 17 18:41:48.330039 env[1208]: time="2025-03-17T18:41:48.329111576Z" level=info msg="StartContainer for \"fb7a0a67fab9753c352da208fd7ed548011cac113ae7b2c03f0148e7c9a735ad\" returns successfully" Mar 17 18:41:48.347912 env[1208]: time="2025-03-17T18:41:48.347855708Z" level=info msg="StartContainer for \"be75fda21c8c1ea37bd6dc1ea1f0fa3de01b594c9105c8e9fdf66271d4ea8811\" returns successfully" Mar 17 18:41:48.353710 env[1208]: time="2025-03-17T18:41:48.353663880Z" level=info msg="StartContainer for \"b131f8a9a01af599e7d240b0f45e4a64f0a2043059a7a7df0a69143ece0ec30a\" returns successfully" Mar 17 18:41:48.839786 kubelet[1555]: E0317 18:41:48.839615 1555 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 17 18:41:48.839786 kubelet[1555]: E0317 18:41:48.839720 1555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:41:48.841508 kubelet[1555]: E0317 18:41:48.841380 1555 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 17 18:41:48.841508 kubelet[1555]: E0317 18:41:48.841457 1555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:41:48.842866 kubelet[1555]: E0317 18:41:48.842754 1555 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 17 18:41:48.842866 kubelet[1555]: E0317 18:41:48.842819 1555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:41:49.296457 kubelet[1555]: I0317 18:41:49.296348 1555 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Mar 17 18:41:49.437122 kubelet[1555]: E0317 18:41:49.437072 1555 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" 
not found" node="localhost" Mar 17 18:41:49.524315 kubelet[1555]: I0317 18:41:49.524262 1555 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Mar 17 18:41:49.617253 kubelet[1555]: I0317 18:41:49.617213 1555 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 17 18:41:49.620924 kubelet[1555]: E0317 18:41:49.620887 1555 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Mar 17 18:41:49.620924 kubelet[1555]: I0317 18:41:49.620908 1555 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 17 18:41:49.622218 kubelet[1555]: E0317 18:41:49.622187 1555 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Mar 17 18:41:49.622218 kubelet[1555]: I0317 18:41:49.622219 1555 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 17 18:41:49.623400 kubelet[1555]: E0317 18:41:49.623368 1555 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Mar 17 18:41:49.795611 kubelet[1555]: I0317 18:41:49.795566 1555 apiserver.go:52] "Watching apiserver" Mar 17 18:41:49.818620 kubelet[1555]: I0317 18:41:49.818555 1555 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 18:41:49.844124 kubelet[1555]: I0317 18:41:49.844086 1555 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 17 18:41:49.844584 kubelet[1555]: I0317 18:41:49.844200 1555 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 17 18:41:49.845880 kubelet[1555]: E0317 18:41:49.845856 1555 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Mar 17 18:41:49.846022 kubelet[1555]: E0317 18:41:49.845986 1555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:41:49.846311 kubelet[1555]: E0317 18:41:49.846273 1555 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Mar 17 18:41:49.846509 kubelet[1555]: E0317 18:41:49.846455 1555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:41:50.845993 kubelet[1555]: I0317 18:41:50.845960 1555 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 17 18:41:50.849378 kubelet[1555]: E0317 18:41:50.849351 1555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:41:51.663004 systemd[1]: Reloading. 
Mar 17 18:41:51.727916 /usr/lib/systemd/system-generators/torcx-generator[1856]: time="2025-03-17T18:41:51Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 18:41:51.727956 /usr/lib/systemd/system-generators/torcx-generator[1856]: time="2025-03-17T18:41:51Z" level=info msg="torcx already run" Mar 17 18:41:51.794842 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 18:41:51.794860 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 18:41:51.813828 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 18:41:51.847045 kubelet[1555]: E0317 18:41:51.847003 1555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:41:51.913419 systemd[1]: Stopping kubelet.service... Mar 17 18:41:51.935459 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 18:41:51.935635 systemd[1]: Stopped kubelet.service. Mar 17 18:41:51.937211 systemd[1]: Starting kubelet.service... Mar 17 18:41:52.018404 systemd[1]: Started kubelet.service. Mar 17 18:41:52.058929 kubelet[1899]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 18:41:52.058929 kubelet[1899]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 17 18:41:52.058929 kubelet[1899]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 18:41:52.059341 kubelet[1899]: I0317 18:41:52.059053 1899 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 18:41:52.070884 kubelet[1899]: I0317 18:41:52.070845 1899 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Mar 17 18:41:52.070884 kubelet[1899]: I0317 18:41:52.070877 1899 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 18:41:52.071212 kubelet[1899]: I0317 18:41:52.071191 1899 server.go:954] "Client rotation is on, will bootstrap in background" Mar 17 18:41:52.072359 kubelet[1899]: I0317 18:41:52.072336 1899 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
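"Client rotation is on" and the certificate_store load above refer to the kubelet's bootstrapped client certificate at /var/lib/kubelet/pki/kubelet-client-current.pem. A small standalone check of that file's validity window, using only the Go standard library:

```go
// Sketch: inspect the kubelet client certificate's subject and expiry.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
	if err != nil {
		panic(err)
	}
	// The file holds cert and key; take the first CERTIFICATE block.
	for block, rest := pem.Decode(data); block != nil; block, rest = pem.Decode(rest) {
		if block.Type != "CERTIFICATE" {
			continue
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		fmt.Printf("subject=%s notAfter=%s\n", cert.Subject, cert.NotAfter)
		break
	}
}
```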
Mar 17 18:41:52.074241 kubelet[1899]: I0317 18:41:52.074210 1899 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 18:41:52.077336 kubelet[1899]: E0317 18:41:52.077308 1899 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 17 18:41:52.077395 kubelet[1899]: I0317 18:41:52.077337 1899 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 17 18:41:52.082061 kubelet[1899]: I0317 18:41:52.082034 1899 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 17 18:41:52.082270 kubelet[1899]: I0317 18:41:52.082239 1899 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 18:41:52.082514 kubelet[1899]: I0317 18:41:52.082267 1899 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 17 18:41:52.082514 kubelet[1899]: I0317 18:41:52.082515 1899 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 18:41:52.082647 kubelet[1899]: I0317 18:41:52.082525 1899 container_manager_linux.go:304] "Creating device plugin manager" Mar 17 18:41:52.082647 kubelet[1899]: I0317 18:41:52.082570 1899 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:41:52.082998 kubelet[1899]: I0317 18:41:52.082966 1899 kubelet.go:446] "Attempting to sync node with API server" Mar 17 18:41:52.082998 kubelet[1899]: I0317 18:41:52.082995 1899 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 18:41:52.083223 kubelet[1899]: I0317 18:41:52.083021 1899 kubelet.go:352] "Adding apiserver pod source" Mar 17 18:41:52.083223 kubelet[1899]: I0317 18:41:52.083032 1899 apiserver.go:42] "Waiting for node sync before 
watching apiserver pods" Mar 17 18:41:52.083694 kubelet[1899]: I0317 18:41:52.083651 1899 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Mar 17 18:41:52.084148 kubelet[1899]: I0317 18:41:52.084086 1899 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 18:41:52.084649 kubelet[1899]: I0317 18:41:52.084601 1899 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 17 18:41:52.084649 kubelet[1899]: I0317 18:41:52.084633 1899 server.go:1287] "Started kubelet" Mar 17 18:41:52.087463 kubelet[1899]: I0317 18:41:52.087392 1899 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 18:41:52.087733 kubelet[1899]: I0317 18:41:52.087641 1899 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 18:41:52.087733 kubelet[1899]: I0317 18:41:52.087690 1899 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 18:41:52.088575 kubelet[1899]: I0317 18:41:52.088552 1899 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 18:41:52.088667 kubelet[1899]: I0317 18:41:52.088586 1899 server.go:490] "Adding debug handlers to kubelet server" Mar 17 18:41:52.089416 kubelet[1899]: I0317 18:41:52.089392 1899 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 17 18:41:52.092828 kubelet[1899]: E0317 18:41:52.092811 1899 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 18:41:52.092885 kubelet[1899]: I0317 18:41:52.092846 1899 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 17 18:41:52.093018 kubelet[1899]: I0317 18:41:52.093000 1899 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 18:41:52.093270 kubelet[1899]: I0317 18:41:52.093248 1899 reconciler.go:26] "Reconciler: start to sync state" Mar 17 18:41:52.093619 kubelet[1899]: I0317 18:41:52.093585 1899 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 18:41:52.094596 kubelet[1899]: I0317 18:41:52.094581 1899 factory.go:221] Registration of the containerd container factory successfully Mar 17 18:41:52.094668 kubelet[1899]: I0317 18:41:52.094654 1899 factory.go:221] Registration of the systemd container factory successfully Mar 17 18:41:52.095504 kubelet[1899]: E0317 18:41:52.095478 1899 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 18:41:52.104825 kubelet[1899]: I0317 18:41:52.104773 1899 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 18:41:52.111063 kubelet[1899]: I0317 18:41:52.111021 1899 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 17 18:41:52.111063 kubelet[1899]: I0317 18:41:52.111066 1899 status_manager.go:227] "Starting to sync pod status with apiserver" Mar 17 18:41:52.111197 kubelet[1899]: I0317 18:41:52.111096 1899 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
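The NodeConfig dump above includes the default hard eviction thresholds (memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%). A toy evaluation of the memory signal against those thresholds; the observed value is a made-up input, not the eviction manager's real data path:

```go
// Toy model: checking an eviction signal against a hard threshold.
package main

import "fmt"

type threshold struct {
	signal   string
	minBytes int64
}

func main() {
	hard := []threshold{{signal: "memory.available", minBytes: 100 << 20}} // 100Mi
	observed := map[string]int64{"memory.available": 512 << 20}            // assumed: 512Mi free

	for _, t := range hard {
		if observed[t.signal] < t.minBytes {
			fmt.Printf("eviction signal %s crossed (%d < %d)\n",
				t.signal, observed[t.signal], t.minBytes)
		} else {
			fmt.Printf("signal %s ok\n", t.signal)
		}
	}
}
```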
Mar 17 18:41:52.111197 kubelet[1899]: I0317 18:41:52.111103 1899 kubelet.go:2388] "Starting kubelet main sync loop" Mar 17 18:41:52.111197 kubelet[1899]: E0317 18:41:52.111190 1899 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 18:41:52.122427 kubelet[1899]: I0317 18:41:52.122384 1899 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 17 18:41:52.122584 kubelet[1899]: I0317 18:41:52.122552 1899 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 17 18:41:52.122584 kubelet[1899]: I0317 18:41:52.122584 1899 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:41:52.122726 kubelet[1899]: I0317 18:41:52.122702 1899 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 17 18:41:52.122726 kubelet[1899]: I0317 18:41:52.122716 1899 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 17 18:41:52.122777 kubelet[1899]: I0317 18:41:52.122734 1899 policy_none.go:49] "None policy: Start" Mar 17 18:41:52.122777 kubelet[1899]: I0317 18:41:52.122743 1899 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 17 18:41:52.122777 kubelet[1899]: I0317 18:41:52.122753 1899 state_mem.go:35] "Initializing new in-memory state store" Mar 17 18:41:52.122843 kubelet[1899]: I0317 18:41:52.122833 1899 state_mem.go:75] "Updated machine memory state" Mar 17 18:41:52.126306 kubelet[1899]: I0317 18:41:52.126282 1899 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 18:41:52.126445 kubelet[1899]: I0317 18:41:52.126422 1899 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 17 18:41:52.126473 kubelet[1899]: I0317 18:41:52.126438 1899 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 18:41:52.126895 kubelet[1899]: I0317 18:41:52.126878 1899 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 18:41:52.127645 kubelet[1899]: E0317 18:41:52.127614 1899 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 17 18:41:52.212549 kubelet[1899]: I0317 18:41:52.212436 1899 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 17 18:41:52.212549 kubelet[1899]: I0317 18:41:52.212531 1899 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 17 18:41:52.214460 kubelet[1899]: I0317 18:41:52.213839 1899 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 17 18:41:52.220227 kubelet[1899]: E0317 18:41:52.220118 1899 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 17 18:41:52.242382 kubelet[1899]: I0317 18:41:52.242336 1899 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Mar 17 18:41:52.247438 kubelet[1899]: I0317 18:41:52.247411 1899 kubelet_node_status.go:125] "Node was previously registered" node="localhost" Mar 17 18:41:52.247526 kubelet[1899]: I0317 18:41:52.247499 1899 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Mar 17 18:41:52.294311 kubelet[1899]: I0317 18:41:52.294251 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66f97343cb741ccaeaea7bda56ca63fe-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"66f97343cb741ccaeaea7bda56ca63fe\") " pod="kube-system/kube-apiserver-localhost" Mar 17 18:41:52.294311 kubelet[1899]: I0317 18:41:52.294296 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66f97343cb741ccaeaea7bda56ca63fe-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"66f97343cb741ccaeaea7bda56ca63fe\") " pod="kube-system/kube-apiserver-localhost" Mar 17 18:41:52.294491 kubelet[1899]: I0317 18:41:52.294321 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 18:41:52.294491 kubelet[1899]: I0317 18:41:52.294360 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 18:41:52.294491 kubelet[1899]: I0317 18:41:52.294407 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 18:41:52.294491 kubelet[1899]: I0317 18:41:52.294424 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3700e556aa2777679a324159272023f1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"3700e556aa2777679a324159272023f1\") " 
pod="kube-system/kube-scheduler-localhost" Mar 17 18:41:52.294491 kubelet[1899]: I0317 18:41:52.294438 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66f97343cb741ccaeaea7bda56ca63fe-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"66f97343cb741ccaeaea7bda56ca63fe\") " pod="kube-system/kube-apiserver-localhost" Mar 17 18:41:52.294701 kubelet[1899]: I0317 18:41:52.294453 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 18:41:52.294701 kubelet[1899]: I0317 18:41:52.294471 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 18:41:52.521133 kubelet[1899]: E0317 18:41:52.521005 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:41:52.521133 kubelet[1899]: E0317 18:41:52.521092 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:41:52.521298 kubelet[1899]: E0317 18:41:52.521247 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:41:52.662251 sudo[1933]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 17 18:41:52.662444 sudo[1933]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Mar 17 18:41:53.084607 kubelet[1899]: I0317 18:41:53.084548 1899 apiserver.go:52] "Watching apiserver" Mar 17 18:41:53.093424 kubelet[1899]: I0317 18:41:53.093382 1899 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 18:41:53.122746 kubelet[1899]: I0317 18:41:53.122702 1899 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 17 18:41:53.122953 kubelet[1899]: I0317 18:41:53.122923 1899 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 17 18:41:53.123191 kubelet[1899]: I0317 18:41:53.123169 1899 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 17 18:41:53.130349 kubelet[1899]: E0317 18:41:53.130295 1899 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 17 18:41:53.130537 kubelet[1899]: E0317 18:41:53.130521 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:41:53.130812 kubelet[1899]: E0317 18:41:53.130789 1899 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already 
exists" pod="kube-system/kube-scheduler-localhost" Mar 17 18:41:53.131084 kubelet[1899]: E0317 18:41:53.131056 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:41:53.131744 kubelet[1899]: E0317 18:41:53.131724 1899 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Mar 17 18:41:53.131949 kubelet[1899]: E0317 18:41:53.131909 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:41:53.133667 sudo[1933]: pam_unix(sudo:session): session closed for user root Mar 17 18:41:53.148857 kubelet[1899]: I0317 18:41:53.148786 1899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.148764766 podStartE2EDuration="1.148764766s" podCreationTimestamp="2025-03-17 18:41:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:41:53.141842314 +0000 UTC m=+1.118733656" watchObservedRunningTime="2025-03-17 18:41:53.148764766 +0000 UTC m=+1.125656108" Mar 17 18:41:53.156263 kubelet[1899]: I0317 18:41:53.156216 1899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.156198306 podStartE2EDuration="1.156198306s" podCreationTimestamp="2025-03-17 18:41:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:41:53.149101086 +0000 UTC m=+1.125992429" watchObservedRunningTime="2025-03-17 18:41:53.156198306 +0000 UTC m=+1.133089648" Mar 17 18:41:53.165460 kubelet[1899]: I0317 18:41:53.165421 1899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.1654083379999998 podStartE2EDuration="3.165408338s" podCreationTimestamp="2025-03-17 18:41:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:41:53.156506194 +0000 UTC m=+1.133397536" watchObservedRunningTime="2025-03-17 18:41:53.165408338 +0000 UTC m=+1.142299670" Mar 17 18:41:54.124413 kubelet[1899]: E0317 18:41:54.124370 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:41:54.124802 kubelet[1899]: E0317 18:41:54.124578 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:41:54.124802 kubelet[1899]: E0317 18:41:54.124588 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:41:54.837047 sudo[1305]: pam_unix(sudo:session): session closed for user root Mar 17 18:41:54.838222 sshd[1302]: pam_unix(sshd:session): session closed for user core Mar 17 18:41:54.840646 systemd[1]: sshd@4-10.0.0.88:22-10.0.0.1:48386.service: Deactivated successfully. 
Mar 17 18:41:54.841470 systemd[1]: session-5.scope: Deactivated successfully. Mar 17 18:41:54.841602 systemd[1]: session-5.scope: Consumed 4.730s CPU time. Mar 17 18:41:54.842255 systemd-logind[1193]: Session 5 logged out. Waiting for processes to exit. Mar 17 18:41:54.843058 systemd-logind[1193]: Removed session 5. Mar 17 18:41:56.353759 kubelet[1899]: E0317 18:41:56.353721 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:41:57.529482 kubelet[1899]: E0317 18:41:57.529445 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:41:58.373539 kubelet[1899]: I0317 18:41:58.373502 1899 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 17 18:41:58.373976 env[1208]: time="2025-03-17T18:41:58.373936039Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 17 18:41:58.374388 kubelet[1899]: I0317 18:41:58.374226 1899 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 17 18:41:59.116337 systemd[1]: Created slice kubepods-besteffort-podc822c83d_c708_4d83_b061_34b918e6da62.slice. Mar 17 18:41:59.128794 systemd[1]: Created slice kubepods-burstable-pod7f557670_6ff9_4cba_8ec5_9ea555a65a13.slice. Mar 17 18:41:59.139625 kubelet[1899]: I0317 18:41:59.139580 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7f557670-6ff9-4cba-8ec5-9ea555a65a13-cilium-cgroup\") pod \"cilium-wr22t\" (UID: \"7f557670-6ff9-4cba-8ec5-9ea555a65a13\") " pod="kube-system/cilium-wr22t" Mar 17 18:41:59.139625 kubelet[1899]: I0317 18:41:59.139622 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7f557670-6ff9-4cba-8ec5-9ea555a65a13-etc-cni-netd\") pod \"cilium-wr22t\" (UID: \"7f557670-6ff9-4cba-8ec5-9ea555a65a13\") " pod="kube-system/cilium-wr22t" Mar 17 18:41:59.140052 kubelet[1899]: I0317 18:41:59.139641 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqzx7\" (UniqueName: \"kubernetes.io/projected/7f557670-6ff9-4cba-8ec5-9ea555a65a13-kube-api-access-nqzx7\") pod \"cilium-wr22t\" (UID: \"7f557670-6ff9-4cba-8ec5-9ea555a65a13\") " pod="kube-system/cilium-wr22t" Mar 17 18:41:59.140052 kubelet[1899]: I0317 18:41:59.139694 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7f557670-6ff9-4cba-8ec5-9ea555a65a13-cni-path\") pod \"cilium-wr22t\" (UID: \"7f557670-6ff9-4cba-8ec5-9ea555a65a13\") " pod="kube-system/cilium-wr22t" Mar 17 18:41:59.140052 kubelet[1899]: I0317 18:41:59.139722 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7f557670-6ff9-4cba-8ec5-9ea555a65a13-xtables-lock\") pod \"cilium-wr22t\" (UID: \"7f557670-6ff9-4cba-8ec5-9ea555a65a13\") " pod="kube-system/cilium-wr22t" Mar 17 18:41:59.140052 kubelet[1899]: I0317 18:41:59.139744 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c822c83d-c708-4d83-b061-34b918e6da62-kube-proxy\") pod \"kube-proxy-db72n\" (UID: \"c822c83d-c708-4d83-b061-34b918e6da62\") " pod="kube-system/kube-proxy-db72n" Mar 17 18:41:59.140052 kubelet[1899]: I0317 18:41:59.139756 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c822c83d-c708-4d83-b061-34b918e6da62-lib-modules\") pod \"kube-proxy-db72n\" (UID: \"c822c83d-c708-4d83-b061-34b918e6da62\") " pod="kube-system/kube-proxy-db72n" Mar 17 18:41:59.140052 kubelet[1899]: I0317 18:41:59.139771 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7f557670-6ff9-4cba-8ec5-9ea555a65a13-hostproc\") pod \"cilium-wr22t\" (UID: \"7f557670-6ff9-4cba-8ec5-9ea555a65a13\") " pod="kube-system/cilium-wr22t" Mar 17 18:41:59.140266 kubelet[1899]: I0317 18:41:59.139794 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7f557670-6ff9-4cba-8ec5-9ea555a65a13-bpf-maps\") pod \"cilium-wr22t\" (UID: \"7f557670-6ff9-4cba-8ec5-9ea555a65a13\") " pod="kube-system/cilium-wr22t" Mar 17 18:41:59.140266 kubelet[1899]: I0317 18:41:59.139808 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7f557670-6ff9-4cba-8ec5-9ea555a65a13-clustermesh-secrets\") pod \"cilium-wr22t\" (UID: \"7f557670-6ff9-4cba-8ec5-9ea555a65a13\") " pod="kube-system/cilium-wr22t" Mar 17 18:41:59.140266 kubelet[1899]: I0317 18:41:59.139823 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2bnz\" (UniqueName: \"kubernetes.io/projected/c822c83d-c708-4d83-b061-34b918e6da62-kube-api-access-n2bnz\") pod \"kube-proxy-db72n\" (UID: \"c822c83d-c708-4d83-b061-34b918e6da62\") " pod="kube-system/kube-proxy-db72n" Mar 17 18:41:59.140266 kubelet[1899]: I0317 18:41:59.139838 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7f557670-6ff9-4cba-8ec5-9ea555a65a13-cilium-run\") pod \"cilium-wr22t\" (UID: \"7f557670-6ff9-4cba-8ec5-9ea555a65a13\") " pod="kube-system/cilium-wr22t" Mar 17 18:41:59.140266 kubelet[1899]: I0317 18:41:59.139851 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7f557670-6ff9-4cba-8ec5-9ea555a65a13-host-proc-sys-kernel\") pod \"cilium-wr22t\" (UID: \"7f557670-6ff9-4cba-8ec5-9ea555a65a13\") " pod="kube-system/cilium-wr22t" Mar 17 18:41:59.140401 kubelet[1899]: I0317 18:41:59.139864 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c822c83d-c708-4d83-b061-34b918e6da62-xtables-lock\") pod \"kube-proxy-db72n\" (UID: \"c822c83d-c708-4d83-b061-34b918e6da62\") " pod="kube-system/kube-proxy-db72n" Mar 17 18:41:59.140401 kubelet[1899]: I0317 18:41:59.139876 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7f557670-6ff9-4cba-8ec5-9ea555a65a13-lib-modules\") pod \"cilium-wr22t\" (UID: 
\"7f557670-6ff9-4cba-8ec5-9ea555a65a13\") " pod="kube-system/cilium-wr22t" Mar 17 18:41:59.140401 kubelet[1899]: I0317 18:41:59.139897 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7f557670-6ff9-4cba-8ec5-9ea555a65a13-cilium-config-path\") pod \"cilium-wr22t\" (UID: \"7f557670-6ff9-4cba-8ec5-9ea555a65a13\") " pod="kube-system/cilium-wr22t" Mar 17 18:41:59.140401 kubelet[1899]: I0317 18:41:59.139909 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7f557670-6ff9-4cba-8ec5-9ea555a65a13-host-proc-sys-net\") pod \"cilium-wr22t\" (UID: \"7f557670-6ff9-4cba-8ec5-9ea555a65a13\") " pod="kube-system/cilium-wr22t" Mar 17 18:41:59.140401 kubelet[1899]: I0317 18:41:59.139961 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7f557670-6ff9-4cba-8ec5-9ea555a65a13-hubble-tls\") pod \"cilium-wr22t\" (UID: \"7f557670-6ff9-4cba-8ec5-9ea555a65a13\") " pod="kube-system/cilium-wr22t" Mar 17 18:41:59.241327 kubelet[1899]: I0317 18:41:59.241279 1899 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Mar 17 18:41:59.247476 kubelet[1899]: E0317 18:41:59.247448 1899 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Mar 17 18:41:59.247631 kubelet[1899]: E0317 18:41:59.247615 1899 projected.go:194] Error preparing data for projected volume kube-api-access-n2bnz for pod kube-system/kube-proxy-db72n: configmap "kube-root-ca.crt" not found Mar 17 18:41:59.247758 kubelet[1899]: E0317 18:41:59.247741 1899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c822c83d-c708-4d83-b061-34b918e6da62-kube-api-access-n2bnz podName:c822c83d-c708-4d83-b061-34b918e6da62 nodeName:}" failed. No retries permitted until 2025-03-17 18:41:59.747721134 +0000 UTC m=+7.724612476 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-n2bnz" (UniqueName: "kubernetes.io/projected/c822c83d-c708-4d83-b061-34b918e6da62-kube-api-access-n2bnz") pod "kube-proxy-db72n" (UID: "c822c83d-c708-4d83-b061-34b918e6da62") : configmap "kube-root-ca.crt" not found Mar 17 18:41:59.248599 kubelet[1899]: E0317 18:41:59.248584 1899 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Mar 17 18:41:59.248678 kubelet[1899]: E0317 18:41:59.248663 1899 projected.go:194] Error preparing data for projected volume kube-api-access-nqzx7 for pod kube-system/cilium-wr22t: configmap "kube-root-ca.crt" not found Mar 17 18:41:59.248778 kubelet[1899]: E0317 18:41:59.248763 1899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7f557670-6ff9-4cba-8ec5-9ea555a65a13-kube-api-access-nqzx7 podName:7f557670-6ff9-4cba-8ec5-9ea555a65a13 nodeName:}" failed. No retries permitted until 2025-03-17 18:41:59.748753407 +0000 UTC m=+7.725644749 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nqzx7" (UniqueName: "kubernetes.io/projected/7f557670-6ff9-4cba-8ec5-9ea555a65a13-kube-api-access-nqzx7") pod "cilium-wr22t" (UID: "7f557670-6ff9-4cba-8ec5-9ea555a65a13") : configmap "kube-root-ca.crt" not found Mar 17 18:41:59.437966 systemd[1]: Created slice kubepods-besteffort-pod06bff352_5055_4d90_aed6_7b0b9753ac82.slice. Mar 17 18:41:59.441353 kubelet[1899]: I0317 18:41:59.441330 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp5wp\" (UniqueName: \"kubernetes.io/projected/06bff352-5055-4d90-aed6-7b0b9753ac82-kube-api-access-dp5wp\") pod \"cilium-operator-6c4d7847fc-p4pcx\" (UID: \"06bff352-5055-4d90-aed6-7b0b9753ac82\") " pod="kube-system/cilium-operator-6c4d7847fc-p4pcx" Mar 17 18:41:59.441536 kubelet[1899]: I0317 18:41:59.441521 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/06bff352-5055-4d90-aed6-7b0b9753ac82-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-p4pcx\" (UID: \"06bff352-5055-4d90-aed6-7b0b9753ac82\") " pod="kube-system/cilium-operator-6c4d7847fc-p4pcx" Mar 17 18:41:59.742452 kubelet[1899]: E0317 18:41:59.742092 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:41:59.742771 env[1208]: time="2025-03-17T18:41:59.742709680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-p4pcx,Uid:06bff352-5055-4d90-aed6-7b0b9753ac82,Namespace:kube-system,Attempt:0,}" Mar 17 18:41:59.764388 env[1208]: time="2025-03-17T18:41:59.764315318Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:41:59.764388 env[1208]: time="2025-03-17T18:41:59.764356357Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:41:59.764388 env[1208]: time="2025-03-17T18:41:59.764367377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:41:59.764613 env[1208]: time="2025-03-17T18:41:59.764494312Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9096e82f36b6e4fb868b9f2149d4763d248e25064144deb0757d7947ca07beaa pid=1986 runtime=io.containerd.runc.v2 Mar 17 18:41:59.775904 systemd[1]: Started cri-containerd-9096e82f36b6e4fb868b9f2149d4763d248e25064144deb0757d7947ca07beaa.scope. 
Mar 17 18:41:59.813486 env[1208]: time="2025-03-17T18:41:59.813421556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-p4pcx,Uid:06bff352-5055-4d90-aed6-7b0b9753ac82,Namespace:kube-system,Attempt:0,} returns sandbox id \"9096e82f36b6e4fb868b9f2149d4763d248e25064144deb0757d7947ca07beaa\"" Mar 17 18:41:59.814179 kubelet[1899]: E0317 18:41:59.814134 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:41:59.815123 env[1208]: time="2025-03-17T18:41:59.815070934Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 17 18:42:00.025895 kubelet[1899]: E0317 18:42:00.025766 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:42:00.026717 env[1208]: time="2025-03-17T18:42:00.026679168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-db72n,Uid:c822c83d-c708-4d83-b061-34b918e6da62,Namespace:kube-system,Attempt:0,}" Mar 17 18:42:00.031378 kubelet[1899]: E0317 18:42:00.031359 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:42:00.031699 env[1208]: time="2025-03-17T18:42:00.031652155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wr22t,Uid:7f557670-6ff9-4cba-8ec5-9ea555a65a13,Namespace:kube-system,Attempt:0,}" Mar 17 18:42:00.044517 env[1208]: time="2025-03-17T18:42:00.044453196Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:42:00.044660 env[1208]: time="2025-03-17T18:42:00.044517720Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:42:00.044660 env[1208]: time="2025-03-17T18:42:00.044531375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:42:00.044756 env[1208]: time="2025-03-17T18:42:00.044687185Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b4297770c453bf92f2f3ed4519f75c474e0065c377e9daa715eecb94275a8a2c pid=2030 runtime=io.containerd.runc.v2 Mar 17 18:42:00.050273 env[1208]: time="2025-03-17T18:42:00.050209976Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:42:00.050273 env[1208]: time="2025-03-17T18:42:00.050270803Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:42:00.050460 env[1208]: time="2025-03-17T18:42:00.050292184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:42:00.050460 env[1208]: time="2025-03-17T18:42:00.050396534Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2942f339b97de465ec206ecbaca34fa0be7c4f7ca7774c286ff7fccf1e8e5273 pid=2049 runtime=io.containerd.runc.v2 Mar 17 18:42:00.054889 systemd[1]: Started cri-containerd-b4297770c453bf92f2f3ed4519f75c474e0065c377e9daa715eecb94275a8a2c.scope. Mar 17 18:42:00.062463 systemd[1]: Started cri-containerd-2942f339b97de465ec206ecbaca34fa0be7c4f7ca7774c286ff7fccf1e8e5273.scope. Mar 17 18:42:00.081318 env[1208]: time="2025-03-17T18:42:00.080781298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-db72n,Uid:c822c83d-c708-4d83-b061-34b918e6da62,Namespace:kube-system,Attempt:0,} returns sandbox id \"b4297770c453bf92f2f3ed4519f75c474e0065c377e9daa715eecb94275a8a2c\"" Mar 17 18:42:00.081546 kubelet[1899]: E0317 18:42:00.081373 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:42:00.083525 env[1208]: time="2025-03-17T18:42:00.083473532Z" level=info msg="CreateContainer within sandbox \"b4297770c453bf92f2f3ed4519f75c474e0065c377e9daa715eecb94275a8a2c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 17 18:42:00.094559 env[1208]: time="2025-03-17T18:42:00.094506261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wr22t,Uid:7f557670-6ff9-4cba-8ec5-9ea555a65a13,Namespace:kube-system,Attempt:0,} returns sandbox id \"2942f339b97de465ec206ecbaca34fa0be7c4f7ca7774c286ff7fccf1e8e5273\"" Mar 17 18:42:00.095070 kubelet[1899]: E0317 18:42:00.095046 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:42:00.106506 env[1208]: time="2025-03-17T18:42:00.106435189Z" level=info msg="CreateContainer within sandbox \"b4297770c453bf92f2f3ed4519f75c474e0065c377e9daa715eecb94275a8a2c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ff29384121415709107c432bef78504ccfca7e67ffe8b1c25a03736bf0a00e76\"" Mar 17 18:42:00.107178 env[1208]: time="2025-03-17T18:42:00.107080917Z" level=info msg="StartContainer for \"ff29384121415709107c432bef78504ccfca7e67ffe8b1c25a03736bf0a00e76\"" Mar 17 18:42:00.121316 systemd[1]: Started cri-containerd-ff29384121415709107c432bef78504ccfca7e67ffe8b1c25a03736bf0a00e76.scope. 
Mar 17 18:42:00.145197 env[1208]: time="2025-03-17T18:42:00.145114430Z" level=info msg="StartContainer for \"ff29384121415709107c432bef78504ccfca7e67ffe8b1c25a03736bf0a00e76\" returns successfully" Mar 17 18:42:01.137581 kubelet[1899]: E0317 18:42:01.137551 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:42:01.146442 kubelet[1899]: I0317 18:42:01.146400 1899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-db72n" podStartSLOduration=2.146382323 podStartE2EDuration="2.146382323s" podCreationTimestamp="2025-03-17 18:41:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:42:01.146194123 +0000 UTC m=+9.123085465" watchObservedRunningTime="2025-03-17 18:42:01.146382323 +0000 UTC m=+9.123273655" Mar 17 18:42:01.548467 kubelet[1899]: E0317 18:42:01.548106 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:42:02.138619 kubelet[1899]: E0317 18:42:02.138580 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:42:02.138961 kubelet[1899]: E0317 18:42:02.138736 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:42:02.978244 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2304497670.mount: Deactivated successfully. 
Mar 17 18:42:03.833379 env[1208]: time="2025-03-17T18:42:03.833320077Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:42:03.835647 env[1208]: time="2025-03-17T18:42:03.835586377Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:42:03.837378 env[1208]: time="2025-03-17T18:42:03.837328405Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:42:03.837741 env[1208]: time="2025-03-17T18:42:03.837702630Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Mar 17 18:42:03.838865 env[1208]: time="2025-03-17T18:42:03.838831457Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 17 18:42:03.839721 env[1208]: time="2025-03-17T18:42:03.839688644Z" level=info msg="CreateContainer within sandbox \"9096e82f36b6e4fb868b9f2149d4763d248e25064144deb0757d7947ca07beaa\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 17 18:42:03.852607 env[1208]: time="2025-03-17T18:42:03.852562378Z" level=info msg="CreateContainer within sandbox \"9096e82f36b6e4fb868b9f2149d4763d248e25064144deb0757d7947ca07beaa\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"a35b8b8a4aab1c8a21858357990daef521ea97fa8c4ea2078497cad3405f40b5\"" Mar 17 18:42:03.853181 env[1208]: time="2025-03-17T18:42:03.853072222Z" level=info msg="StartContainer for \"a35b8b8a4aab1c8a21858357990daef521ea97fa8c4ea2078497cad3405f40b5\"" Mar 17 18:42:03.866955 systemd[1]: Started cri-containerd-a35b8b8a4aab1c8a21858357990daef521ea97fa8c4ea2078497cad3405f40b5.scope. 
Mar 17 18:42:03.889266 env[1208]: time="2025-03-17T18:42:03.889205701Z" level=info msg="StartContainer for \"a35b8b8a4aab1c8a21858357990daef521ea97fa8c4ea2078497cad3405f40b5\" returns successfully" Mar 17 18:42:04.144409 kubelet[1899]: E0317 18:42:04.143898 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:42:05.145357 kubelet[1899]: E0317 18:42:05.145315 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:42:06.358066 kubelet[1899]: E0317 18:42:06.357973 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:42:06.367042 kubelet[1899]: I0317 18:42:06.366993 1899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-p4pcx" podStartSLOduration=3.343102687 podStartE2EDuration="7.366978552s" podCreationTimestamp="2025-03-17 18:41:59 +0000 UTC" firstStartedPulling="2025-03-17 18:41:59.814731353 +0000 UTC m=+7.791622695" lastFinishedPulling="2025-03-17 18:42:03.838607218 +0000 UTC m=+11.815498560" observedRunningTime="2025-03-17 18:42:04.15141369 +0000 UTC m=+12.128305032" watchObservedRunningTime="2025-03-17 18:42:06.366978552 +0000 UTC m=+14.343869914" Mar 17 18:42:07.148776 kubelet[1899]: E0317 18:42:07.148738 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:42:07.600365 kubelet[1899]: E0317 18:42:07.600329 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:42:08.445726 update_engine[1196]: I0317 18:42:08.445669 1196 update_attempter.cc:509] Updating boot flags... Mar 17 18:42:11.043673 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2706959708.mount: Deactivated successfully. 
Mar 17 18:42:14.308842 env[1208]: time="2025-03-17T18:42:14.308770006Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:42:14.311222 env[1208]: time="2025-03-17T18:42:14.311185597Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:42:14.314456 env[1208]: time="2025-03-17T18:42:14.314431932Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:42:14.314968 env[1208]: time="2025-03-17T18:42:14.314942487Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Mar 17 18:42:14.321881 env[1208]: time="2025-03-17T18:42:14.321844640Z" level=info msg="CreateContainer within sandbox \"2942f339b97de465ec206ecbaca34fa0be7c4f7ca7774c286ff7fccf1e8e5273\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 18:42:14.336596 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1070738120.mount: Deactivated successfully. Mar 17 18:42:14.337934 env[1208]: time="2025-03-17T18:42:14.337881491Z" level=info msg="CreateContainer within sandbox \"2942f339b97de465ec206ecbaca34fa0be7c4f7ca7774c286ff7fccf1e8e5273\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"395475e11573e35094888432f96b427495e802e1236af93653529def8cec1fd5\"" Mar 17 18:42:14.338393 env[1208]: time="2025-03-17T18:42:14.338367932Z" level=info msg="StartContainer for \"395475e11573e35094888432f96b427495e802e1236af93653529def8cec1fd5\"" Mar 17 18:42:14.359459 systemd[1]: Started cri-containerd-395475e11573e35094888432f96b427495e802e1236af93653529def8cec1fd5.scope. Mar 17 18:42:14.390633 systemd[1]: cri-containerd-395475e11573e35094888432f96b427495e802e1236af93653529def8cec1fd5.scope: Deactivated successfully. 
Mar 17 18:42:14.426288 env[1208]: time="2025-03-17T18:42:14.426249304Z" level=info msg="StartContainer for \"395475e11573e35094888432f96b427495e802e1236af93653529def8cec1fd5\" returns successfully" Mar 17 18:42:14.724993 env[1208]: time="2025-03-17T18:42:14.724854148Z" level=info msg="shim disconnected" id=395475e11573e35094888432f96b427495e802e1236af93653529def8cec1fd5 Mar 17 18:42:14.724993 env[1208]: time="2025-03-17T18:42:14.724905055Z" level=warning msg="cleaning up after shim disconnected" id=395475e11573e35094888432f96b427495e802e1236af93653529def8cec1fd5 namespace=k8s.io Mar 17 18:42:14.724993 env[1208]: time="2025-03-17T18:42:14.724917398Z" level=info msg="cleaning up dead shim" Mar 17 18:42:14.731413 env[1208]: time="2025-03-17T18:42:14.731370040Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:42:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2379 runtime=io.containerd.runc.v2\n" Mar 17 18:42:15.160157 kubelet[1899]: E0317 18:42:15.160118 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:42:15.161690 env[1208]: time="2025-03-17T18:42:15.161649820Z" level=info msg="CreateContainer within sandbox \"2942f339b97de465ec206ecbaca34fa0be7c4f7ca7774c286ff7fccf1e8e5273\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 17 18:42:15.193092 env[1208]: time="2025-03-17T18:42:15.192897696Z" level=info msg="CreateContainer within sandbox \"2942f339b97de465ec206ecbaca34fa0be7c4f7ca7774c286ff7fccf1e8e5273\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"dfcb134671e131ad7f94806bdcd705178b45c196e2de1b43277c880b37e92363\"" Mar 17 18:42:15.194099 env[1208]: time="2025-03-17T18:42:15.194045537Z" level=info msg="StartContainer for \"dfcb134671e131ad7f94806bdcd705178b45c196e2de1b43277c880b37e92363\"" Mar 17 18:42:15.212779 systemd[1]: Started cri-containerd-dfcb134671e131ad7f94806bdcd705178b45c196e2de1b43277c880b37e92363.scope. Mar 17 18:42:15.236532 env[1208]: time="2025-03-17T18:42:15.236490412Z" level=info msg="StartContainer for \"dfcb134671e131ad7f94806bdcd705178b45c196e2de1b43277c880b37e92363\" returns successfully" Mar 17 18:42:15.246690 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 18:42:15.246883 systemd[1]: Stopped systemd-sysctl.service. Mar 17 18:42:15.247268 systemd[1]: Stopping systemd-sysctl.service... Mar 17 18:42:15.248699 systemd[1]: Starting systemd-sysctl.service... Mar 17 18:42:15.250509 systemd[1]: cri-containerd-dfcb134671e131ad7f94806bdcd705178b45c196e2de1b43277c880b37e92363.scope: Deactivated successfully. Mar 17 18:42:15.259700 systemd[1]: Finished systemd-sysctl.service. 
Mar 17 18:42:15.273840 env[1208]: time="2025-03-17T18:42:15.273795224Z" level=info msg="shim disconnected" id=dfcb134671e131ad7f94806bdcd705178b45c196e2de1b43277c880b37e92363 Mar 17 18:42:15.273977 env[1208]: time="2025-03-17T18:42:15.273842433Z" level=warning msg="cleaning up after shim disconnected" id=dfcb134671e131ad7f94806bdcd705178b45c196e2de1b43277c880b37e92363 namespace=k8s.io Mar 17 18:42:15.273977 env[1208]: time="2025-03-17T18:42:15.273853223Z" level=info msg="cleaning up dead shim" Mar 17 18:42:15.279527 env[1208]: time="2025-03-17T18:42:15.279472292Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:42:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2444 runtime=io.containerd.runc.v2\n" Mar 17 18:42:15.333301 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-395475e11573e35094888432f96b427495e802e1236af93653529def8cec1fd5-rootfs.mount: Deactivated successfully. Mar 17 18:42:16.162875 kubelet[1899]: E0317 18:42:16.162838 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:42:16.164443 env[1208]: time="2025-03-17T18:42:16.164400292Z" level=info msg="CreateContainer within sandbox \"2942f339b97de465ec206ecbaca34fa0be7c4f7ca7774c286ff7fccf1e8e5273\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 17 18:42:16.400787 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3153502789.mount: Deactivated successfully. Mar 17 18:42:16.513371 env[1208]: time="2025-03-17T18:42:16.513240313Z" level=info msg="CreateContainer within sandbox \"2942f339b97de465ec206ecbaca34fa0be7c4f7ca7774c286ff7fccf1e8e5273\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"dd614b8543c33ddc1f1ca3794bace163ab3f47419c305ac9b27a7bd7f37904be\"" Mar 17 18:42:16.513782 env[1208]: time="2025-03-17T18:42:16.513740989Z" level=info msg="StartContainer for \"dd614b8543c33ddc1f1ca3794bace163ab3f47419c305ac9b27a7bd7f37904be\"" Mar 17 18:42:16.534703 systemd[1]: run-containerd-runc-k8s.io-dd614b8543c33ddc1f1ca3794bace163ab3f47419c305ac9b27a7bd7f37904be-runc.uSrIlf.mount: Deactivated successfully. Mar 17 18:42:16.538697 systemd[1]: Started cri-containerd-dd614b8543c33ddc1f1ca3794bace163ab3f47419c305ac9b27a7bd7f37904be.scope. Mar 17 18:42:16.565766 env[1208]: time="2025-03-17T18:42:16.565721848Z" level=info msg="StartContainer for \"dd614b8543c33ddc1f1ca3794bace163ab3f47419c305ac9b27a7bd7f37904be\" returns successfully" Mar 17 18:42:16.568009 systemd[1]: cri-containerd-dd614b8543c33ddc1f1ca3794bace163ab3f47419c305ac9b27a7bd7f37904be.scope: Deactivated successfully. 
Mar 17 18:42:16.589877 env[1208]: time="2025-03-17T18:42:16.589827260Z" level=info msg="shim disconnected" id=dd614b8543c33ddc1f1ca3794bace163ab3f47419c305ac9b27a7bd7f37904be Mar 17 18:42:16.590030 env[1208]: time="2025-03-17T18:42:16.589878397Z" level=warning msg="cleaning up after shim disconnected" id=dd614b8543c33ddc1f1ca3794bace163ab3f47419c305ac9b27a7bd7f37904be namespace=k8s.io Mar 17 18:42:16.590030 env[1208]: time="2025-03-17T18:42:16.589890880Z" level=info msg="cleaning up dead shim" Mar 17 18:42:16.596253 env[1208]: time="2025-03-17T18:42:16.596217360Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:42:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2500 runtime=io.containerd.runc.v2\n" Mar 17 18:42:17.167094 kubelet[1899]: E0317 18:42:17.167055 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:42:17.169201 env[1208]: time="2025-03-17T18:42:17.169132416Z" level=info msg="CreateContainer within sandbox \"2942f339b97de465ec206ecbaca34fa0be7c4f7ca7774c286ff7fccf1e8e5273\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 17 18:42:17.399198 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd614b8543c33ddc1f1ca3794bace163ab3f47419c305ac9b27a7bd7f37904be-rootfs.mount: Deactivated successfully. Mar 17 18:42:17.520639 env[1208]: time="2025-03-17T18:42:17.520513378Z" level=info msg="CreateContainer within sandbox \"2942f339b97de465ec206ecbaca34fa0be7c4f7ca7774c286ff7fccf1e8e5273\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c8cad46968e97f09583c952c326c07f91f5a7fdf4a0a89327b3c10ed6632ea43\"" Mar 17 18:42:17.521298 env[1208]: time="2025-03-17T18:42:17.520948039Z" level=info msg="StartContainer for \"c8cad46968e97f09583c952c326c07f91f5a7fdf4a0a89327b3c10ed6632ea43\"" Mar 17 18:42:17.539225 systemd[1]: Started cri-containerd-c8cad46968e97f09583c952c326c07f91f5a7fdf4a0a89327b3c10ed6632ea43.scope. Mar 17 18:42:17.566594 systemd[1]: cri-containerd-c8cad46968e97f09583c952c326c07f91f5a7fdf4a0a89327b3c10ed6632ea43.scope: Deactivated successfully. Mar 17 18:42:17.567299 env[1208]: time="2025-03-17T18:42:17.567231013Z" level=info msg="StartContainer for \"c8cad46968e97f09583c952c326c07f91f5a7fdf4a0a89327b3c10ed6632ea43\" returns successfully" Mar 17 18:42:17.590771 env[1208]: time="2025-03-17T18:42:17.590720608Z" level=info msg="shim disconnected" id=c8cad46968e97f09583c952c326c07f91f5a7fdf4a0a89327b3c10ed6632ea43 Mar 17 18:42:17.590915 env[1208]: time="2025-03-17T18:42:17.590772897Z" level=warning msg="cleaning up after shim disconnected" id=c8cad46968e97f09583c952c326c07f91f5a7fdf4a0a89327b3c10ed6632ea43 namespace=k8s.io Mar 17 18:42:17.590915 env[1208]: time="2025-03-17T18:42:17.590785841Z" level=info msg="cleaning up dead shim" Mar 17 18:42:17.597726 env[1208]: time="2025-03-17T18:42:17.597684817Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:42:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2553 runtime=io.containerd.runc.v2\n" Mar 17 18:42:18.039436 systemd[1]: Started sshd@5-10.0.0.88:22-10.0.0.1:35670.service. 
Mar 17 18:42:18.075087 sshd[2566]: Accepted publickey for core from 10.0.0.1 port 35670 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64 Mar 17 18:42:18.076738 sshd[2566]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:42:18.081332 systemd-logind[1193]: New session 6 of user core. Mar 17 18:42:18.082354 systemd[1]: Started session-6.scope. Mar 17 18:42:18.170381 kubelet[1899]: E0317 18:42:18.170342 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:42:18.172384 env[1208]: time="2025-03-17T18:42:18.172348345Z" level=info msg="CreateContainer within sandbox \"2942f339b97de465ec206ecbaca34fa0be7c4f7ca7774c286ff7fccf1e8e5273\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 17 18:42:18.189476 env[1208]: time="2025-03-17T18:42:18.189434946Z" level=info msg="CreateContainer within sandbox \"2942f339b97de465ec206ecbaca34fa0be7c4f7ca7774c286ff7fccf1e8e5273\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"37bb9e67b8ad9d9a554a2a240b5111b2c63c420475859cc041006610e82f42d2\"" Mar 17 18:42:18.190181 env[1208]: time="2025-03-17T18:42:18.190130419Z" level=info msg="StartContainer for \"37bb9e67b8ad9d9a554a2a240b5111b2c63c420475859cc041006610e82f42d2\"" Mar 17 18:42:18.208959 systemd[1]: Started cri-containerd-37bb9e67b8ad9d9a554a2a240b5111b2c63c420475859cc041006610e82f42d2.scope. Mar 17 18:42:18.222921 sshd[2566]: pam_unix(sshd:session): session closed for user core Mar 17 18:42:18.226587 systemd-logind[1193]: Session 6 logged out. Waiting for processes to exit. Mar 17 18:42:18.226917 systemd[1]: sshd@5-10.0.0.88:22-10.0.0.1:35670.service: Deactivated successfully. Mar 17 18:42:18.227617 systemd[1]: session-6.scope: Deactivated successfully. Mar 17 18:42:18.228750 systemd-logind[1193]: Removed session 6. Mar 17 18:42:18.241534 env[1208]: time="2025-03-17T18:42:18.241492405Z" level=info msg="StartContainer for \"37bb9e67b8ad9d9a554a2a240b5111b2c63c420475859cc041006610e82f42d2\" returns successfully" Mar 17 18:42:18.400027 systemd[1]: run-containerd-runc-k8s.io-c8cad46968e97f09583c952c326c07f91f5a7fdf4a0a89327b3c10ed6632ea43-runc.s6x3ur.mount: Deactivated successfully. Mar 17 18:42:18.400217 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c8cad46968e97f09583c952c326c07f91f5a7fdf4a0a89327b3c10ed6632ea43-rootfs.mount: Deactivated successfully. Mar 17 18:42:18.404332 kubelet[1899]: I0317 18:42:18.404307 1899 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Mar 17 18:42:18.436704 systemd[1]: Created slice kubepods-burstable-podcef4fd3e_2e37_4ed3_a7f2_270b3fbd675d.slice. Mar 17 18:42:18.442566 systemd[1]: Created slice kubepods-burstable-podad7e6ede_f750_4110_8a8f_9a6a41dcffbd.slice. 
Mar 17 18:42:18.563255 kubelet[1899]: I0317 18:42:18.563215 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ad7e6ede-f750-4110-8a8f-9a6a41dcffbd-config-volume\") pod \"coredns-668d6bf9bc-rwnhh\" (UID: \"ad7e6ede-f750-4110-8a8f-9a6a41dcffbd\") " pod="kube-system/coredns-668d6bf9bc-rwnhh" Mar 17 18:42:18.563255 kubelet[1899]: I0317 18:42:18.563260 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q2dm\" (UniqueName: \"kubernetes.io/projected/cef4fd3e-2e37-4ed3-a7f2-270b3fbd675d-kube-api-access-5q2dm\") pod \"coredns-668d6bf9bc-kggkc\" (UID: \"cef4fd3e-2e37-4ed3-a7f2-270b3fbd675d\") " pod="kube-system/coredns-668d6bf9bc-kggkc" Mar 17 18:42:18.563440 kubelet[1899]: I0317 18:42:18.563287 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cef4fd3e-2e37-4ed3-a7f2-270b3fbd675d-config-volume\") pod \"coredns-668d6bf9bc-kggkc\" (UID: \"cef4fd3e-2e37-4ed3-a7f2-270b3fbd675d\") " pod="kube-system/coredns-668d6bf9bc-kggkc" Mar 17 18:42:18.563440 kubelet[1899]: I0317 18:42:18.563307 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt8xf\" (UniqueName: \"kubernetes.io/projected/ad7e6ede-f750-4110-8a8f-9a6a41dcffbd-kube-api-access-pt8xf\") pod \"coredns-668d6bf9bc-rwnhh\" (UID: \"ad7e6ede-f750-4110-8a8f-9a6a41dcffbd\") " pod="kube-system/coredns-668d6bf9bc-rwnhh" Mar 17 18:42:18.742074 kubelet[1899]: E0317 18:42:18.741909 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:42:18.742782 env[1208]: time="2025-03-17T18:42:18.742537961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kggkc,Uid:cef4fd3e-2e37-4ed3-a7f2-270b3fbd675d,Namespace:kube-system,Attempt:0,}" Mar 17 18:42:18.746175 kubelet[1899]: E0317 18:42:18.746128 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:42:18.746397 env[1208]: time="2025-03-17T18:42:18.746371581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rwnhh,Uid:ad7e6ede-f750-4110-8a8f-9a6a41dcffbd,Namespace:kube-system,Attempt:0,}" Mar 17 18:42:19.177050 kubelet[1899]: E0317 18:42:19.176768 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:42:19.192247 kubelet[1899]: I0317 18:42:19.192157 1899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wr22t" podStartSLOduration=5.970353005 podStartE2EDuration="20.192117535s" podCreationTimestamp="2025-03-17 18:41:59 +0000 UTC" firstStartedPulling="2025-03-17 18:42:00.095697266 +0000 UTC m=+8.072588608" lastFinishedPulling="2025-03-17 18:42:14.317461796 +0000 UTC m=+22.294353138" observedRunningTime="2025-03-17 18:42:19.191196978 +0000 UTC m=+27.168088340" watchObservedRunningTime="2025-03-17 18:42:19.192117535 +0000 UTC m=+27.169008897" Mar 17 18:42:20.177115 kubelet[1899]: E0317 18:42:20.177072 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:42:20.433070 systemd-networkd[1031]: cilium_host: Link UP Mar 17 18:42:20.433224 systemd-networkd[1031]: cilium_net: Link UP Mar 17 18:42:20.434015 systemd-networkd[1031]: cilium_net: Gained carrier Mar 17 18:42:20.435057 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Mar 17 18:42:20.435125 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Mar 17 18:42:20.435254 systemd-networkd[1031]: cilium_host: Gained carrier Mar 17 18:42:20.512736 systemd-networkd[1031]: cilium_vxlan: Link UP Mar 17 18:42:20.512745 systemd-networkd[1031]: cilium_vxlan: Gained carrier Mar 17 18:42:20.564256 systemd-networkd[1031]: cilium_net: Gained IPv6LL Mar 17 18:42:20.716185 kernel: NET: Registered PF_ALG protocol family Mar 17 18:42:20.820264 systemd-networkd[1031]: cilium_host: Gained IPv6LL Mar 17 18:42:21.179928 kubelet[1899]: E0317 18:42:21.179814 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:42:21.252972 systemd-networkd[1031]: lxc_health: Link UP Mar 17 18:42:21.274171 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Mar 17 18:42:21.274175 systemd-networkd[1031]: lxc_health: Gained carrier Mar 17 18:42:21.684424 systemd-networkd[1031]: cilium_vxlan: Gained IPv6LL Mar 17 18:42:21.824490 systemd-networkd[1031]: lxc62953141da4c: Link UP Mar 17 18:42:21.832174 kernel: eth0: renamed from tmp99622 Mar 17 18:42:21.840175 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Mar 17 18:42:21.840276 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc62953141da4c: link becomes ready Mar 17 18:42:21.841651 systemd-networkd[1031]: lxc62953141da4c: Gained carrier Mar 17 18:42:21.845903 systemd-networkd[1031]: lxcb2721f798cd2: Link UP Mar 17 18:42:21.853174 kernel: eth0: renamed from tmpec792 Mar 17 18:42:21.860719 systemd-networkd[1031]: lxcb2721f798cd2: Gained carrier Mar 17 18:42:21.861206 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcb2721f798cd2: link becomes ready Mar 17 18:42:22.180161 kubelet[1899]: E0317 18:42:22.180106 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:42:23.092323 systemd-networkd[1031]: lxc_health: Gained IPv6LL Mar 17 18:42:23.182394 kubelet[1899]: E0317 18:42:23.182347 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:42:23.228374 systemd[1]: Started sshd@6-10.0.0.88:22-10.0.0.1:35684.service. Mar 17 18:42:23.262722 sshd[3121]: Accepted publickey for core from 10.0.0.1 port 35684 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64 Mar 17 18:42:23.264185 sshd[3121]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:42:23.268960 systemd-logind[1193]: New session 7 of user core. Mar 17 18:42:23.269048 systemd[1]: Started session-7.scope. Mar 17 18:42:23.421448 sshd[3121]: pam_unix(sshd:session): session closed for user core Mar 17 18:42:23.424368 systemd[1]: sshd@6-10.0.0.88:22-10.0.0.1:35684.service: Deactivated successfully. Mar 17 18:42:23.425218 systemd[1]: session-7.scope: Deactivated successfully. Mar 17 18:42:23.425925 systemd-logind[1193]: Session 7 logged out. 
Waiting for processes to exit. Mar 17 18:42:23.426707 systemd-logind[1193]: Removed session 7. Mar 17 18:42:23.540312 systemd-networkd[1031]: lxc62953141da4c: Gained IPv6LL Mar 17 18:42:23.860316 systemd-networkd[1031]: lxcb2721f798cd2: Gained IPv6LL Mar 17 18:42:24.184826 kubelet[1899]: E0317 18:42:24.184668 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:42:25.163699 env[1208]: time="2025-03-17T18:42:25.163614475Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:42:25.163699 env[1208]: time="2025-03-17T18:42:25.163662154Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:42:25.163699 env[1208]: time="2025-03-17T18:42:25.163672174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:42:25.164209 env[1208]: time="2025-03-17T18:42:25.163915072Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/99622f7747a7d1d300ea2a7594ad5172eac3f7cb7f62c4b3c16f9a33464deafe pid=3152 runtime=io.containerd.runc.v2 Mar 17 18:42:25.179191 env[1208]: time="2025-03-17T18:42:25.178805352Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:42:25.179191 env[1208]: time="2025-03-17T18:42:25.178836350Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:42:25.179191 env[1208]: time="2025-03-17T18:42:25.178845998Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:42:25.179191 env[1208]: time="2025-03-17T18:42:25.179009607Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec7922bcaf4a5ca6079b56155b3c1234887fc148044e1db5a3548cb0cf41be1b pid=3178 runtime=io.containerd.runc.v2 Mar 17 18:42:25.184058 systemd[1]: run-containerd-runc-k8s.io-99622f7747a7d1d300ea2a7594ad5172eac3f7cb7f62c4b3c16f9a33464deafe-runc.uCOZxH.mount: Deactivated successfully. Mar 17 18:42:25.196656 systemd[1]: Started cri-containerd-99622f7747a7d1d300ea2a7594ad5172eac3f7cb7f62c4b3c16f9a33464deafe.scope. Mar 17 18:42:25.198252 systemd[1]: Started cri-containerd-ec7922bcaf4a5ca6079b56155b3c1234887fc148044e1db5a3548cb0cf41be1b.scope. 
Mar 17 18:42:25.209186 systemd-resolved[1147]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 18:42:25.211176 systemd-resolved[1147]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 18:42:25.232206 env[1208]: time="2025-03-17T18:42:25.232160013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kggkc,Uid:cef4fd3e-2e37-4ed3-a7f2-270b3fbd675d,Namespace:kube-system,Attempt:0,} returns sandbox id \"99622f7747a7d1d300ea2a7594ad5172eac3f7cb7f62c4b3c16f9a33464deafe\"" Mar 17 18:42:25.233503 kubelet[1899]: E0317 18:42:25.233044 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:42:25.236380 env[1208]: time="2025-03-17T18:42:25.236334717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rwnhh,Uid:ad7e6ede-f750-4110-8a8f-9a6a41dcffbd,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec7922bcaf4a5ca6079b56155b3c1234887fc148044e1db5a3548cb0cf41be1b\"" Mar 17 18:42:25.236903 env[1208]: time="2025-03-17T18:42:25.236849727Z" level=info msg="CreateContainer within sandbox \"99622f7747a7d1d300ea2a7594ad5172eac3f7cb7f62c4b3c16f9a33464deafe\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 18:42:25.236972 kubelet[1899]: E0317 18:42:25.236937 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:42:25.238566 env[1208]: time="2025-03-17T18:42:25.238533009Z" level=info msg="CreateContainer within sandbox \"ec7922bcaf4a5ca6079b56155b3c1234887fc148044e1db5a3548cb0cf41be1b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 18:42:25.464588 env[1208]: time="2025-03-17T18:42:25.464459644Z" level=info msg="CreateContainer within sandbox \"99622f7747a7d1d300ea2a7594ad5172eac3f7cb7f62c4b3c16f9a33464deafe\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"73f5ec30831f14209de7975e875d01897dfcb7176333fec035cc55a38649d788\"" Mar 17 18:42:25.465269 env[1208]: time="2025-03-17T18:42:25.465241167Z" level=info msg="StartContainer for \"73f5ec30831f14209de7975e875d01897dfcb7176333fec035cc55a38649d788\"" Mar 17 18:42:25.470401 env[1208]: time="2025-03-17T18:42:25.470363396Z" level=info msg="CreateContainer within sandbox \"ec7922bcaf4a5ca6079b56155b3c1234887fc148044e1db5a3548cb0cf41be1b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8399527863e3e23a8156d8e3f5f01fea7b8329e3dd4f3ab45fe0130314935371\"" Mar 17 18:42:25.470877 env[1208]: time="2025-03-17T18:42:25.470831037Z" level=info msg="StartContainer for \"8399527863e3e23a8156d8e3f5f01fea7b8329e3dd4f3ab45fe0130314935371\"" Mar 17 18:42:25.487135 systemd[1]: Started cri-containerd-73f5ec30831f14209de7975e875d01897dfcb7176333fec035cc55a38649d788.scope. Mar 17 18:42:25.490216 systemd[1]: Started cri-containerd-8399527863e3e23a8156d8e3f5f01fea7b8329e3dd4f3ab45fe0130314935371.scope. 
Mar 17 18:42:25.519002 env[1208]: time="2025-03-17T18:42:25.518950095Z" level=info msg="StartContainer for \"73f5ec30831f14209de7975e875d01897dfcb7176333fec035cc55a38649d788\" returns successfully" Mar 17 18:42:25.521256 env[1208]: time="2025-03-17T18:42:25.521215182Z" level=info msg="StartContainer for \"8399527863e3e23a8156d8e3f5f01fea7b8329e3dd4f3ab45fe0130314935371\" returns successfully" Mar 17 18:42:26.169214 systemd[1]: run-containerd-runc-k8s.io-ec7922bcaf4a5ca6079b56155b3c1234887fc148044e1db5a3548cb0cf41be1b-runc.0d3ZQr.mount: Deactivated successfully. Mar 17 18:42:26.188445 kubelet[1899]: E0317 18:42:26.188420 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:42:26.189650 kubelet[1899]: E0317 18:42:26.189632 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:42:26.209431 kubelet[1899]: I0317 18:42:26.209365 1899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-rwnhh" podStartSLOduration=27.209344116 podStartE2EDuration="27.209344116s" podCreationTimestamp="2025-03-17 18:41:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:42:26.197639723 +0000 UTC m=+34.174531095" watchObservedRunningTime="2025-03-17 18:42:26.209344116 +0000 UTC m=+34.186235458" Mar 17 18:42:26.218664 kubelet[1899]: I0317 18:42:26.218603 1899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-kggkc" podStartSLOduration=27.218579639 podStartE2EDuration="27.218579639s" podCreationTimestamp="2025-03-17 18:41:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:42:26.210097825 +0000 UTC m=+34.186989167" watchObservedRunningTime="2025-03-17 18:42:26.218579639 +0000 UTC m=+34.195471012" Mar 17 18:42:27.191412 kubelet[1899]: E0317 18:42:27.191379 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:42:27.191831 kubelet[1899]: E0317 18:42:27.191496 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:42:28.193166 kubelet[1899]: E0317 18:42:28.193104 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:42:28.193628 kubelet[1899]: E0317 18:42:28.193345 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:42:28.426319 systemd[1]: Started sshd@7-10.0.0.88:22-10.0.0.1:48852.service. Mar 17 18:42:28.461367 sshd[3311]: Accepted publickey for core from 10.0.0.1 port 48852 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64 Mar 17 18:42:28.462813 sshd[3311]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:42:28.466642 systemd-logind[1193]: New session 8 of user core. 
Mar 17 18:42:28.467694 systemd[1]: Started session-8.scope. Mar 17 18:42:28.649877 sshd[3311]: pam_unix(sshd:session): session closed for user core Mar 17 18:42:28.652096 systemd[1]: sshd@7-10.0.0.88:22-10.0.0.1:48852.service: Deactivated successfully. Mar 17 18:42:28.652846 systemd[1]: session-8.scope: Deactivated successfully. Mar 17 18:42:28.653603 systemd-logind[1193]: Session 8 logged out. Waiting for processes to exit. Mar 17 18:42:28.654309 systemd-logind[1193]: Removed session 8. Mar 17 18:42:33.654720 systemd[1]: Started sshd@8-10.0.0.88:22-10.0.0.1:48862.service. Mar 17 18:42:33.683688 sshd[3327]: Accepted publickey for core from 10.0.0.1 port 48862 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64 Mar 17 18:42:33.684838 sshd[3327]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:42:33.688223 systemd-logind[1193]: New session 9 of user core. Mar 17 18:42:33.689060 systemd[1]: Started session-9.scope. Mar 17 18:42:33.799671 sshd[3327]: pam_unix(sshd:session): session closed for user core Mar 17 18:42:33.802247 systemd[1]: sshd@8-10.0.0.88:22-10.0.0.1:48862.service: Deactivated successfully. Mar 17 18:42:33.802988 systemd[1]: session-9.scope: Deactivated successfully. Mar 17 18:42:33.803649 systemd-logind[1193]: Session 9 logged out. Waiting for processes to exit. Mar 17 18:42:33.804345 systemd-logind[1193]: Removed session 9. Mar 17 18:42:38.803015 systemd[1]: Started sshd@9-10.0.0.88:22-10.0.0.1:33704.service. Mar 17 18:42:38.833232 sshd[3341]: Accepted publickey for core from 10.0.0.1 port 33704 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64 Mar 17 18:42:38.834293 sshd[3341]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:42:38.837836 systemd-logind[1193]: New session 10 of user core. Mar 17 18:42:38.838962 systemd[1]: Started session-10.scope. Mar 17 18:42:38.946700 sshd[3341]: pam_unix(sshd:session): session closed for user core Mar 17 18:42:38.949706 systemd[1]: sshd@9-10.0.0.88:22-10.0.0.1:33704.service: Deactivated successfully. Mar 17 18:42:38.950305 systemd[1]: session-10.scope: Deactivated successfully. Mar 17 18:42:38.950873 systemd-logind[1193]: Session 10 logged out. Waiting for processes to exit. Mar 17 18:42:38.951977 systemd[1]: Started sshd@10-10.0.0.88:22-10.0.0.1:33708.service. Mar 17 18:42:38.952875 systemd-logind[1193]: Removed session 10. Mar 17 18:42:38.979638 sshd[3356]: Accepted publickey for core from 10.0.0.1 port 33708 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64 Mar 17 18:42:38.980529 sshd[3356]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:42:38.983608 systemd-logind[1193]: New session 11 of user core. Mar 17 18:42:38.984550 systemd[1]: Started session-11.scope. Mar 17 18:42:39.123963 sshd[3356]: pam_unix(sshd:session): session closed for user core Mar 17 18:42:39.127708 systemd[1]: sshd@10-10.0.0.88:22-10.0.0.1:33708.service: Deactivated successfully. Mar 17 18:42:39.128260 systemd[1]: session-11.scope: Deactivated successfully. Mar 17 18:42:39.129209 systemd-logind[1193]: Session 11 logged out. Waiting for processes to exit. Mar 17 18:42:39.130089 systemd[1]: Started sshd@11-10.0.0.88:22-10.0.0.1:33710.service. Mar 17 18:42:39.131214 systemd-logind[1193]: Removed session 11. 
Mar 17 18:42:39.165939 sshd[3367]: Accepted publickey for core from 10.0.0.1 port 33710 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64 Mar 17 18:42:39.167281 sshd[3367]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:42:39.170606 systemd-logind[1193]: New session 12 of user core. Mar 17 18:42:39.171413 systemd[1]: Started session-12.scope. Mar 17 18:42:39.272591 sshd[3367]: pam_unix(sshd:session): session closed for user core Mar 17 18:42:39.274969 systemd[1]: sshd@11-10.0.0.88:22-10.0.0.1:33710.service: Deactivated successfully. Mar 17 18:42:39.275747 systemd[1]: session-12.scope: Deactivated successfully. Mar 17 18:42:39.276485 systemd-logind[1193]: Session 12 logged out. Waiting for processes to exit. Mar 17 18:42:39.277303 systemd-logind[1193]: Removed session 12. Mar 17 18:42:44.277930 systemd[1]: Started sshd@12-10.0.0.88:22-10.0.0.1:40574.service. Mar 17 18:42:44.309160 sshd[3381]: Accepted publickey for core from 10.0.0.1 port 40574 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64 Mar 17 18:42:44.310387 sshd[3381]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:42:44.313860 systemd-logind[1193]: New session 13 of user core. Mar 17 18:42:44.314948 systemd[1]: Started session-13.scope. Mar 17 18:42:44.419621 sshd[3381]: pam_unix(sshd:session): session closed for user core Mar 17 18:42:44.422010 systemd[1]: sshd@12-10.0.0.88:22-10.0.0.1:40574.service: Deactivated successfully. Mar 17 18:42:44.422759 systemd[1]: session-13.scope: Deactivated successfully. Mar 17 18:42:44.423535 systemd-logind[1193]: Session 13 logged out. Waiting for processes to exit. Mar 17 18:42:44.424206 systemd-logind[1193]: Removed session 13. Mar 17 18:42:49.423561 systemd[1]: Started sshd@13-10.0.0.88:22-10.0.0.1:40584.service. Mar 17 18:42:49.453623 sshd[3396]: Accepted publickey for core from 10.0.0.1 port 40584 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64 Mar 17 18:42:49.454602 sshd[3396]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:42:49.458079 systemd-logind[1193]: New session 14 of user core. Mar 17 18:42:49.459021 systemd[1]: Started session-14.scope. Mar 17 18:42:49.560561 sshd[3396]: pam_unix(sshd:session): session closed for user core Mar 17 18:42:49.563080 systemd[1]: sshd@13-10.0.0.88:22-10.0.0.1:40584.service: Deactivated successfully. Mar 17 18:42:49.563692 systemd[1]: session-14.scope: Deactivated successfully. Mar 17 18:42:49.564295 systemd-logind[1193]: Session 14 logged out. Waiting for processes to exit. Mar 17 18:42:49.565474 systemd[1]: Started sshd@14-10.0.0.88:22-10.0.0.1:40594.service. Mar 17 18:42:49.568489 systemd-logind[1193]: Removed session 14. Mar 17 18:42:49.594001 sshd[3409]: Accepted publickey for core from 10.0.0.1 port 40594 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64 Mar 17 18:42:49.595009 sshd[3409]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:42:49.597927 systemd-logind[1193]: New session 15 of user core. Mar 17 18:42:49.598894 systemd[1]: Started session-15.scope. Mar 17 18:42:49.886454 sshd[3409]: pam_unix(sshd:session): session closed for user core Mar 17 18:42:49.888544 systemd[1]: sshd@14-10.0.0.88:22-10.0.0.1:40594.service: Deactivated successfully. Mar 17 18:42:49.889064 systemd[1]: session-15.scope: Deactivated successfully. Mar 17 18:42:49.889567 systemd-logind[1193]: Session 15 logged out. Waiting for processes to exit. 
Mar 17 18:42:49.890769 systemd[1]: Started sshd@15-10.0.0.88:22-10.0.0.1:40610.service. Mar 17 18:42:49.891527 systemd-logind[1193]: Removed session 15. Mar 17 18:42:49.921430 sshd[3421]: Accepted publickey for core from 10.0.0.1 port 40610 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64 Mar 17 18:42:49.922343 sshd[3421]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:42:49.925258 systemd-logind[1193]: New session 16 of user core. Mar 17 18:42:49.926131 systemd[1]: Started session-16.scope. Mar 17 18:42:50.794785 sshd[3421]: pam_unix(sshd:session): session closed for user core Mar 17 18:42:50.797612 systemd[1]: sshd@15-10.0.0.88:22-10.0.0.1:40610.service: Deactivated successfully. Mar 17 18:42:50.798126 systemd[1]: session-16.scope: Deactivated successfully. Mar 17 18:42:50.800213 systemd[1]: Started sshd@16-10.0.0.88:22-10.0.0.1:40616.service. Mar 17 18:42:50.801663 systemd-logind[1193]: Session 16 logged out. Waiting for processes to exit. Mar 17 18:42:50.803153 systemd-logind[1193]: Removed session 16. Mar 17 18:42:50.831939 sshd[3438]: Accepted publickey for core from 10.0.0.1 port 40616 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64 Mar 17 18:42:50.833006 sshd[3438]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:42:50.836910 systemd-logind[1193]: New session 17 of user core. Mar 17 18:42:50.837653 systemd[1]: Started session-17.scope. Mar 17 18:42:51.071505 sshd[3438]: pam_unix(sshd:session): session closed for user core Mar 17 18:42:51.075379 systemd[1]: Started sshd@17-10.0.0.88:22-10.0.0.1:40628.service. Mar 17 18:42:51.075846 systemd[1]: sshd@16-10.0.0.88:22-10.0.0.1:40616.service: Deactivated successfully. Mar 17 18:42:51.076549 systemd[1]: session-17.scope: Deactivated successfully. Mar 17 18:42:51.077303 systemd-logind[1193]: Session 17 logged out. Waiting for processes to exit. Mar 17 18:42:51.078344 systemd-logind[1193]: Removed session 17. Mar 17 18:42:51.107039 sshd[3450]: Accepted publickey for core from 10.0.0.1 port 40628 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64 Mar 17 18:42:51.108733 sshd[3450]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:42:51.115794 systemd-logind[1193]: New session 18 of user core. Mar 17 18:42:51.117005 systemd[1]: Started session-18.scope. Mar 17 18:42:51.222839 sshd[3450]: pam_unix(sshd:session): session closed for user core Mar 17 18:42:51.225763 systemd[1]: sshd@17-10.0.0.88:22-10.0.0.1:40628.service: Deactivated successfully. Mar 17 18:42:51.226723 systemd[1]: session-18.scope: Deactivated successfully. Mar 17 18:42:51.227377 systemd-logind[1193]: Session 18 logged out. Waiting for processes to exit. Mar 17 18:42:51.228224 systemd-logind[1193]: Removed session 18. Mar 17 18:42:56.228001 systemd[1]: Started sshd@18-10.0.0.88:22-10.0.0.1:43284.service. Mar 17 18:42:56.256322 sshd[3469]: Accepted publickey for core from 10.0.0.1 port 43284 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64 Mar 17 18:42:56.257396 sshd[3469]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:42:56.260616 systemd-logind[1193]: New session 19 of user core. Mar 17 18:42:56.261460 systemd[1]: Started session-19.scope. Mar 17 18:42:56.360459 sshd[3469]: pam_unix(sshd:session): session closed for user core Mar 17 18:42:56.363067 systemd[1]: sshd@18-10.0.0.88:22-10.0.0.1:43284.service: Deactivated successfully. 
Mar 17 18:42:56.363939 systemd[1]: session-19.scope: Deactivated successfully. Mar 17 18:42:56.364565 systemd-logind[1193]: Session 19 logged out. Waiting for processes to exit. Mar 17 18:42:56.365268 systemd-logind[1193]: Removed session 19. Mar 17 18:43:01.366021 systemd[1]: Started sshd@19-10.0.0.88:22-10.0.0.1:43296.service. Mar 17 18:43:01.395107 sshd[3486]: Accepted publickey for core from 10.0.0.1 port 43296 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64 Mar 17 18:43:01.396326 sshd[3486]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:43:01.399664 systemd-logind[1193]: New session 20 of user core. Mar 17 18:43:01.400502 systemd[1]: Started session-20.scope. Mar 17 18:43:01.505606 sshd[3486]: pam_unix(sshd:session): session closed for user core Mar 17 18:43:01.508551 systemd[1]: sshd@19-10.0.0.88:22-10.0.0.1:43296.service: Deactivated successfully. Mar 17 18:43:01.509432 systemd[1]: session-20.scope: Deactivated successfully. Mar 17 18:43:01.510280 systemd-logind[1193]: Session 20 logged out. Waiting for processes to exit. Mar 17 18:43:01.511092 systemd-logind[1193]: Removed session 20. Mar 17 18:43:06.510673 systemd[1]: Started sshd@20-10.0.0.88:22-10.0.0.1:38086.service. Mar 17 18:43:06.539664 sshd[3499]: Accepted publickey for core from 10.0.0.1 port 38086 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64 Mar 17 18:43:06.540812 sshd[3499]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:43:06.544510 systemd-logind[1193]: New session 21 of user core. Mar 17 18:43:06.545563 systemd[1]: Started session-21.scope. Mar 17 18:43:06.655931 sshd[3499]: pam_unix(sshd:session): session closed for user core Mar 17 18:43:06.658801 systemd[1]: sshd@20-10.0.0.88:22-10.0.0.1:38086.service: Deactivated successfully. Mar 17 18:43:06.659546 systemd[1]: session-21.scope: Deactivated successfully. Mar 17 18:43:06.660090 systemd-logind[1193]: Session 21 logged out. Waiting for processes to exit. Mar 17 18:43:06.660738 systemd-logind[1193]: Removed session 21. Mar 17 18:43:07.112339 kubelet[1899]: E0317 18:43:07.112279 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:43:11.661024 systemd[1]: Started sshd@21-10.0.0.88:22-10.0.0.1:38092.service. Mar 17 18:43:11.695131 sshd[3513]: Accepted publickey for core from 10.0.0.1 port 38092 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64 Mar 17 18:43:11.696192 sshd[3513]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:43:11.699497 systemd-logind[1193]: New session 22 of user core. Mar 17 18:43:11.700512 systemd[1]: Started session-22.scope. Mar 17 18:43:11.808001 sshd[3513]: pam_unix(sshd:session): session closed for user core Mar 17 18:43:11.811125 systemd[1]: sshd@21-10.0.0.88:22-10.0.0.1:38092.service: Deactivated successfully. Mar 17 18:43:11.811776 systemd[1]: session-22.scope: Deactivated successfully. Mar 17 18:43:11.812404 systemd-logind[1193]: Session 22 logged out. Waiting for processes to exit. Mar 17 18:43:11.813642 systemd[1]: Started sshd@22-10.0.0.88:22-10.0.0.1:38096.service. Mar 17 18:43:11.814937 systemd-logind[1193]: Removed session 22. 
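The recurring kubelet "Nameserver limits exceeded" entry is emitted because glibc-derived resolvers honor at most three nameserver lines, so kubelet truncates the host's list when assembling a pod's resolv.conf and logs the line it actually applied (here 1.1.1.1 1.0.0.1 8.8.8.8). A stdlib-only sketch of the same check, not kubelet's actual code:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // maxNameservers mirrors the classic resolv.conf limit that kubelet
    // enforces when building a pod's DNS configuration.
    const maxNameservers = 3

    func main() {
        data, err := os.ReadFile("/etc/resolv.conf")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        var servers []string
        for _, line := range strings.Split(string(data), "\n") {
            fields := strings.Fields(line)
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            // kubelet logs the trimmed list as "the applied nameserver line"
            fmt.Printf("nameserver limits exceeded, applying: %s\n",
                strings.Join(servers[:maxNameservers], " "))
        }
    }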
Mar 17 18:43:11.841476 sshd[3527]: Accepted publickey for core from 10.0.0.1 port 38096 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64 Mar 17 18:43:11.842536 sshd[3527]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:43:11.845805 systemd-logind[1193]: New session 23 of user core. Mar 17 18:43:11.846729 systemd[1]: Started session-23.scope. Mar 17 18:43:13.112135 kubelet[1899]: E0317 18:43:13.112082 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:43:13.284640 env[1208]: time="2025-03-17T18:43:13.284589449Z" level=info msg="StopContainer for \"a35b8b8a4aab1c8a21858357990daef521ea97fa8c4ea2078497cad3405f40b5\" with timeout 30 (s)" Mar 17 18:43:13.285460 env[1208]: time="2025-03-17T18:43:13.285258177Z" level=info msg="Stop container \"a35b8b8a4aab1c8a21858357990daef521ea97fa8c4ea2078497cad3405f40b5\" with signal terminated" Mar 17 18:43:13.291508 env[1208]: time="2025-03-17T18:43:13.291438152Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 18:43:13.296253 env[1208]: time="2025-03-17T18:43:13.296213334Z" level=info msg="StopContainer for \"37bb9e67b8ad9d9a554a2a240b5111b2c63c420475859cc041006610e82f42d2\" with timeout 2 (s)" Mar 17 18:43:13.296754 env[1208]: time="2025-03-17T18:43:13.296458676Z" level=info msg="Stop container \"37bb9e67b8ad9d9a554a2a240b5111b2c63c420475859cc041006610e82f42d2\" with signal terminated" Mar 17 18:43:13.297073 systemd[1]: cri-containerd-a35b8b8a4aab1c8a21858357990daef521ea97fa8c4ea2078497cad3405f40b5.scope: Deactivated successfully. Mar 17 18:43:13.304117 systemd-networkd[1031]: lxc_health: Link DOWN Mar 17 18:43:13.304125 systemd-networkd[1031]: lxc_health: Lost carrier Mar 17 18:43:13.314175 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a35b8b8a4aab1c8a21858357990daef521ea97fa8c4ea2078497cad3405f40b5-rootfs.mount: Deactivated successfully. 
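The two StopContainer requests above illustrate the CRI graceful-stop contract: the runtime delivers SIGTERM ("with signal terminated"), waits out the per-container grace period (30 s for the operator container, 2 s for the cilium agent), and only escalates to SIGKILL if the task is still running. A self-contained sketch of that pattern, demonstrated on a child process rather than a real container task:

    package main

    import (
        "fmt"
        "os/exec"
        "syscall"
        "time"
    )

    // stopWithTimeout sketches the StopContainer contract: deliver
    // SIGTERM, wait up to the grace period for the process to exit,
    // then escalate to SIGKILL. A runtime does the same against the
    // container's init pid via its shim.
    func stopWithTimeout(cmd *exec.Cmd, timeout time.Duration) {
        done := make(chan error, 1)
        go func() { done <- cmd.Wait() }()

        cmd.Process.Signal(syscall.SIGTERM)
        select {
        case <-done:
            fmt.Println("exited after SIGTERM")
        case <-time.After(timeout):
            cmd.Process.Kill() // SIGKILL once the grace period lapses
            <-done
            fmt.Println("killed after timeout")
        }
    }

    func main() {
        cmd := exec.Command("sleep", "60")
        if err := cmd.Start(); err != nil {
            panic(err)
        }
        stopWithTimeout(cmd, 2*time.Second)
    }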
Mar 17 18:43:13.335715 env[1208]: time="2025-03-17T18:43:13.335655546Z" level=info msg="shim disconnected" id=a35b8b8a4aab1c8a21858357990daef521ea97fa8c4ea2078497cad3405f40b5 Mar 17 18:43:13.335938 env[1208]: time="2025-03-17T18:43:13.335715742Z" level=warning msg="cleaning up after shim disconnected" id=a35b8b8a4aab1c8a21858357990daef521ea97fa8c4ea2078497cad3405f40b5 namespace=k8s.io Mar 17 18:43:13.335938 env[1208]: time="2025-03-17T18:43:13.335732423Z" level=info msg="cleaning up dead shim" Mar 17 18:43:13.343001 env[1208]: time="2025-03-17T18:43:13.342837110Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:43:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3583 runtime=io.containerd.runc.v2\n" Mar 17 18:43:13.346795 env[1208]: time="2025-03-17T18:43:13.346754149Z" level=info msg="StopContainer for \"a35b8b8a4aab1c8a21858357990daef521ea97fa8c4ea2078497cad3405f40b5\" returns successfully" Mar 17 18:43:13.348063 env[1208]: time="2025-03-17T18:43:13.348034704Z" level=info msg="StopPodSandbox for \"9096e82f36b6e4fb868b9f2149d4763d248e25064144deb0757d7947ca07beaa\"" Mar 17 18:43:13.348197 env[1208]: time="2025-03-17T18:43:13.348100801Z" level=info msg="Container to stop \"a35b8b8a4aab1c8a21858357990daef521ea97fa8c4ea2078497cad3405f40b5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:43:13.350444 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9096e82f36b6e4fb868b9f2149d4763d248e25064144deb0757d7947ca07beaa-shm.mount: Deactivated successfully. Mar 17 18:43:13.351537 systemd[1]: cri-containerd-37bb9e67b8ad9d9a554a2a240b5111b2c63c420475859cc041006610e82f42d2.scope: Deactivated successfully. Mar 17 18:43:13.351836 systemd[1]: cri-containerd-37bb9e67b8ad9d9a554a2a240b5111b2c63c420475859cc041006610e82f42d2.scope: Consumed 5.914s CPU time. Mar 17 18:43:13.356769 systemd[1]: cri-containerd-9096e82f36b6e4fb868b9f2149d4763d248e25064144deb0757d7947ca07beaa.scope: Deactivated successfully. Mar 17 18:43:13.370068 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-37bb9e67b8ad9d9a554a2a240b5111b2c63c420475859cc041006610e82f42d2-rootfs.mount: Deactivated successfully. 
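"shim disconnected" followed by "cleaning up after shim disconnected" and "cleaning up dead shim" is containerd's reaction to the runc v2 shim's event stream closing once the task has exited. Roughly, a supervisor reads that stream until EOF and then runs its cleanup; a toy illustration of the shape of that loop (not containerd's implementation):

    package main

    import (
        "fmt"
        "io"
        "strings"
    )

    // watchShim reads a shim's event stream until EOF ("shim
    // disconnected"), then invokes the cleanup that the journal shows
    // as "cleaning up after shim disconnected".
    func watchShim(id string, events io.Reader, cleanup func()) {
        buf := make([]byte, 256)
        for {
            _, err := events.Read(buf)
            if err == io.EOF {
                fmt.Printf("shim disconnected id=%s\n", id)
                cleanup()
                return
            }
            if err != nil {
                fmt.Println("read error:", err)
                return
            }
        }
    }

    func main() {
        events := strings.NewReader("task exit\n") // stand-in for the ttrpc stream
        watchShim("a35b8b8a4aab", events, func() {
            fmt.Println("cleaning up dead shim")
        })
    }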
Mar 17 18:43:13.374438 env[1208]: time="2025-03-17T18:43:13.374390677Z" level=info msg="shim disconnected" id=37bb9e67b8ad9d9a554a2a240b5111b2c63c420475859cc041006610e82f42d2 Mar 17 18:43:13.374646 env[1208]: time="2025-03-17T18:43:13.374619958Z" level=warning msg="cleaning up after shim disconnected" id=37bb9e67b8ad9d9a554a2a240b5111b2c63c420475859cc041006610e82f42d2 namespace=k8s.io Mar 17 18:43:13.374765 env[1208]: time="2025-03-17T18:43:13.374745880Z" level=info msg="cleaning up dead shim" Mar 17 18:43:13.380338 env[1208]: time="2025-03-17T18:43:13.380281185Z" level=info msg="shim disconnected" id=9096e82f36b6e4fb868b9f2149d4763d248e25064144deb0757d7947ca07beaa Mar 17 18:43:13.381044 env[1208]: time="2025-03-17T18:43:13.381025408Z" level=warning msg="cleaning up after shim disconnected" id=9096e82f36b6e4fb868b9f2149d4763d248e25064144deb0757d7947ca07beaa namespace=k8s.io Mar 17 18:43:13.381126 env[1208]: time="2025-03-17T18:43:13.381107265Z" level=info msg="cleaning up dead shim" Mar 17 18:43:13.384246 env[1208]: time="2025-03-17T18:43:13.384195219Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:43:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3627 runtime=io.containerd.runc.v2\n" Mar 17 18:43:13.388065 env[1208]: time="2025-03-17T18:43:13.388038247Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:43:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3638 runtime=io.containerd.runc.v2\n" Mar 17 18:43:13.425345 env[1208]: time="2025-03-17T18:43:13.425284172Z" level=info msg="TearDown network for sandbox \"9096e82f36b6e4fb868b9f2149d4763d248e25064144deb0757d7947ca07beaa\" successfully" Mar 17 18:43:13.425345 env[1208]: time="2025-03-17T18:43:13.425319530Z" level=info msg="StopPodSandbox for \"9096e82f36b6e4fb868b9f2149d4763d248e25064144deb0757d7947ca07beaa\" returns successfully" Mar 17 18:43:13.459296 env[1208]: time="2025-03-17T18:43:13.459252055Z" level=info msg="StopContainer for \"37bb9e67b8ad9d9a554a2a240b5111b2c63c420475859cc041006610e82f42d2\" returns successfully" Mar 17 18:43:13.459556 env[1208]: time="2025-03-17T18:43:13.459526444Z" level=info msg="StopPodSandbox for \"2942f339b97de465ec206ecbaca34fa0be7c4f7ca7774c286ff7fccf1e8e5273\"" Mar 17 18:43:13.460001 env[1208]: time="2025-03-17T18:43:13.459576850Z" level=info msg="Container to stop \"395475e11573e35094888432f96b427495e802e1236af93653529def8cec1fd5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:43:13.460088 env[1208]: time="2025-03-17T18:43:13.460000045Z" level=info msg="Container to stop \"dfcb134671e131ad7f94806bdcd705178b45c196e2de1b43277c880b37e92363\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:43:13.460088 env[1208]: time="2025-03-17T18:43:13.460014844Z" level=info msg="Container to stop \"dd614b8543c33ddc1f1ca3794bace163ab3f47419c305ac9b27a7bd7f37904be\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:43:13.460088 env[1208]: time="2025-03-17T18:43:13.460024393Z" level=info msg="Container to stop \"c8cad46968e97f09583c952c326c07f91f5a7fdf4a0a89327b3c10ed6632ea43\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:43:13.460088 env[1208]: time="2025-03-17T18:43:13.460033099Z" level=info msg="Container to stop \"37bb9e67b8ad9d9a554a2a240b5111b2c63c420475859cc041006610e82f42d2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:43:13.463131 kubelet[1899]: I0317 18:43:13.463103 1899 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dp5wp\" (UniqueName: \"kubernetes.io/projected/06bff352-5055-4d90-aed6-7b0b9753ac82-kube-api-access-dp5wp\") pod \"06bff352-5055-4d90-aed6-7b0b9753ac82\" (UID: \"06bff352-5055-4d90-aed6-7b0b9753ac82\") " Mar 17 18:43:13.463235 kubelet[1899]: I0317 18:43:13.463148 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/06bff352-5055-4d90-aed6-7b0b9753ac82-cilium-config-path\") pod \"06bff352-5055-4d90-aed6-7b0b9753ac82\" (UID: \"06bff352-5055-4d90-aed6-7b0b9753ac82\") " Mar 17 18:43:13.465634 systemd[1]: cri-containerd-2942f339b97de465ec206ecbaca34fa0be7c4f7ca7774c286ff7fccf1e8e5273.scope: Deactivated successfully. Mar 17 18:43:13.466557 kubelet[1899]: I0317 18:43:13.465967 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06bff352-5055-4d90-aed6-7b0b9753ac82-kube-api-access-dp5wp" (OuterVolumeSpecName: "kube-api-access-dp5wp") pod "06bff352-5055-4d90-aed6-7b0b9753ac82" (UID: "06bff352-5055-4d90-aed6-7b0b9753ac82"). InnerVolumeSpecName "kube-api-access-dp5wp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 17 18:43:13.468675 kubelet[1899]: I0317 18:43:13.468637 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06bff352-5055-4d90-aed6-7b0b9753ac82-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "06bff352-5055-4d90-aed6-7b0b9753ac82" (UID: "06bff352-5055-4d90-aed6-7b0b9753ac82"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 17 18:43:13.532222 env[1208]: time="2025-03-17T18:43:13.532134526Z" level=info msg="shim disconnected" id=2942f339b97de465ec206ecbaca34fa0be7c4f7ca7774c286ff7fccf1e8e5273 Mar 17 18:43:13.532222 env[1208]: time="2025-03-17T18:43:13.532213839Z" level=warning msg="cleaning up after shim disconnected" id=2942f339b97de465ec206ecbaca34fa0be7c4f7ca7774c286ff7fccf1e8e5273 namespace=k8s.io Mar 17 18:43:13.532222 env[1208]: time="2025-03-17T18:43:13.532227354Z" level=info msg="cleaning up dead shim" Mar 17 18:43:13.539105 env[1208]: time="2025-03-17T18:43:13.539055538Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:43:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3671 runtime=io.containerd.runc.v2\n" Mar 17 18:43:13.540261 env[1208]: time="2025-03-17T18:43:13.540221512Z" level=info msg="TearDown network for sandbox \"2942f339b97de465ec206ecbaca34fa0be7c4f7ca7774c286ff7fccf1e8e5273\" successfully" Mar 17 18:43:13.540261 env[1208]: time="2025-03-17T18:43:13.540254326Z" level=info msg="StopPodSandbox for \"2942f339b97de465ec206ecbaca34fa0be7c4f7ca7774c286ff7fccf1e8e5273\" returns successfully" Mar 17 18:43:13.563862 kubelet[1899]: I0317 18:43:13.563811 1899 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/06bff352-5055-4d90-aed6-7b0b9753ac82-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 17 18:43:13.563862 kubelet[1899]: I0317 18:43:13.563838 1899 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dp5wp\" (UniqueName: \"kubernetes.io/projected/06bff352-5055-4d90-aed6-7b0b9753ac82-kube-api-access-dp5wp\") on node \"localhost\" DevicePath \"\"" Mar 17 18:43:13.664642 kubelet[1899]: I0317 18:43:13.664512 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for 
volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7f557670-6ff9-4cba-8ec5-9ea555a65a13-xtables-lock\") pod \"7f557670-6ff9-4cba-8ec5-9ea555a65a13\" (UID: \"7f557670-6ff9-4cba-8ec5-9ea555a65a13\") " Mar 17 18:43:13.664642 kubelet[1899]: I0317 18:43:13.664553 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7f557670-6ff9-4cba-8ec5-9ea555a65a13-host-proc-sys-kernel\") pod \"7f557670-6ff9-4cba-8ec5-9ea555a65a13\" (UID: \"7f557670-6ff9-4cba-8ec5-9ea555a65a13\") " Mar 17 18:43:13.664642 kubelet[1899]: I0317 18:43:13.664580 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7f557670-6ff9-4cba-8ec5-9ea555a65a13-hubble-tls\") pod \"7f557670-6ff9-4cba-8ec5-9ea555a65a13\" (UID: \"7f557670-6ff9-4cba-8ec5-9ea555a65a13\") " Mar 17 18:43:13.664642 kubelet[1899]: I0317 18:43:13.664597 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7f557670-6ff9-4cba-8ec5-9ea555a65a13-cilium-run\") pod \"7f557670-6ff9-4cba-8ec5-9ea555a65a13\" (UID: \"7f557670-6ff9-4cba-8ec5-9ea555a65a13\") " Mar 17 18:43:13.664642 kubelet[1899]: I0317 18:43:13.664616 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqzx7\" (UniqueName: \"kubernetes.io/projected/7f557670-6ff9-4cba-8ec5-9ea555a65a13-kube-api-access-nqzx7\") pod \"7f557670-6ff9-4cba-8ec5-9ea555a65a13\" (UID: \"7f557670-6ff9-4cba-8ec5-9ea555a65a13\") " Mar 17 18:43:13.664642 kubelet[1899]: I0317 18:43:13.664616 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f557670-6ff9-4cba-8ec5-9ea555a65a13-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7f557670-6ff9-4cba-8ec5-9ea555a65a13" (UID: "7f557670-6ff9-4cba-8ec5-9ea555a65a13"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 18:43:13.664962 kubelet[1899]: I0317 18:43:13.664633 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7f557670-6ff9-4cba-8ec5-9ea555a65a13-etc-cni-netd\") pod \"7f557670-6ff9-4cba-8ec5-9ea555a65a13\" (UID: \"7f557670-6ff9-4cba-8ec5-9ea555a65a13\") " Mar 17 18:43:13.664962 kubelet[1899]: I0317 18:43:13.664672 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f557670-6ff9-4cba-8ec5-9ea555a65a13-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7f557670-6ff9-4cba-8ec5-9ea555a65a13" (UID: "7f557670-6ff9-4cba-8ec5-9ea555a65a13"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 18:43:13.664962 kubelet[1899]: I0317 18:43:13.664690 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7f557670-6ff9-4cba-8ec5-9ea555a65a13-cni-path\") pod \"7f557670-6ff9-4cba-8ec5-9ea555a65a13\" (UID: \"7f557670-6ff9-4cba-8ec5-9ea555a65a13\") " Mar 17 18:43:13.664962 kubelet[1899]: I0317 18:43:13.664696 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f557670-6ff9-4cba-8ec5-9ea555a65a13-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7f557670-6ff9-4cba-8ec5-9ea555a65a13" (UID: "7f557670-6ff9-4cba-8ec5-9ea555a65a13"). 
InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 18:43:13.664962 kubelet[1899]: I0317 18:43:13.664712 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7f557670-6ff9-4cba-8ec5-9ea555a65a13-bpf-maps\") pod \"7f557670-6ff9-4cba-8ec5-9ea555a65a13\" (UID: \"7f557670-6ff9-4cba-8ec5-9ea555a65a13\") " Mar 17 18:43:13.665109 kubelet[1899]: I0317 18:43:13.664731 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7f557670-6ff9-4cba-8ec5-9ea555a65a13-clustermesh-secrets\") pod \"7f557670-6ff9-4cba-8ec5-9ea555a65a13\" (UID: \"7f557670-6ff9-4cba-8ec5-9ea555a65a13\") " Mar 17 18:43:13.665109 kubelet[1899]: I0317 18:43:13.664747 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7f557670-6ff9-4cba-8ec5-9ea555a65a13-lib-modules\") pod \"7f557670-6ff9-4cba-8ec5-9ea555a65a13\" (UID: \"7f557670-6ff9-4cba-8ec5-9ea555a65a13\") " Mar 17 18:43:13.665109 kubelet[1899]: I0317 18:43:13.664762 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7f557670-6ff9-4cba-8ec5-9ea555a65a13-cilium-config-path\") pod \"7f557670-6ff9-4cba-8ec5-9ea555a65a13\" (UID: \"7f557670-6ff9-4cba-8ec5-9ea555a65a13\") " Mar 17 18:43:13.665109 kubelet[1899]: I0317 18:43:13.664775 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7f557670-6ff9-4cba-8ec5-9ea555a65a13-cilium-cgroup\") pod \"7f557670-6ff9-4cba-8ec5-9ea555a65a13\" (UID: \"7f557670-6ff9-4cba-8ec5-9ea555a65a13\") " Mar 17 18:43:13.665109 kubelet[1899]: I0317 18:43:13.664787 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7f557670-6ff9-4cba-8ec5-9ea555a65a13-host-proc-sys-net\") pod \"7f557670-6ff9-4cba-8ec5-9ea555a65a13\" (UID: \"7f557670-6ff9-4cba-8ec5-9ea555a65a13\") " Mar 17 18:43:13.665109 kubelet[1899]: I0317 18:43:13.664798 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7f557670-6ff9-4cba-8ec5-9ea555a65a13-hostproc\") pod \"7f557670-6ff9-4cba-8ec5-9ea555a65a13\" (UID: \"7f557670-6ff9-4cba-8ec5-9ea555a65a13\") " Mar 17 18:43:13.665288 kubelet[1899]: I0317 18:43:13.664834 1899 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7f557670-6ff9-4cba-8ec5-9ea555a65a13-xtables-lock\") on node \"localhost\" DevicePath \"\"" Mar 17 18:43:13.665288 kubelet[1899]: I0317 18:43:13.664843 1899 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7f557670-6ff9-4cba-8ec5-9ea555a65a13-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Mar 17 18:43:13.665288 kubelet[1899]: I0317 18:43:13.664851 1899 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7f557670-6ff9-4cba-8ec5-9ea555a65a13-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Mar 17 18:43:13.665288 kubelet[1899]: I0317 18:43:13.664866 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/7f557670-6ff9-4cba-8ec5-9ea555a65a13-hostproc" (OuterVolumeSpecName: "hostproc") pod "7f557670-6ff9-4cba-8ec5-9ea555a65a13" (UID: "7f557670-6ff9-4cba-8ec5-9ea555a65a13"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 18:43:13.665288 kubelet[1899]: I0317 18:43:13.664879 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f557670-6ff9-4cba-8ec5-9ea555a65a13-cni-path" (OuterVolumeSpecName: "cni-path") pod "7f557670-6ff9-4cba-8ec5-9ea555a65a13" (UID: "7f557670-6ff9-4cba-8ec5-9ea555a65a13"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 18:43:13.665288 kubelet[1899]: I0317 18:43:13.664894 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f557670-6ff9-4cba-8ec5-9ea555a65a13-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7f557670-6ff9-4cba-8ec5-9ea555a65a13" (UID: "7f557670-6ff9-4cba-8ec5-9ea555a65a13"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 18:43:13.665457 kubelet[1899]: I0317 18:43:13.665014 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f557670-6ff9-4cba-8ec5-9ea555a65a13-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7f557670-6ff9-4cba-8ec5-9ea555a65a13" (UID: "7f557670-6ff9-4cba-8ec5-9ea555a65a13"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 18:43:13.665926 kubelet[1899]: I0317 18:43:13.665892 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f557670-6ff9-4cba-8ec5-9ea555a65a13-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7f557670-6ff9-4cba-8ec5-9ea555a65a13" (UID: "7f557670-6ff9-4cba-8ec5-9ea555a65a13"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 18:43:13.666151 kubelet[1899]: I0317 18:43:13.665904 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f557670-6ff9-4cba-8ec5-9ea555a65a13-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7f557670-6ff9-4cba-8ec5-9ea555a65a13" (UID: "7f557670-6ff9-4cba-8ec5-9ea555a65a13"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 18:43:13.666253 kubelet[1899]: I0317 18:43:13.665923 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f557670-6ff9-4cba-8ec5-9ea555a65a13-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7f557670-6ff9-4cba-8ec5-9ea555a65a13" (UID: "7f557670-6ff9-4cba-8ec5-9ea555a65a13"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 18:43:13.667714 kubelet[1899]: I0317 18:43:13.667694 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f557670-6ff9-4cba-8ec5-9ea555a65a13-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7f557670-6ff9-4cba-8ec5-9ea555a65a13" (UID: "7f557670-6ff9-4cba-8ec5-9ea555a65a13"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 17 18:43:13.667817 kubelet[1899]: I0317 18:43:13.667730 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f557670-6ff9-4cba-8ec5-9ea555a65a13-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7f557670-6ff9-4cba-8ec5-9ea555a65a13" (UID: "7f557670-6ff9-4cba-8ec5-9ea555a65a13"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 17 18:43:13.668518 kubelet[1899]: I0317 18:43:13.668477 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f557670-6ff9-4cba-8ec5-9ea555a65a13-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7f557670-6ff9-4cba-8ec5-9ea555a65a13" (UID: "7f557670-6ff9-4cba-8ec5-9ea555a65a13"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 17 18:43:13.668780 kubelet[1899]: I0317 18:43:13.668743 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f557670-6ff9-4cba-8ec5-9ea555a65a13-kube-api-access-nqzx7" (OuterVolumeSpecName: "kube-api-access-nqzx7") pod "7f557670-6ff9-4cba-8ec5-9ea555a65a13" (UID: "7f557670-6ff9-4cba-8ec5-9ea555a65a13"). InnerVolumeSpecName "kube-api-access-nqzx7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 17 18:43:13.765677 kubelet[1899]: I0317 18:43:13.765614 1899 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7f557670-6ff9-4cba-8ec5-9ea555a65a13-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Mar 17 18:43:13.765677 kubelet[1899]: I0317 18:43:13.765656 1899 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7f557670-6ff9-4cba-8ec5-9ea555a65a13-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Mar 17 18:43:13.765677 kubelet[1899]: I0317 18:43:13.765664 1899 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7f557670-6ff9-4cba-8ec5-9ea555a65a13-hostproc\") on node \"localhost\" DevicePath \"\"" Mar 17 18:43:13.765677 kubelet[1899]: I0317 18:43:13.765672 1899 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7f557670-6ff9-4cba-8ec5-9ea555a65a13-hubble-tls\") on node \"localhost\" DevicePath \"\"" Mar 17 18:43:13.765677 kubelet[1899]: I0317 18:43:13.765680 1899 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7f557670-6ff9-4cba-8ec5-9ea555a65a13-cilium-run\") on node \"localhost\" DevicePath \"\"" Mar 17 18:43:13.765677 kubelet[1899]: I0317 18:43:13.765688 1899 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nqzx7\" (UniqueName: \"kubernetes.io/projected/7f557670-6ff9-4cba-8ec5-9ea555a65a13-kube-api-access-nqzx7\") on node \"localhost\" DevicePath \"\"" Mar 17 18:43:13.765677 kubelet[1899]: I0317 18:43:13.765695 1899 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7f557670-6ff9-4cba-8ec5-9ea555a65a13-cni-path\") on node \"localhost\" DevicePath \"\"" Mar 17 18:43:13.765998 kubelet[1899]: I0317 18:43:13.765703 1899 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7f557670-6ff9-4cba-8ec5-9ea555a65a13-bpf-maps\") on node \"localhost\" DevicePath \"\"" Mar 17 18:43:13.765998 
kubelet[1899]: I0317 18:43:13.765711 1899 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7f557670-6ff9-4cba-8ec5-9ea555a65a13-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Mar 17 18:43:13.765998 kubelet[1899]: I0317 18:43:13.765719 1899 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7f557670-6ff9-4cba-8ec5-9ea555a65a13-lib-modules\") on node \"localhost\" DevicePath \"\"" Mar 17 18:43:13.765998 kubelet[1899]: I0317 18:43:13.765726 1899 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7f557670-6ff9-4cba-8ec5-9ea555a65a13-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 17 18:43:14.118751 systemd[1]: Removed slice kubepods-burstable-pod7f557670_6ff9_4cba_8ec5_9ea555a65a13.slice. Mar 17 18:43:14.118830 systemd[1]: kubepods-burstable-pod7f557670_6ff9_4cba_8ec5_9ea555a65a13.slice: Consumed 6.012s CPU time. Mar 17 18:43:14.120073 systemd[1]: Removed slice kubepods-besteffort-pod06bff352_5055_4d90_aed6_7b0b9753ac82.slice. Mar 17 18:43:14.273197 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2942f339b97de465ec206ecbaca34fa0be7c4f7ca7774c286ff7fccf1e8e5273-rootfs.mount: Deactivated successfully. Mar 17 18:43:14.273335 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2942f339b97de465ec206ecbaca34fa0be7c4f7ca7774c286ff7fccf1e8e5273-shm.mount: Deactivated successfully. Mar 17 18:43:14.273415 systemd[1]: var-lib-kubelet-pods-7f557670\x2d6ff9\x2d4cba\x2d8ec5\x2d9ea555a65a13-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnqzx7.mount: Deactivated successfully. Mar 17 18:43:14.273497 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9096e82f36b6e4fb868b9f2149d4763d248e25064144deb0757d7947ca07beaa-rootfs.mount: Deactivated successfully. Mar 17 18:43:14.273570 systemd[1]: var-lib-kubelet-pods-06bff352\x2d5055\x2d4d90\x2daed6\x2d7b0b9753ac82-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddp5wp.mount: Deactivated successfully. Mar 17 18:43:14.273642 systemd[1]: var-lib-kubelet-pods-7f557670\x2d6ff9\x2d4cba\x2d8ec5\x2d9ea555a65a13-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 17 18:43:14.273713 systemd[1]: var-lib-kubelet-pods-7f557670\x2d6ff9\x2d4cba\x2d8ec5\x2d9ea555a65a13-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
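The mount units above show systemd's unit-name escaping: the leading "/" is dropped, the remaining path separators become "-", and bytes outside [a-zA-Z0-9:_.] are hex-escaped, which is why the literal dashes inside a pod UID appear as \x2d. systemd-escape(1) implements the full rule set; a simplified sketch of the path case:

    package main

    import "fmt"

    // escapePath approximates systemd's path escaping as seen in the
    // var-lib-kubelet-pods-... mount units: strip the leading "/", turn
    // the remaining "/" separators into "-", and hex-escape any other
    // byte outside [a-zA-Z0-9:_.] as \xXX (so "-" in a UID becomes \x2d).
    func escapePath(p string) string {
        if len(p) > 0 && p[0] == '/' {
            p = p[1:]
        }
        out := ""
        for i := 0; i < len(p); i++ {
            c := p[i]
            switch {
            case c == '/':
                out += "-"
            case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
                c >= '0' && c <= '9', c == ':', c == '_', c == '.':
                out += string(c)
            default:
                out += fmt.Sprintf(`\x%02x`, c)
            }
        }
        return out
    }

    func main() {
        fmt.Println(escapePath("/var/lib/kubelet/pods/06bff352-5055-4d90-aed6-7b0b9753ac82"))
    }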
Mar 17 18:43:14.278630 kubelet[1899]: I0317 18:43:14.276939 1899 scope.go:117] "RemoveContainer" containerID="37bb9e67b8ad9d9a554a2a240b5111b2c63c420475859cc041006610e82f42d2" Mar 17 18:43:14.279287 env[1208]: time="2025-03-17T18:43:14.279036239Z" level=info msg="RemoveContainer for \"37bb9e67b8ad9d9a554a2a240b5111b2c63c420475859cc041006610e82f42d2\"" Mar 17 18:43:14.285601 env[1208]: time="2025-03-17T18:43:14.285546853Z" level=info msg="RemoveContainer for \"37bb9e67b8ad9d9a554a2a240b5111b2c63c420475859cc041006610e82f42d2\" returns successfully" Mar 17 18:43:14.285940 kubelet[1899]: I0317 18:43:14.285786 1899 scope.go:117] "RemoveContainer" containerID="c8cad46968e97f09583c952c326c07f91f5a7fdf4a0a89327b3c10ed6632ea43" Mar 17 18:43:14.286807 env[1208]: time="2025-03-17T18:43:14.286767010Z" level=info msg="RemoveContainer for \"c8cad46968e97f09583c952c326c07f91f5a7fdf4a0a89327b3c10ed6632ea43\"" Mar 17 18:43:14.289882 env[1208]: time="2025-03-17T18:43:14.289849809Z" level=info msg="RemoveContainer for \"c8cad46968e97f09583c952c326c07f91f5a7fdf4a0a89327b3c10ed6632ea43\" returns successfully" Mar 17 18:43:14.290040 kubelet[1899]: I0317 18:43:14.290005 1899 scope.go:117] "RemoveContainer" containerID="dd614b8543c33ddc1f1ca3794bace163ab3f47419c305ac9b27a7bd7f37904be" Mar 17 18:43:14.291085 env[1208]: time="2025-03-17T18:43:14.291053876Z" level=info msg="RemoveContainer for \"dd614b8543c33ddc1f1ca3794bace163ab3f47419c305ac9b27a7bd7f37904be\"" Mar 17 18:43:14.295787 env[1208]: time="2025-03-17T18:43:14.295754157Z" level=info msg="RemoveContainer for \"dd614b8543c33ddc1f1ca3794bace163ab3f47419c305ac9b27a7bd7f37904be\" returns successfully" Mar 17 18:43:14.295937 kubelet[1899]: I0317 18:43:14.295916 1899 scope.go:117] "RemoveContainer" containerID="dfcb134671e131ad7f94806bdcd705178b45c196e2de1b43277c880b37e92363" Mar 17 18:43:14.296893 env[1208]: time="2025-03-17T18:43:14.296859934Z" level=info msg="RemoveContainer for \"dfcb134671e131ad7f94806bdcd705178b45c196e2de1b43277c880b37e92363\"" Mar 17 18:43:14.302936 env[1208]: time="2025-03-17T18:43:14.302897798Z" level=info msg="RemoveContainer for \"dfcb134671e131ad7f94806bdcd705178b45c196e2de1b43277c880b37e92363\" returns successfully" Mar 17 18:43:14.303118 kubelet[1899]: I0317 18:43:14.303075 1899 scope.go:117] "RemoveContainer" containerID="395475e11573e35094888432f96b427495e802e1236af93653529def8cec1fd5" Mar 17 18:43:14.303944 env[1208]: time="2025-03-17T18:43:14.303900326Z" level=info msg="RemoveContainer for \"395475e11573e35094888432f96b427495e802e1236af93653529def8cec1fd5\"" Mar 17 18:43:14.306819 env[1208]: time="2025-03-17T18:43:14.306788402Z" level=info msg="RemoveContainer for \"395475e11573e35094888432f96b427495e802e1236af93653529def8cec1fd5\" returns successfully" Mar 17 18:43:14.306973 kubelet[1899]: I0317 18:43:14.306943 1899 scope.go:117] "RemoveContainer" containerID="37bb9e67b8ad9d9a554a2a240b5111b2c63c420475859cc041006610e82f42d2" Mar 17 18:43:14.309936 env[1208]: time="2025-03-17T18:43:14.309872986Z" level=error msg="ContainerStatus for \"37bb9e67b8ad9d9a554a2a240b5111b2c63c420475859cc041006610e82f42d2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"37bb9e67b8ad9d9a554a2a240b5111b2c63c420475859cc041006610e82f42d2\": not found" Mar 17 18:43:14.310042 kubelet[1899]: E0317 18:43:14.310021 1899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"37bb9e67b8ad9d9a554a2a240b5111b2c63c420475859cc041006610e82f42d2\": not found" containerID="37bb9e67b8ad9d9a554a2a240b5111b2c63c420475859cc041006610e82f42d2" Mar 17 18:43:14.310169 kubelet[1899]: I0317 18:43:14.310054 1899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"37bb9e67b8ad9d9a554a2a240b5111b2c63c420475859cc041006610e82f42d2"} err="failed to get container status \"37bb9e67b8ad9d9a554a2a240b5111b2c63c420475859cc041006610e82f42d2\": rpc error: code = NotFound desc = an error occurred when try to find container \"37bb9e67b8ad9d9a554a2a240b5111b2c63c420475859cc041006610e82f42d2\": not found" Mar 17 18:43:14.310210 kubelet[1899]: I0317 18:43:14.310170 1899 scope.go:117] "RemoveContainer" containerID="c8cad46968e97f09583c952c326c07f91f5a7fdf4a0a89327b3c10ed6632ea43" Mar 17 18:43:14.310393 env[1208]: time="2025-03-17T18:43:14.310335896Z" level=error msg="ContainerStatus for \"c8cad46968e97f09583c952c326c07f91f5a7fdf4a0a89327b3c10ed6632ea43\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c8cad46968e97f09583c952c326c07f91f5a7fdf4a0a89327b3c10ed6632ea43\": not found" Mar 17 18:43:14.310501 kubelet[1899]: E0317 18:43:14.310479 1899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c8cad46968e97f09583c952c326c07f91f5a7fdf4a0a89327b3c10ed6632ea43\": not found" containerID="c8cad46968e97f09583c952c326c07f91f5a7fdf4a0a89327b3c10ed6632ea43" Mar 17 18:43:14.310552 kubelet[1899]: I0317 18:43:14.310505 1899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c8cad46968e97f09583c952c326c07f91f5a7fdf4a0a89327b3c10ed6632ea43"} err="failed to get container status \"c8cad46968e97f09583c952c326c07f91f5a7fdf4a0a89327b3c10ed6632ea43\": rpc error: code = NotFound desc = an error occurred when try to find container \"c8cad46968e97f09583c952c326c07f91f5a7fdf4a0a89327b3c10ed6632ea43\": not found" Mar 17 18:43:14.310552 kubelet[1899]: I0317 18:43:14.310524 1899 scope.go:117] "RemoveContainer" containerID="dd614b8543c33ddc1f1ca3794bace163ab3f47419c305ac9b27a7bd7f37904be" Mar 17 18:43:14.310768 env[1208]: time="2025-03-17T18:43:14.310711538Z" level=error msg="ContainerStatus for \"dd614b8543c33ddc1f1ca3794bace163ab3f47419c305ac9b27a7bd7f37904be\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dd614b8543c33ddc1f1ca3794bace163ab3f47419c305ac9b27a7bd7f37904be\": not found" Mar 17 18:43:14.310895 kubelet[1899]: E0317 18:43:14.310869 1899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dd614b8543c33ddc1f1ca3794bace163ab3f47419c305ac9b27a7bd7f37904be\": not found" containerID="dd614b8543c33ddc1f1ca3794bace163ab3f47419c305ac9b27a7bd7f37904be" Mar 17 18:43:14.310948 kubelet[1899]: I0317 18:43:14.310899 1899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dd614b8543c33ddc1f1ca3794bace163ab3f47419c305ac9b27a7bd7f37904be"} err="failed to get container status \"dd614b8543c33ddc1f1ca3794bace163ab3f47419c305ac9b27a7bd7f37904be\": rpc error: code = NotFound desc = an error occurred when try to find container \"dd614b8543c33ddc1f1ca3794bace163ab3f47419c305ac9b27a7bd7f37904be\": not found" Mar 17 18:43:14.310948 kubelet[1899]: I0317 18:43:14.310920 1899 scope.go:117] "RemoveContainer" 
containerID="dfcb134671e131ad7f94806bdcd705178b45c196e2de1b43277c880b37e92363" Mar 17 18:43:14.311114 env[1208]: time="2025-03-17T18:43:14.311068816Z" level=error msg="ContainerStatus for \"dfcb134671e131ad7f94806bdcd705178b45c196e2de1b43277c880b37e92363\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dfcb134671e131ad7f94806bdcd705178b45c196e2de1b43277c880b37e92363\": not found" Mar 17 18:43:14.311215 kubelet[1899]: E0317 18:43:14.311197 1899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dfcb134671e131ad7f94806bdcd705178b45c196e2de1b43277c880b37e92363\": not found" containerID="dfcb134671e131ad7f94806bdcd705178b45c196e2de1b43277c880b37e92363" Mar 17 18:43:14.311261 kubelet[1899]: I0317 18:43:14.311218 1899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dfcb134671e131ad7f94806bdcd705178b45c196e2de1b43277c880b37e92363"} err="failed to get container status \"dfcb134671e131ad7f94806bdcd705178b45c196e2de1b43277c880b37e92363\": rpc error: code = NotFound desc = an error occurred when try to find container \"dfcb134671e131ad7f94806bdcd705178b45c196e2de1b43277c880b37e92363\": not found" Mar 17 18:43:14.311261 kubelet[1899]: I0317 18:43:14.311243 1899 scope.go:117] "RemoveContainer" containerID="395475e11573e35094888432f96b427495e802e1236af93653529def8cec1fd5" Mar 17 18:43:14.311422 env[1208]: time="2025-03-17T18:43:14.311382439Z" level=error msg="ContainerStatus for \"395475e11573e35094888432f96b427495e802e1236af93653529def8cec1fd5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"395475e11573e35094888432f96b427495e802e1236af93653529def8cec1fd5\": not found" Mar 17 18:43:14.311494 kubelet[1899]: E0317 18:43:14.311474 1899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"395475e11573e35094888432f96b427495e802e1236af93653529def8cec1fd5\": not found" containerID="395475e11573e35094888432f96b427495e802e1236af93653529def8cec1fd5" Mar 17 18:43:14.311525 kubelet[1899]: I0317 18:43:14.311493 1899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"395475e11573e35094888432f96b427495e802e1236af93653529def8cec1fd5"} err="failed to get container status \"395475e11573e35094888432f96b427495e802e1236af93653529def8cec1fd5\": rpc error: code = NotFound desc = an error occurred when try to find container \"395475e11573e35094888432f96b427495e802e1236af93653529def8cec1fd5\": not found" Mar 17 18:43:14.311525 kubelet[1899]: I0317 18:43:14.311505 1899 scope.go:117] "RemoveContainer" containerID="a35b8b8a4aab1c8a21858357990daef521ea97fa8c4ea2078497cad3405f40b5" Mar 17 18:43:14.312269 env[1208]: time="2025-03-17T18:43:14.312243216Z" level=info msg="RemoveContainer for \"a35b8b8a4aab1c8a21858357990daef521ea97fa8c4ea2078497cad3405f40b5\"" Mar 17 18:43:14.314862 env[1208]: time="2025-03-17T18:43:14.314838437Z" level=info msg="RemoveContainer for \"a35b8b8a4aab1c8a21858357990daef521ea97fa8c4ea2078497cad3405f40b5\" returns successfully" Mar 17 18:43:14.314999 kubelet[1899]: I0317 18:43:14.314982 1899 scope.go:117] "RemoveContainer" containerID="a35b8b8a4aab1c8a21858357990daef521ea97fa8c4ea2078497cad3405f40b5" Mar 17 18:43:14.315279 env[1208]: time="2025-03-17T18:43:14.315236242Z" level=error msg="ContainerStatus for 
\"a35b8b8a4aab1c8a21858357990daef521ea97fa8c4ea2078497cad3405f40b5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a35b8b8a4aab1c8a21858357990daef521ea97fa8c4ea2078497cad3405f40b5\": not found" Mar 17 18:43:14.315393 kubelet[1899]: E0317 18:43:14.315373 1899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a35b8b8a4aab1c8a21858357990daef521ea97fa8c4ea2078497cad3405f40b5\": not found" containerID="a35b8b8a4aab1c8a21858357990daef521ea97fa8c4ea2078497cad3405f40b5" Mar 17 18:43:14.315431 kubelet[1899]: I0317 18:43:14.315396 1899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a35b8b8a4aab1c8a21858357990daef521ea97fa8c4ea2078497cad3405f40b5"} err="failed to get container status \"a35b8b8a4aab1c8a21858357990daef521ea97fa8c4ea2078497cad3405f40b5\": rpc error: code = NotFound desc = an error occurred when try to find container \"a35b8b8a4aab1c8a21858357990daef521ea97fa8c4ea2078497cad3405f40b5\": not found" Mar 17 18:43:15.247932 sshd[3527]: pam_unix(sshd:session): session closed for user core Mar 17 18:43:15.250702 systemd[1]: sshd@22-10.0.0.88:22-10.0.0.1:38096.service: Deactivated successfully. Mar 17 18:43:15.251408 systemd[1]: session-23.scope: Deactivated successfully. Mar 17 18:43:15.252007 systemd-logind[1193]: Session 23 logged out. Waiting for processes to exit. Mar 17 18:43:15.253392 systemd[1]: Started sshd@23-10.0.0.88:22-10.0.0.1:36184.service. Mar 17 18:43:15.254255 systemd-logind[1193]: Removed session 23. Mar 17 18:43:15.282071 sshd[3689]: Accepted publickey for core from 10.0.0.1 port 36184 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64 Mar 17 18:43:15.283422 sshd[3689]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:43:15.286802 systemd-logind[1193]: New session 24 of user core. Mar 17 18:43:15.287623 systemd[1]: Started session-24.scope. Mar 17 18:43:15.696995 sshd[3689]: pam_unix(sshd:session): session closed for user core Mar 17 18:43:15.701303 systemd[1]: Started sshd@24-10.0.0.88:22-10.0.0.1:36190.service. Mar 17 18:43:15.703467 systemd[1]: sshd@23-10.0.0.88:22-10.0.0.1:36184.service: Deactivated successfully. Mar 17 18:43:15.704004 systemd[1]: session-24.scope: Deactivated successfully. Mar 17 18:43:15.704670 systemd-logind[1193]: Session 24 logged out. Waiting for processes to exit. Mar 17 18:43:15.705333 systemd-logind[1193]: Removed session 24. Mar 17 18:43:15.722480 kubelet[1899]: I0317 18:43:15.722432 1899 memory_manager.go:355] "RemoveStaleState removing state" podUID="06bff352-5055-4d90-aed6-7b0b9753ac82" containerName="cilium-operator" Mar 17 18:43:15.722480 kubelet[1899]: I0317 18:43:15.722462 1899 memory_manager.go:355] "RemoveStaleState removing state" podUID="7f557670-6ff9-4cba-8ec5-9ea555a65a13" containerName="cilium-agent" Mar 17 18:43:15.727410 systemd[1]: Created slice kubepods-burstable-pod25cdec4b_3ca1_407a_a8ea_a8a5a5df2de0.slice. Mar 17 18:43:15.731929 sshd[3700]: Accepted publickey for core from 10.0.0.1 port 36190 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64 Mar 17 18:43:15.737950 sshd[3700]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:43:15.742670 systemd[1]: Started session-25.scope. Mar 17 18:43:15.743703 systemd-logind[1193]: New session 25 of user core. 
Mar 17 18:43:15.876060 kubelet[1899]: I0317 18:43:15.876008 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0-xtables-lock\") pod \"cilium-lmvbc\" (UID: \"25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0\") " pod="kube-system/cilium-lmvbc" Mar 17 18:43:15.876060 kubelet[1899]: I0317 18:43:15.876059 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0-hubble-tls\") pod \"cilium-lmvbc\" (UID: \"25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0\") " pod="kube-system/cilium-lmvbc" Mar 17 18:43:15.876293 kubelet[1899]: I0317 18:43:15.876091 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0-host-proc-sys-net\") pod \"cilium-lmvbc\" (UID: \"25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0\") " pod="kube-system/cilium-lmvbc" Mar 17 18:43:15.876293 kubelet[1899]: I0317 18:43:15.876120 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0-host-proc-sys-kernel\") pod \"cilium-lmvbc\" (UID: \"25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0\") " pod="kube-system/cilium-lmvbc" Mar 17 18:43:15.876293 kubelet[1899]: I0317 18:43:15.876174 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-br7gv\" (UniqueName: \"kubernetes.io/projected/25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0-kube-api-access-br7gv\") pod \"cilium-lmvbc\" (UID: \"25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0\") " pod="kube-system/cilium-lmvbc" Mar 17 18:43:15.876293 kubelet[1899]: I0317 18:43:15.876196 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0-cilium-run\") pod \"cilium-lmvbc\" (UID: \"25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0\") " pod="kube-system/cilium-lmvbc" Mar 17 18:43:15.876293 kubelet[1899]: I0317 18:43:15.876215 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0-etc-cni-netd\") pod \"cilium-lmvbc\" (UID: \"25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0\") " pod="kube-system/cilium-lmvbc" Mar 17 18:43:15.876506 kubelet[1899]: I0317 18:43:15.876230 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0-clustermesh-secrets\") pod \"cilium-lmvbc\" (UID: \"25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0\") " pod="kube-system/cilium-lmvbc" Mar 17 18:43:15.876506 kubelet[1899]: I0317 18:43:15.876247 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0-lib-modules\") pod \"cilium-lmvbc\" (UID: \"25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0\") " pod="kube-system/cilium-lmvbc" Mar 17 18:43:15.876506 kubelet[1899]: I0317 18:43:15.876280 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0-cilium-cgroup\") pod \"cilium-lmvbc\" (UID: \"25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0\") " pod="kube-system/cilium-lmvbc" Mar 17 18:43:15.876506 kubelet[1899]: I0317 18:43:15.876300 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0-bpf-maps\") pod \"cilium-lmvbc\" (UID: \"25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0\") " pod="kube-system/cilium-lmvbc" Mar 17 18:43:15.876506 kubelet[1899]: I0317 18:43:15.876366 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0-cni-path\") pod \"cilium-lmvbc\" (UID: \"25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0\") " pod="kube-system/cilium-lmvbc" Mar 17 18:43:15.876506 kubelet[1899]: I0317 18:43:15.876395 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0-cilium-config-path\") pod \"cilium-lmvbc\" (UID: \"25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0\") " pod="kube-system/cilium-lmvbc" Mar 17 18:43:15.876692 kubelet[1899]: I0317 18:43:15.876409 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0-cilium-ipsec-secrets\") pod \"cilium-lmvbc\" (UID: \"25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0\") " pod="kube-system/cilium-lmvbc" Mar 17 18:43:15.876692 kubelet[1899]: I0317 18:43:15.876426 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0-hostproc\") pod \"cilium-lmvbc\" (UID: \"25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0\") " pod="kube-system/cilium-lmvbc" Mar 17 18:43:16.113468 kubelet[1899]: I0317 18:43:16.113422 1899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06bff352-5055-4d90-aed6-7b0b9753ac82" path="/var/lib/kubelet/pods/06bff352-5055-4d90-aed6-7b0b9753ac82/volumes" Mar 17 18:43:16.113796 kubelet[1899]: I0317 18:43:16.113770 1899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f557670-6ff9-4cba-8ec5-9ea555a65a13" path="/var/lib/kubelet/pods/7f557670-6ff9-4cba-8ec5-9ea555a65a13/volumes" Mar 17 18:43:16.263845 sshd[3700]: pam_unix(sshd:session): session closed for user core Mar 17 18:43:16.267532 systemd[1]: sshd@24-10.0.0.88:22-10.0.0.1:36190.service: Deactivated successfully. Mar 17 18:43:16.268078 systemd[1]: session-25.scope: Deactivated successfully. Mar 17 18:43:16.270602 systemd[1]: Started sshd@25-10.0.0.88:22-10.0.0.1:36194.service. Mar 17 18:43:16.271263 systemd-logind[1193]: Session 25 logged out. Waiting for processes to exit. Mar 17 18:43:16.272113 systemd-logind[1193]: Removed session 25. Mar 17 18:43:16.298938 sshd[3718]: Accepted publickey for core from 10.0.0.1 port 36194 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64 Mar 17 18:43:16.300285 sshd[3718]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:43:16.303470 systemd-logind[1193]: New session 26 of user core. Mar 17 18:43:16.304269 systemd[1]: Started session-26.scope. 
Mar 17 18:43:16.330219 kubelet[1899]: E0317 18:43:16.330184 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:43:16.330752 env[1208]: time="2025-03-17T18:43:16.330708973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lmvbc,Uid:25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0,Namespace:kube-system,Attempt:0,}" Mar 17 18:43:16.514728 env[1208]: time="2025-03-17T18:43:16.514592282Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:43:16.514969 env[1208]: time="2025-03-17T18:43:16.514881628Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:43:16.514969 env[1208]: time="2025-03-17T18:43:16.514898010Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:43:16.515209 env[1208]: time="2025-03-17T18:43:16.515169260Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6e3710950e2386ce0f6344f5596361685235b1a51556bef3c68e6b9694941299 pid=3734 runtime=io.containerd.runc.v2 Mar 17 18:43:16.529607 systemd[1]: Started cri-containerd-6e3710950e2386ce0f6344f5596361685235b1a51556bef3c68e6b9694941299.scope. Mar 17 18:43:16.553969 env[1208]: time="2025-03-17T18:43:16.553925967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lmvbc,Uid:25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0,Namespace:kube-system,Attempt:0,} returns sandbox id \"6e3710950e2386ce0f6344f5596361685235b1a51556bef3c68e6b9694941299\"" Mar 17 18:43:16.554404 kubelet[1899]: E0317 18:43:16.554383 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:43:16.557424 env[1208]: time="2025-03-17T18:43:16.557387486Z" level=info msg="CreateContainer within sandbox \"6e3710950e2386ce0f6344f5596361685235b1a51556bef3c68e6b9694941299\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 18:43:16.570801 env[1208]: time="2025-03-17T18:43:16.570760076Z" level=info msg="CreateContainer within sandbox \"6e3710950e2386ce0f6344f5596361685235b1a51556bef3c68e6b9694941299\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5b1ad985674e16a52e284e7953bb71a0ff644d23f6cd8be566b1266f1d195505\"" Mar 17 18:43:16.584495 env[1208]: time="2025-03-17T18:43:16.584438514Z" level=info msg="StartContainer for \"5b1ad985674e16a52e284e7953bb71a0ff644d23f6cd8be566b1266f1d195505\"" Mar 17 18:43:16.610851 systemd[1]: Started cri-containerd-5b1ad985674e16a52e284e7953bb71a0ff644d23f6cd8be566b1266f1d195505.scope. Mar 17 18:43:16.621785 systemd[1]: cri-containerd-5b1ad985674e16a52e284e7953bb71a0ff644d23f6cd8be566b1266f1d195505.scope: Deactivated successfully. Mar 17 18:43:16.622227 systemd[1]: Stopped cri-containerd-5b1ad985674e16a52e284e7953bb71a0ff644d23f6cd8be566b1266f1d195505.scope. 
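The entries above walk the CRI launch sequence in order: RunPodSandbox returns a sandbox ID, CreateContainer is issued within that sandbox, then StartContainer runs the new container (here mount-cgroup, whose scope is stopped almost immediately). A stub that mirrors only that ordering, not the real k8s.io/cri-api signatures:

    package main

    import "fmt"

    // runtime is a minimal stand-in for the CRI RuntimeService calls
    // that appear in sequence in the journal above.
    type runtime interface {
        RunPodSandbox(name string) (sandboxID string, err error)
        CreateContainer(sandboxID, name string) (containerID string, err error)
        StartContainer(containerID string) error
    }

    func launch(r runtime, pod, ctr string) error {
        sb, err := r.RunPodSandbox(pod)
        if err != nil {
            return err
        }
        id, err := r.CreateContainer(sb, ctr)
        if err != nil {
            return err
        }
        // A failure here leaves the sandbox up; kubelet then issues
        // StopPodSandbox, which is exactly what follows in the journal.
        return r.StartContainer(id)
    }

    type fake struct{}

    func (fake) RunPodSandbox(string) (string, error)           { return "6e3710950e23", nil }
    func (fake) CreateContainer(string, string) (string, error) { return "5b1ad985674e", nil }
    func (fake) StartContainer(string) error                    { return fmt.Errorf("runc create failed") }

    func main() {
        fmt.Println(launch(fake{}, "cilium-lmvbc", "mount-cgroup"))
    }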
Mar 17 18:43:16.641972 env[1208]: time="2025-03-17T18:43:16.641908530Z" level=info msg="shim disconnected" id=5b1ad985674e16a52e284e7953bb71a0ff644d23f6cd8be566b1266f1d195505 Mar 17 18:43:16.641972 env[1208]: time="2025-03-17T18:43:16.641961612Z" level=warning msg="cleaning up after shim disconnected" id=5b1ad985674e16a52e284e7953bb71a0ff644d23f6cd8be566b1266f1d195505 namespace=k8s.io Mar 17 18:43:16.641972 env[1208]: time="2025-03-17T18:43:16.641970369Z" level=info msg="cleaning up dead shim" Mar 17 18:43:16.649949 env[1208]: time="2025-03-17T18:43:16.649890534Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:43:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3792 runtime=io.containerd.runc.v2\ntime=\"2025-03-17T18:43:16Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/5b1ad985674e16a52e284e7953bb71a0ff644d23f6cd8be566b1266f1d195505/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Mar 17 18:43:16.650288 env[1208]: time="2025-03-17T18:43:16.650177024Z" level=error msg="copy shim log" error="read /proc/self/fd/26: file already closed" Mar 17 18:43:16.650988 env[1208]: time="2025-03-17T18:43:16.650938918Z" level=error msg="Failed to pipe stdout of container \"5b1ad985674e16a52e284e7953bb71a0ff644d23f6cd8be566b1266f1d195505\"" error="reading from a closed fifo" Mar 17 18:43:16.651298 env[1208]: time="2025-03-17T18:43:16.651246259Z" level=error msg="Failed to pipe stderr of container \"5b1ad985674e16a52e284e7953bb71a0ff644d23f6cd8be566b1266f1d195505\"" error="reading from a closed fifo" Mar 17 18:43:16.654562 env[1208]: time="2025-03-17T18:43:16.654505089Z" level=error msg="StartContainer for \"5b1ad985674e16a52e284e7953bb71a0ff644d23f6cd8be566b1266f1d195505\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Mar 17 18:43:16.654825 kubelet[1899]: E0317 18:43:16.654773 1899 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="5b1ad985674e16a52e284e7953bb71a0ff644d23f6cd8be566b1266f1d195505" Mar 17 18:43:16.655008 kubelet[1899]: E0317 18:43:16.654978 1899 kuberuntime_manager.go:1341] "Unhandled Error" err=< Mar 17 18:43:16.655008 kubelet[1899]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Mar 17 18:43:16.655008 kubelet[1899]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Mar 17 18:43:16.655008 kubelet[1899]: rm /hostbin/cilium-mount Mar 17 18:43:16.655162 kubelet[1899]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-br7gv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-lmvbc_kube-system(25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Mar 17 18:43:16.655162 kubelet[1899]: > logger="UnhandledError" Mar 17 18:43:16.656204 kubelet[1899]: E0317 18:43:16.656131 1899 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-lmvbc" podUID="25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0" Mar 17 18:43:17.150239 kubelet[1899]: E0317 18:43:17.150204 1899 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:43:17.287106 env[1208]: time="2025-03-17T18:43:17.287065681Z" level=info msg="StopPodSandbox for \"6e3710950e2386ce0f6344f5596361685235b1a51556bef3c68e6b9694941299\"" Mar 17 18:43:17.287245 env[1208]: time="2025-03-17T18:43:17.287131898Z" level=info msg="Container to stop \"5b1ad985674e16a52e284e7953bb71a0ff644d23f6cd8be566b1266f1d195505\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:43:17.289516 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6e3710950e2386ce0f6344f5596361685235b1a51556bef3c68e6b9694941299-shm.mount: Deactivated successfully. Mar 17 18:43:17.295008 systemd[1]: cri-containerd-6e3710950e2386ce0f6344f5596361685235b1a51556bef3c68e6b9694941299.scope: Deactivated successfully. 
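The root cause in the error above is the write to /proc/self/attr/keycreate failing with EINVAL: before exec'ing the container's init process, runc writes the container's SELinux label (here Type:spc_t per the SecurityContext in the spec dump) to that file so kernel keyrings get labelled too, and a kernel or policy that rejects the label makes container creation fail exactly as logged. A minimal stdlib probe of the same write; the label string is an assumption modelled on the spec dump, not read from the host:

// Probe: attempt the same keycreate write runc performs.
package main

import (
	"fmt"
	"os"
)

func main() {
	label := []byte("system_u:system_r:spc_t:s0") // assumed label
	if err := os.WriteFile("/proc/self/attr/keycreate", label, 0); err != nil {
		// On hosts whose policy rejects the type (or with SELinux
		// disabled) this fails, e.g. with "invalid argument" -- the
		// same error string seen in the log above.
		fmt.Println("keycreate write failed:", err)
		return
	}
	fmt.Println("keycreate label set")
}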
Mar 17 18:43:17.310173 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6e3710950e2386ce0f6344f5596361685235b1a51556bef3c68e6b9694941299-rootfs.mount: Deactivated successfully. Mar 17 18:43:17.314104 env[1208]: time="2025-03-17T18:43:17.314054097Z" level=info msg="shim disconnected" id=6e3710950e2386ce0f6344f5596361685235b1a51556bef3c68e6b9694941299 Mar 17 18:43:17.314104 env[1208]: time="2025-03-17T18:43:17.314095847Z" level=warning msg="cleaning up after shim disconnected" id=6e3710950e2386ce0f6344f5596361685235b1a51556bef3c68e6b9694941299 namespace=k8s.io Mar 17 18:43:17.314104 env[1208]: time="2025-03-17T18:43:17.314104785Z" level=info msg="cleaning up dead shim" Mar 17 18:43:17.320376 env[1208]: time="2025-03-17T18:43:17.320324315Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:43:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3823 runtime=io.containerd.runc.v2\n" Mar 17 18:43:17.320607 env[1208]: time="2025-03-17T18:43:17.320576911Z" level=info msg="TearDown network for sandbox \"6e3710950e2386ce0f6344f5596361685235b1a51556bef3c68e6b9694941299\" successfully" Mar 17 18:43:17.320607 env[1208]: time="2025-03-17T18:43:17.320599444Z" level=info msg="StopPodSandbox for \"6e3710950e2386ce0f6344f5596361685235b1a51556bef3c68e6b9694941299\" returns successfully" Mar 17 18:43:17.385466 kubelet[1899]: I0317 18:43:17.385426 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0-lib-modules\") pod \"25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0\" (UID: \"25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0\") " Mar 17 18:43:17.385466 kubelet[1899]: I0317 18:43:17.385460 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0-bpf-maps\") pod \"25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0\" (UID: \"25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0\") " Mar 17 18:43:17.385466 kubelet[1899]: I0317 18:43:17.385472 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0-cni-path\") pod \"25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0\" (UID: \"25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0\") " Mar 17 18:43:17.385466 kubelet[1899]: I0317 18:43:17.385485 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0-cilium-run\") pod \"25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0\" (UID: \"25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0\") " Mar 17 18:43:17.385767 kubelet[1899]: I0317 18:43:17.385500 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0-host-proc-sys-kernel\") pod \"25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0\" (UID: \"25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0\") " Mar 17 18:43:17.385767 kubelet[1899]: I0317 18:43:17.385520 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0-xtables-lock\") pod \"25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0\" (UID: \"25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0\") " Mar 17 18:43:17.385767 kubelet[1899]: I0317 18:43:17.385540 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0-cilium-config-path\") pod \"25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0\" (UID: \"25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0\") " Mar 17 18:43:17.385767 kubelet[1899]: I0317 18:43:17.385549 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0" (UID: "25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 18:43:17.385767 kubelet[1899]: I0317 18:43:17.385549 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0-cni-path" (OuterVolumeSpecName: "cni-path") pod "25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0" (UID: "25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 18:43:17.385767 kubelet[1899]: I0317 18:43:17.385556 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0-host-proc-sys-net\") pod \"25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0\" (UID: \"25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0\") " Mar 17 18:43:17.385767 kubelet[1899]: I0317 18:43:17.385571 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0" (UID: "25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 18:43:17.385767 kubelet[1899]: I0317 18:43:17.385596 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0-clustermesh-secrets\") pod \"25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0\" (UID: \"25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0\") " Mar 17 18:43:17.385767 kubelet[1899]: I0317 18:43:17.385615 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0-hostproc\") pod \"25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0\" (UID: \"25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0\") " Mar 17 18:43:17.385767 kubelet[1899]: I0317 18:43:17.385632 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0-hubble-tls\") pod \"25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0\" (UID: \"25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0\") " Mar 17 18:43:17.385767 kubelet[1899]: I0317 18:43:17.385647 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0-etc-cni-netd\") pod \"25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0\" (UID: \"25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0\") " Mar 17 18:43:17.385767 kubelet[1899]: I0317 18:43:17.385659 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0-cilium-cgroup\") pod \"25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0\" (UID: \"25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0\") " Mar 17 18:43:17.385767 kubelet[1899]: I0317 18:43:17.385673 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0-cilium-ipsec-secrets\") pod \"25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0\" (UID: \"25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0\") " Mar 17 18:43:17.385767 kubelet[1899]: I0317 18:43:17.385689 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-br7gv\" (UniqueName: \"kubernetes.io/projected/25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0-kube-api-access-br7gv\") pod \"25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0\" (UID: \"25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0\") " Mar 17 18:43:17.385767 kubelet[1899]: I0317 18:43:17.385721 1899 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0-cni-path\") on node \"localhost\" DevicePath \"\"" Mar 17 18:43:17.385767 kubelet[1899]: I0317 18:43:17.385728 1899 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0-lib-modules\") on node \"localhost\" DevicePath \"\"" Mar 17 18:43:17.386173 kubelet[1899]: I0317 18:43:17.385736 1899 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Mar 17 18:43:17.386173 kubelet[1899]: I0317 18:43:17.385573 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0-host-proc-sys-net" 
(OuterVolumeSpecName: "host-proc-sys-net") pod "25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0" (UID: "25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 18:43:17.386230 kubelet[1899]: I0317 18:43:17.385583 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0" (UID: "25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 18:43:17.386230 kubelet[1899]: I0317 18:43:17.385582 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0" (UID: "25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 18:43:17.386230 kubelet[1899]: I0317 18:43:17.385853 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0" (UID: "25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 18:43:17.386230 kubelet[1899]: I0317 18:43:17.386201 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0" (UID: "25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 18:43:17.386230 kubelet[1899]: I0317 18:43:17.386215 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0" (UID: "25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 18:43:17.386349 kubelet[1899]: I0317 18:43:17.386226 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0-hostproc" (OuterVolumeSpecName: "hostproc") pod "25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0" (UID: "25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 18:43:17.387436 kubelet[1899]: I0317 18:43:17.387406 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0" (UID: "25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 17 18:43:17.388201 kubelet[1899]: I0317 18:43:17.388174 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0-kube-api-access-br7gv" (OuterVolumeSpecName: "kube-api-access-br7gv") pod "25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0" (UID: "25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0"). InnerVolumeSpecName "kube-api-access-br7gv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 17 18:43:17.389294 kubelet[1899]: I0317 18:43:17.389242 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0" (UID: "25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 17 18:43:17.392237 kubelet[1899]: I0317 18:43:17.390228 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0" (UID: "25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 17 18:43:17.392237 kubelet[1899]: I0317 18:43:17.390247 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0" (UID: "25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 17 18:43:17.390409 systemd[1]: var-lib-kubelet-pods-25cdec4b\x2d3ca1\x2d407a\x2da8ea\x2da8a5a5df2de0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbr7gv.mount: Deactivated successfully. Mar 17 18:43:17.392248 systemd[1]: var-lib-kubelet-pods-25cdec4b\x2d3ca1\x2d407a\x2da8ea\x2da8a5a5df2de0-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Mar 17 18:43:17.392316 systemd[1]: var-lib-kubelet-pods-25cdec4b\x2d3ca1\x2d407a\x2da8ea\x2da8a5a5df2de0-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 17 18:43:17.392377 systemd[1]: var-lib-kubelet-pods-25cdec4b\x2d3ca1\x2d407a\x2da8ea\x2da8a5a5df2de0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Mar 17 18:43:17.486839 kubelet[1899]: I0317 18:43:17.486690 1899 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0-bpf-maps\") on node \"localhost\" DevicePath \"\"" Mar 17 18:43:17.486839 kubelet[1899]: I0317 18:43:17.486727 1899 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0-cilium-run\") on node \"localhost\" DevicePath \"\"" Mar 17 18:43:17.486839 kubelet[1899]: I0317 18:43:17.486750 1899 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 17 18:43:17.486839 kubelet[1899]: I0317 18:43:17.486765 1899 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0-xtables-lock\") on node \"localhost\" DevicePath \"\"" Mar 17 18:43:17.486839 kubelet[1899]: I0317 18:43:17.486776 1899 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Mar 17 18:43:17.486839 kubelet[1899]: I0317 18:43:17.486785 1899 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Mar 17 18:43:17.486839 kubelet[1899]: I0317 18:43:17.486794 1899 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0-hostproc\") on node \"localhost\" DevicePath \"\"" Mar 17 18:43:17.486839 kubelet[1899]: I0317 18:43:17.486803 1899 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0-hubble-tls\") on node \"localhost\" DevicePath \"\"" Mar 17 18:43:17.486839 kubelet[1899]: I0317 18:43:17.486812 1899 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Mar 17 18:43:17.486839 kubelet[1899]: I0317 18:43:17.486821 1899 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Mar 17 18:43:17.486839 kubelet[1899]: I0317 18:43:17.486832 1899 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Mar 17 18:43:17.486839 kubelet[1899]: I0317 18:43:17.486842 1899 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-br7gv\" (UniqueName: \"kubernetes.io/projected/25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0-kube-api-access-br7gv\") on node \"localhost\" DevicePath \"\"" Mar 17 18:43:18.117628 systemd[1]: Removed slice kubepods-burstable-pod25cdec4b_3ca1_407a_a8ea_a8a5a5df2de0.slice. 
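With every volume detached, the kubelet removes the pod's cgroup slice. The slice name in the last line is derived mechanically from the QoS class and the pod UID with dashes mapped to underscores; a small sketch reproducing the observed naming pattern (an illustration of the pattern, not kubelet's actual code):

package main

import (
	"fmt"
	"strings"
)

// podSlice mirrors the kubepods-<qos>-pod<uid>.slice names in the log.
func podSlice(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice",
		qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	fmt.Println(podSlice("burstable", "25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0"))
	// kubepods-burstable-pod25cdec4b_3ca1_407a_a8ea_a8a5a5df2de0.slice
}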
Mar 17 18:43:18.289440 kubelet[1899]: I0317 18:43:18.289399 1899 scope.go:117] "RemoveContainer" containerID="5b1ad985674e16a52e284e7953bb71a0ff644d23f6cd8be566b1266f1d195505" Mar 17 18:43:18.290615 env[1208]: time="2025-03-17T18:43:18.290573775Z" level=info msg="RemoveContainer for \"5b1ad985674e16a52e284e7953bb71a0ff644d23f6cd8be566b1266f1d195505\"" Mar 17 18:43:18.294708 env[1208]: time="2025-03-17T18:43:18.294646490Z" level=info msg="RemoveContainer for \"5b1ad985674e16a52e284e7953bb71a0ff644d23f6cd8be566b1266f1d195505\" returns successfully" Mar 17 18:43:18.319570 kubelet[1899]: I0317 18:43:18.319513 1899 memory_manager.go:355] "RemoveStaleState removing state" podUID="25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0" containerName="mount-cgroup" Mar 17 18:43:18.328709 systemd[1]: Created slice kubepods-burstable-pod13c19b42_767a_4909_8fa8_9f60c1ffae38.slice. Mar 17 18:43:18.393647 kubelet[1899]: I0317 18:43:18.393469 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/13c19b42-767a-4909-8fa8-9f60c1ffae38-clustermesh-secrets\") pod \"cilium-dnk8s\" (UID: \"13c19b42-767a-4909-8fa8-9f60c1ffae38\") " pod="kube-system/cilium-dnk8s" Mar 17 18:43:18.393647 kubelet[1899]: I0317 18:43:18.393533 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/13c19b42-767a-4909-8fa8-9f60c1ffae38-bpf-maps\") pod \"cilium-dnk8s\" (UID: \"13c19b42-767a-4909-8fa8-9f60c1ffae38\") " pod="kube-system/cilium-dnk8s" Mar 17 18:43:18.393647 kubelet[1899]: I0317 18:43:18.393556 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/13c19b42-767a-4909-8fa8-9f60c1ffae38-lib-modules\") pod \"cilium-dnk8s\" (UID: \"13c19b42-767a-4909-8fa8-9f60c1ffae38\") " pod="kube-system/cilium-dnk8s" Mar 17 18:43:18.393647 kubelet[1899]: I0317 18:43:18.393579 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/13c19b42-767a-4909-8fa8-9f60c1ffae38-hostproc\") pod \"cilium-dnk8s\" (UID: \"13c19b42-767a-4909-8fa8-9f60c1ffae38\") " pod="kube-system/cilium-dnk8s" Mar 17 18:43:18.393647 kubelet[1899]: I0317 18:43:18.393598 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/13c19b42-767a-4909-8fa8-9f60c1ffae38-cilium-config-path\") pod \"cilium-dnk8s\" (UID: \"13c19b42-767a-4909-8fa8-9f60c1ffae38\") " pod="kube-system/cilium-dnk8s" Mar 17 18:43:18.393995 kubelet[1899]: I0317 18:43:18.393681 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/13c19b42-767a-4909-8fa8-9f60c1ffae38-etc-cni-netd\") pod \"cilium-dnk8s\" (UID: \"13c19b42-767a-4909-8fa8-9f60c1ffae38\") " pod="kube-system/cilium-dnk8s" Mar 17 18:43:18.393995 kubelet[1899]: I0317 18:43:18.393745 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/13c19b42-767a-4909-8fa8-9f60c1ffae38-cilium-ipsec-secrets\") pod \"cilium-dnk8s\" (UID: \"13c19b42-767a-4909-8fa8-9f60c1ffae38\") " pod="kube-system/cilium-dnk8s" Mar 17 18:43:18.393995 kubelet[1899]: I0317 18:43:18.393779 1899 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/13c19b42-767a-4909-8fa8-9f60c1ffae38-host-proc-sys-kernel\") pod \"cilium-dnk8s\" (UID: \"13c19b42-767a-4909-8fa8-9f60c1ffae38\") " pod="kube-system/cilium-dnk8s" Mar 17 18:43:18.393995 kubelet[1899]: I0317 18:43:18.393807 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktmbq\" (UniqueName: \"kubernetes.io/projected/13c19b42-767a-4909-8fa8-9f60c1ffae38-kube-api-access-ktmbq\") pod \"cilium-dnk8s\" (UID: \"13c19b42-767a-4909-8fa8-9f60c1ffae38\") " pod="kube-system/cilium-dnk8s" Mar 17 18:43:18.393995 kubelet[1899]: I0317 18:43:18.393839 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/13c19b42-767a-4909-8fa8-9f60c1ffae38-cilium-cgroup\") pod \"cilium-dnk8s\" (UID: \"13c19b42-767a-4909-8fa8-9f60c1ffae38\") " pod="kube-system/cilium-dnk8s" Mar 17 18:43:18.393995 kubelet[1899]: I0317 18:43:18.393857 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/13c19b42-767a-4909-8fa8-9f60c1ffae38-xtables-lock\") pod \"cilium-dnk8s\" (UID: \"13c19b42-767a-4909-8fa8-9f60c1ffae38\") " pod="kube-system/cilium-dnk8s" Mar 17 18:43:18.393995 kubelet[1899]: I0317 18:43:18.393876 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/13c19b42-767a-4909-8fa8-9f60c1ffae38-hubble-tls\") pod \"cilium-dnk8s\" (UID: \"13c19b42-767a-4909-8fa8-9f60c1ffae38\") " pod="kube-system/cilium-dnk8s" Mar 17 18:43:18.393995 kubelet[1899]: I0317 18:43:18.393895 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/13c19b42-767a-4909-8fa8-9f60c1ffae38-cilium-run\") pod \"cilium-dnk8s\" (UID: \"13c19b42-767a-4909-8fa8-9f60c1ffae38\") " pod="kube-system/cilium-dnk8s" Mar 17 18:43:18.393995 kubelet[1899]: I0317 18:43:18.393920 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/13c19b42-767a-4909-8fa8-9f60c1ffae38-host-proc-sys-net\") pod \"cilium-dnk8s\" (UID: \"13c19b42-767a-4909-8fa8-9f60c1ffae38\") " pod="kube-system/cilium-dnk8s" Mar 17 18:43:18.393995 kubelet[1899]: I0317 18:43:18.393946 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/13c19b42-767a-4909-8fa8-9f60c1ffae38-cni-path\") pod \"cilium-dnk8s\" (UID: \"13c19b42-767a-4909-8fa8-9f60c1ffae38\") " pod="kube-system/cilium-dnk8s" Mar 17 18:43:18.631118 kubelet[1899]: E0317 18:43:18.631052 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:43:18.631670 env[1208]: time="2025-03-17T18:43:18.631620387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dnk8s,Uid:13c19b42-767a-4909-8fa8-9f60c1ffae38,Namespace:kube-system,Attempt:0,}" Mar 17 18:43:18.712307 env[1208]: time="2025-03-17T18:43:18.712109777Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:43:18.712307 env[1208]: time="2025-03-17T18:43:18.712178599Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:43:18.712307 env[1208]: time="2025-03-17T18:43:18.712200501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:43:18.712689 env[1208]: time="2025-03-17T18:43:18.712601020Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/67e10e83e85dd5acccbb637853f719eac674bf81fc0a6e34bceda5598e514d58 pid=3851 runtime=io.containerd.runc.v2 Mar 17 18:43:18.725651 systemd[1]: Started cri-containerd-67e10e83e85dd5acccbb637853f719eac674bf81fc0a6e34bceda5598e514d58.scope. Mar 17 18:43:18.752919 env[1208]: time="2025-03-17T18:43:18.752866179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dnk8s,Uid:13c19b42-767a-4909-8fa8-9f60c1ffae38,Namespace:kube-system,Attempt:0,} returns sandbox id \"67e10e83e85dd5acccbb637853f719eac674bf81fc0a6e34bceda5598e514d58\"" Mar 17 18:43:18.753636 kubelet[1899]: E0317 18:43:18.753611 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:43:18.755961 env[1208]: time="2025-03-17T18:43:18.755935930Z" level=info msg="CreateContainer within sandbox \"67e10e83e85dd5acccbb637853f719eac674bf81fc0a6e34bceda5598e514d58\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 18:43:18.768819 env[1208]: time="2025-03-17T18:43:18.768770175Z" level=info msg="CreateContainer within sandbox \"67e10e83e85dd5acccbb637853f719eac674bf81fc0a6e34bceda5598e514d58\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"448c6e05479e223c34a1ece1947ed0be1a10b498c404896ae96a978e1e06d4c4\"" Mar 17 18:43:18.769507 env[1208]: time="2025-03-17T18:43:18.769471571Z" level=info msg="StartContainer for \"448c6e05479e223c34a1ece1947ed0be1a10b498c404896ae96a978e1e06d4c4\"" Mar 17 18:43:18.784713 systemd[1]: Started cri-containerd-448c6e05479e223c34a1ece1947ed0be1a10b498c404896ae96a978e1e06d4c4.scope. Mar 17 18:43:18.814646 env[1208]: time="2025-03-17T18:43:18.814570285Z" level=info msg="StartContainer for \"448c6e05479e223c34a1ece1947ed0be1a10b498c404896ae96a978e1e06d4c4\" returns successfully" Mar 17 18:43:18.820850 systemd[1]: cri-containerd-448c6e05479e223c34a1ece1947ed0be1a10b498c404896ae96a978e1e06d4c4.scope: Deactivated successfully. 
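This time mount-cgroup starts and exits cleanly; the scope being deactivated immediately after StartContainer returns is just the short-lived init container finishing. Per the spec dump earlier, its job is: copy cilium-mount onto the host bin path, run it inside PID 1's cgroup and mount namespaces via nsenter, then remove the copy. A Go sketch of the same sequence, using the BIN_PATH and CGROUP_ROOT values taken from the Env dump above (the /hostproc and /hostbin paths are the container's mount points from the spec):

// Sketch of the mount-cgroup init container's copy/nsenter/cleanup flow.
package main

import (
	"io"
	"os"
	"os/exec"
)

func main() {
	// 1. Copy the helper onto the host bin path (mounted at /hostbin).
	src, err := os.Open("/usr/bin/cilium-mount")
	if err != nil {
		panic(err)
	}
	dst, err := os.OpenFile("/hostbin/cilium-mount",
		os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o755)
	if err != nil {
		panic(err)
	}
	if _, err := io.Copy(dst, src); err != nil {
		panic(err)
	}
	src.Close()
	dst.Close()

	// 2. Run it inside PID 1's cgroup and mount namespaces.
	cmd := exec.Command("nsenter",
		"--cgroup=/hostproc/1/ns/cgroup", "--mount=/hostproc/1/ns/mnt",
		"/opt/cni/bin/cilium-mount", "/run/cilium/cgroupv2")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}

	// 3. Remove the copied helper, as the spec's `rm` does.
	os.Remove("/hostbin/cilium-mount")
}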
Mar 17 18:43:18.912745 env[1208]: time="2025-03-17T18:43:18.912685105Z" level=info msg="shim disconnected" id=448c6e05479e223c34a1ece1947ed0be1a10b498c404896ae96a978e1e06d4c4 Mar 17 18:43:18.912745 env[1208]: time="2025-03-17T18:43:18.912728960Z" level=warning msg="cleaning up after shim disconnected" id=448c6e05479e223c34a1ece1947ed0be1a10b498c404896ae96a978e1e06d4c4 namespace=k8s.io Mar 17 18:43:18.912745 env[1208]: time="2025-03-17T18:43:18.912737426Z" level=info msg="cleaning up dead shim" Mar 17 18:43:18.919542 env[1208]: time="2025-03-17T18:43:18.919480825Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:43:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3937 runtime=io.containerd.runc.v2\n" Mar 17 18:43:19.291760 kubelet[1899]: E0317 18:43:19.291723 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:43:19.293193 env[1208]: time="2025-03-17T18:43:19.293125555Z" level=info msg="CreateContainer within sandbox \"67e10e83e85dd5acccbb637853f719eac674bf81fc0a6e34bceda5598e514d58\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 17 18:43:19.305977 env[1208]: time="2025-03-17T18:43:19.305787536Z" level=info msg="CreateContainer within sandbox \"67e10e83e85dd5acccbb637853f719eac674bf81fc0a6e34bceda5598e514d58\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5bf6ed69b5e27e30a9b0d66cd18957291d217e01ba6786231dc8213c97c8abb8\"" Mar 17 18:43:19.306717 env[1208]: time="2025-03-17T18:43:19.306685719Z" level=info msg="StartContainer for \"5bf6ed69b5e27e30a9b0d66cd18957291d217e01ba6786231dc8213c97c8abb8\"" Mar 17 18:43:19.320446 systemd[1]: Started cri-containerd-5bf6ed69b5e27e30a9b0d66cd18957291d217e01ba6786231dc8213c97c8abb8.scope. Mar 17 18:43:19.342061 env[1208]: time="2025-03-17T18:43:19.342021240Z" level=info msg="StartContainer for \"5bf6ed69b5e27e30a9b0d66cd18957291d217e01ba6786231dc8213c97c8abb8\" returns successfully" Mar 17 18:43:19.346177 systemd[1]: cri-containerd-5bf6ed69b5e27e30a9b0d66cd18957291d217e01ba6786231dc8213c97c8abb8.scope: Deactivated successfully. 
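The next init container, apply-sysctl-overwrites, follows the same short-lived run-and-exit pattern. Its purpose is to rewrite kernel sysctls the datapath needs, which amounts to writing files under /proc/sys. A hedged stdlib sketch; the specific keys below are typical CNI choices assumed for illustration, not taken from this log:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// setSysctl maps dotted keys to /proc/sys paths, e.g.
// net.ipv4.conf.all.rp_filter -> /proc/sys/net/ipv4/conf/all/rp_filter.
func setSysctl(key, value string) error {
	path := filepath.Join("/proc/sys", strings.ReplaceAll(key, ".", "/"))
	return os.WriteFile(path, []byte(value), 0)
}

func main() {
	overrides := map[string]string{ // assumed example keys
		"net.ipv4.conf.all.rp_filter": "0",
		"net.core.bpf_jit_enable":     "1",
	}
	for k, v := range overrides {
		if err := setSysctl(k, v); err != nil {
			fmt.Fprintln(os.Stderr, k, err)
		}
	}
}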
Mar 17 18:43:19.365163 env[1208]: time="2025-03-17T18:43:19.365098335Z" level=info msg="shim disconnected" id=5bf6ed69b5e27e30a9b0d66cd18957291d217e01ba6786231dc8213c97c8abb8 Mar 17 18:43:19.365163 env[1208]: time="2025-03-17T18:43:19.365153270Z" level=warning msg="cleaning up after shim disconnected" id=5bf6ed69b5e27e30a9b0d66cd18957291d217e01ba6786231dc8213c97c8abb8 namespace=k8s.io Mar 17 18:43:19.365379 env[1208]: time="2025-03-17T18:43:19.365173819Z" level=info msg="cleaning up dead shim" Mar 17 18:43:19.371202 env[1208]: time="2025-03-17T18:43:19.371172923Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:43:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3997 runtime=io.containerd.runc.v2\n" Mar 17 18:43:19.747480 kubelet[1899]: W0317 18:43:19.747415 1899 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod25cdec4b_3ca1_407a_a8ea_a8a5a5df2de0.slice/cri-containerd-5b1ad985674e16a52e284e7953bb71a0ff644d23f6cd8be566b1266f1d195505.scope WatchSource:0}: container "5b1ad985674e16a52e284e7953bb71a0ff644d23f6cd8be566b1266f1d195505" in namespace "k8s.io": not found Mar 17 18:43:20.112516 kubelet[1899]: E0317 18:43:20.112290 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:43:20.112516 kubelet[1899]: E0317 18:43:20.112413 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:43:20.114105 kubelet[1899]: I0317 18:43:20.114061 1899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0" path="/var/lib/kubelet/pods/25cdec4b-3ca1-407a-a8ea-a8a5a5df2de0/volumes" Mar 17 18:43:20.296028 kubelet[1899]: E0317 18:43:20.295990 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:43:20.297784 env[1208]: time="2025-03-17T18:43:20.297693608Z" level=info msg="CreateContainer within sandbox \"67e10e83e85dd5acccbb637853f719eac674bf81fc0a6e34bceda5598e514d58\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 17 18:43:20.313447 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3691041855.mount: Deactivated successfully. Mar 17 18:43:20.320986 env[1208]: time="2025-03-17T18:43:20.320922066Z" level=info msg="CreateContainer within sandbox \"67e10e83e85dd5acccbb637853f719eac674bf81fc0a6e34bceda5598e514d58\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"be7fdbe6d6b33cd366b49687cec7054a338819b512dd0e032ed39b1b4a28da19\"" Mar 17 18:43:20.321469 env[1208]: time="2025-03-17T18:43:20.321443846Z" level=info msg="StartContainer for \"be7fdbe6d6b33cd366b49687cec7054a338819b512dd0e032ed39b1b4a28da19\"" Mar 17 18:43:20.337500 systemd[1]: Started cri-containerd-be7fdbe6d6b33cd366b49687cec7054a338819b512dd0e032ed39b1b4a28da19.scope. Mar 17 18:43:20.366969 env[1208]: time="2025-03-17T18:43:20.366857535Z" level=info msg="StartContainer for \"be7fdbe6d6b33cd366b49687cec7054a338819b512dd0e032ed39b1b4a28da19\" returns successfully" Mar 17 18:43:20.368111 systemd[1]: cri-containerd-be7fdbe6d6b33cd366b49687cec7054a338819b512dd0e032ed39b1b4a28da19.scope: Deactivated successfully. 
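mount-bpf-fs, the third init step, ensures the BPF filesystem is mounted so pinned maps survive agent restarts; this is also what a bpf-maps hostPath volume points at. A sketch using golang.org/x/sys/unix, assuming the conventional /sys/fs/bpf mount point:

package main

import (
	"fmt"
	"os"

	"golang.org/x/sys/unix"
)

func main() {
	const target = "/sys/fs/bpf" // assumed conventional mount point
	// Idempotence check: if bpffs is already there, statfs reports its magic.
	var st unix.Statfs_t
	if err := unix.Statfs(target, &st); err == nil && st.Type == unix.BPF_FS_MAGIC {
		fmt.Println("bpffs already mounted")
		return
	}
	if err := unix.Mount("bpffs", target, "bpf", 0, ""); err != nil {
		fmt.Fprintln(os.Stderr, "mount bpffs:", err)
		os.Exit(1)
	}
	fmt.Println("bpffs mounted at", target)
}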
Mar 17 18:43:20.389380 env[1208]: time="2025-03-17T18:43:20.389334484Z" level=info msg="shim disconnected" id=be7fdbe6d6b33cd366b49687cec7054a338819b512dd0e032ed39b1b4a28da19 Mar 17 18:43:20.389594 env[1208]: time="2025-03-17T18:43:20.389380132Z" level=warning msg="cleaning up after shim disconnected" id=be7fdbe6d6b33cd366b49687cec7054a338819b512dd0e032ed39b1b4a28da19 namespace=k8s.io Mar 17 18:43:20.389594 env[1208]: time="2025-03-17T18:43:20.389388608Z" level=info msg="cleaning up dead shim" Mar 17 18:43:20.395565 env[1208]: time="2025-03-17T18:43:20.395521732Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:43:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4052 runtime=io.containerd.runc.v2\n" Mar 17 18:43:20.500259 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-be7fdbe6d6b33cd366b49687cec7054a338819b512dd0e032ed39b1b4a28da19-rootfs.mount: Deactivated successfully. Mar 17 18:43:21.299394 kubelet[1899]: E0317 18:43:21.299127 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:43:21.301136 env[1208]: time="2025-03-17T18:43:21.301091504Z" level=info msg="CreateContainer within sandbox \"67e10e83e85dd5acccbb637853f719eac674bf81fc0a6e34bceda5598e514d58\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 17 18:43:21.313267 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount219112838.mount: Deactivated successfully. Mar 17 18:43:21.315708 env[1208]: time="2025-03-17T18:43:21.315652496Z" level=info msg="CreateContainer within sandbox \"67e10e83e85dd5acccbb637853f719eac674bf81fc0a6e34bceda5598e514d58\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ba76951e0bdc14b3561fee69923fe76f2b6c22583f88703261f003432e390fd5\"" Mar 17 18:43:21.316336 env[1208]: time="2025-03-17T18:43:21.316295278Z" level=info msg="StartContainer for \"ba76951e0bdc14b3561fee69923fe76f2b6c22583f88703261f003432e390fd5\"" Mar 17 18:43:21.367708 systemd[1]: Started cri-containerd-ba76951e0bdc14b3561fee69923fe76f2b6c22583f88703261f003432e390fd5.scope. Mar 17 18:43:21.386903 systemd[1]: cri-containerd-ba76951e0bdc14b3561fee69923fe76f2b6c22583f88703261f003432e390fd5.scope: Deactivated successfully. Mar 17 18:43:21.461453 env[1208]: time="2025-03-17T18:43:21.461407776Z" level=info msg="StartContainer for \"ba76951e0bdc14b3561fee69923fe76f2b6c22583f88703261f003432e390fd5\" returns successfully" Mar 17 18:43:21.500540 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ba76951e0bdc14b3561fee69923fe76f2b6c22583f88703261f003432e390fd5-rootfs.mount: Deactivated successfully. 
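clean-cilium-state runs next; note the journal entries are not strictly ordered here — the scope is reported deactivated at 18:43:21.386903, before StartContainer returns at 18:43:21.461453, again just the init container exiting quickly. Its role is to wipe stale agent state. A purely illustrative sketch; the real container is configurable and far more selective, and the paths below are hypothetical:

package main

import (
	"fmt"
	"os"
)

func main() {
	// Hypothetical stale-state paths, for illustration only.
	stale := []string{
		"/var/run/cilium/state",
		"/sys/fs/bpf/tc/globals", // pinned datapath maps
	}
	for _, dir := range stale {
		if err := os.RemoveAll(dir); err != nil {
			fmt.Fprintln(os.Stderr, "clean", dir, ":", err)
		}
	}
}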
Mar 17 18:43:21.536123 env[1208]: time="2025-03-17T18:43:21.536069454Z" level=info msg="shim disconnected" id=ba76951e0bdc14b3561fee69923fe76f2b6c22583f88703261f003432e390fd5 Mar 17 18:43:21.536123 env[1208]: time="2025-03-17T18:43:21.536110333Z" level=warning msg="cleaning up after shim disconnected" id=ba76951e0bdc14b3561fee69923fe76f2b6c22583f88703261f003432e390fd5 namespace=k8s.io Mar 17 18:43:21.536123 env[1208]: time="2025-03-17T18:43:21.536118558Z" level=info msg="cleaning up dead shim" Mar 17 18:43:21.542088 env[1208]: time="2025-03-17T18:43:21.542047526Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:43:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4105 runtime=io.containerd.runc.v2\n" Mar 17 18:43:22.151251 kubelet[1899]: E0317 18:43:22.151198 1899 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:43:22.302871 kubelet[1899]: E0317 18:43:22.302834 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:43:22.304959 env[1208]: time="2025-03-17T18:43:22.304906287Z" level=info msg="CreateContainer within sandbox \"67e10e83e85dd5acccbb637853f719eac674bf81fc0a6e34bceda5598e514d58\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 17 18:43:22.321368 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4146088474.mount: Deactivated successfully. Mar 17 18:43:22.325747 env[1208]: time="2025-03-17T18:43:22.325706159Z" level=info msg="CreateContainer within sandbox \"67e10e83e85dd5acccbb637853f719eac674bf81fc0a6e34bceda5598e514d58\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0e70bd73e4071be0487716d77b586cafe67713e6accd9537a68c731b41183e4a\"" Mar 17 18:43:22.326167 env[1208]: time="2025-03-17T18:43:22.326132205Z" level=info msg="StartContainer for \"0e70bd73e4071be0487716d77b586cafe67713e6accd9537a68c731b41183e4a\"" Mar 17 18:43:22.339466 systemd[1]: Started cri-containerd-0e70bd73e4071be0487716d77b586cafe67713e6accd9537a68c731b41183e4a.scope. 
Mar 17 18:43:22.368175 env[1208]: time="2025-03-17T18:43:22.368119881Z" level=info msg="StartContainer for \"0e70bd73e4071be0487716d77b586cafe67713e6accd9537a68c731b41183e4a\" returns successfully" Mar 17 18:43:22.618168 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Mar 17 18:43:22.858237 kubelet[1899]: W0317 18:43:22.858187 1899 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13c19b42_767a_4909_8fa8_9f60c1ffae38.slice/cri-containerd-448c6e05479e223c34a1ece1947ed0be1a10b498c404896ae96a978e1e06d4c4.scope WatchSource:0}: task 448c6e05479e223c34a1ece1947ed0be1a10b498c404896ae96a978e1e06d4c4 not found: not found Mar 17 18:43:23.306972 kubelet[1899]: E0317 18:43:23.306933 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:43:23.467710 kubelet[1899]: I0317 18:43:23.467645 1899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-dnk8s" podStartSLOduration=5.467629685 podStartE2EDuration="5.467629685s" podCreationTimestamp="2025-03-17 18:43:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:43:23.467114949 +0000 UTC m=+91.444006312" watchObservedRunningTime="2025-03-17 18:43:23.467629685 +0000 UTC m=+91.444521027" Mar 17 18:43:24.115770 kubelet[1899]: I0317 18:43:24.115706 1899 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-17T18:43:24Z","lastTransitionTime":"2025-03-17T18:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Mar 17 18:43:24.632058 kubelet[1899]: E0317 18:43:24.632029 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:43:24.811660 systemd[1]: run-containerd-runc-k8s.io-0e70bd73e4071be0487716d77b586cafe67713e6accd9537a68c731b41183e4a-runc.KzDFpV.mount: Deactivated successfully. 
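The recurring "Nameserver limits exceeded" warning is the kubelet's resolv.conf sanity check: glibc resolvers only honor the first three nameservers, so when the host file lists more, the kubelet trims the list (here to 1.1.1.1, 1.0.0.1 and 8.8.8.8) and logs the warning each time it builds a pod's DNS config. A stdlib sketch of the same check:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc MAXNS, the limit the kubelet enforces

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("nameserver limit exceeded, applying only: %s\n",
			strings.Join(servers[:maxNameservers], " "))
	}
}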
Mar 17 18:43:25.172753 systemd-networkd[1031]: lxc_health: Link UP Mar 17 18:43:25.188730 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Mar 17 18:43:25.189333 systemd-networkd[1031]: lxc_health: Gained carrier Mar 17 18:43:25.963998 kubelet[1899]: W0317 18:43:25.963956 1899 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13c19b42_767a_4909_8fa8_9f60c1ffae38.slice/cri-containerd-5bf6ed69b5e27e30a9b0d66cd18957291d217e01ba6786231dc8213c97c8abb8.scope WatchSource:0}: task 5bf6ed69b5e27e30a9b0d66cd18957291d217e01ba6786231dc8213c97c8abb8 not found: not found Mar 17 18:43:26.452292 systemd-networkd[1031]: lxc_health: Gained IPv6LL Mar 17 18:43:26.633017 kubelet[1899]: E0317 18:43:26.632977 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:43:27.112075 kubelet[1899]: E0317 18:43:27.112024 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:43:27.313067 kubelet[1899]: E0317 18:43:27.313017 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:43:28.314747 kubelet[1899]: E0317 18:43:28.314717 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:43:29.072006 kubelet[1899]: W0317 18:43:29.071966 1899 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13c19b42_767a_4909_8fa8_9f60c1ffae38.slice/cri-containerd-be7fdbe6d6b33cd366b49687cec7054a338819b512dd0e032ed39b1b4a28da19.scope WatchSource:0}: task be7fdbe6d6b33cd366b49687cec7054a338819b512dd0e032ed39b1b4a28da19 not found: not found Mar 17 18:43:31.117324 sshd[3718]: pam_unix(sshd:session): session closed for user core Mar 17 18:43:31.119687 systemd[1]: sshd@25-10.0.0.88:22-10.0.0.1:36194.service: Deactivated successfully. Mar 17 18:43:31.120527 systemd[1]: session-26.scope: Deactivated successfully. Mar 17 18:43:31.121222 systemd-logind[1193]: Session 26 logged out. Waiting for processes to exit. Mar 17 18:43:31.122020 systemd-logind[1193]: Removed session 26. Mar 17 18:43:32.112384 kubelet[1899]: E0317 18:43:32.112341 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:43:32.180843 kubelet[1899]: W0317 18:43:32.180800 1899 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13c19b42_767a_4909_8fa8_9f60c1ffae38.slice/cri-containerd-ba76951e0bdc14b3561fee69923fe76f2b6c22583f88703261f003432e390fd5.scope WatchSource:0}: task ba76951e0bdc14b3561fee69923fe76f2b6c22583f88703261f003432e390fd5 not found: not found
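With the agent running, the lxc_health veth that Cilium uses for its health endpoint appears: the kernel ADDRCONF line and systemd-networkd's carrier/IPv6LL messages mark the link becoming ready. A crude stdlib stand-in for the netlink events systemd-networkd reacts to — polling until the named interface exists and is up:

package main

import (
	"fmt"
	"net"
	"time"
)

// waitLinkUp polls for the interface to exist with FlagUp set.
func waitLinkUp(name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if ifi, err := net.InterfaceByName(name); err == nil && ifi.Flags&net.FlagUp != 0 {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s: not up after %s", name, timeout)
}

func main() {
	if err := waitLinkUp("lxc_health", 30*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("lxc_health is up")
}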