Dec 13 01:57:02.887645 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Dec 12 23:50:37 -00 2024
Dec 13 01:57:02.887662 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 01:57:02.887672 kernel: BIOS-provided physical RAM map:
Dec 13 01:57:02.887678 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Dec 13 01:57:02.887683 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Dec 13 01:57:02.887688 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Dec 13 01:57:02.887695 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Dec 13 01:57:02.887701 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Dec 13 01:57:02.887706 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Dec 13 01:57:02.887713 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Dec 13 01:57:02.887718 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Dec 13 01:57:02.887723 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved
Dec 13 01:57:02.887729 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Dec 13 01:57:02.887735 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Dec 13 01:57:02.887742 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Dec 13 01:57:02.887749 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Dec 13 01:57:02.887754 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Dec 13 01:57:02.887760 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 13 01:57:02.887766 kernel: NX (Execute Disable) protection: active
Dec 13 01:57:02.887772 kernel: e820: update [mem 0x9b475018-0x9b47ec57] usable ==> usable
Dec 13 01:57:02.887778 kernel: e820: update [mem 0x9b475018-0x9b47ec57] usable ==> usable
Dec 13 01:57:02.887783 kernel: e820: update [mem 0x9b438018-0x9b474e57] usable ==> usable
Dec 13 01:57:02.887789 kernel: e820: update [mem 0x9b438018-0x9b474e57] usable ==> usable
Dec 13 01:57:02.887795 kernel: extended physical RAM map:
Dec 13 01:57:02.887800 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Dec 13 01:57:02.887808 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Dec 13 01:57:02.887814 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Dec 13 01:57:02.887820 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Dec 13 01:57:02.887826 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Dec 13 01:57:02.887832 kernel: reserve setup_data: [mem 0x000000000080c000-0x000000000080ffff] usable
Dec 13 01:57:02.887837 kernel: reserve setup_data: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Dec 13 01:57:02.887843 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b438017] usable
Dec 13 01:57:02.887849 kernel: reserve setup_data: [mem 0x000000009b438018-0x000000009b474e57] usable
Dec 13 01:57:02.887855 kernel: reserve setup_data: [mem 0x000000009b474e58-0x000000009b475017] usable
Dec 13 01:57:02.887860 kernel: reserve setup_data: [mem 0x000000009b475018-0x000000009b47ec57] usable
Dec 13 01:57:02.887866 kernel: reserve setup_data: [mem 0x000000009b47ec58-0x000000009c8eefff] usable
Dec 13 01:57:02.887873 kernel: reserve setup_data: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved
Dec 13 01:57:02.887879 kernel: reserve setup_data: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Dec 13 01:57:02.887885 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Dec 13 01:57:02.887891 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Dec 13 01:57:02.887899 kernel: reserve setup_data: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Dec 13 01:57:02.887905 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Dec 13 01:57:02.887912 kernel: reserve setup_data: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 13 01:57:02.887919 kernel: efi: EFI v2.70 by EDK II
Dec 13 01:57:02.887925 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b673018 RNG=0x9cb73018
Dec 13 01:57:02.887932 kernel: random: crng init done
Dec 13 01:57:02.887938 kernel: SMBIOS 2.8 present.
Dec 13 01:57:02.887944 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Dec 13 01:57:02.887951 kernel: Hypervisor detected: KVM
Dec 13 01:57:02.887957 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 01:57:02.887963 kernel: kvm-clock: cpu 0, msr 6519b001, primary cpu clock
Dec 13 01:57:02.887970 kernel: kvm-clock: using sched offset of 4277506709 cycles
Dec 13 01:57:02.887978 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 01:57:02.887985 kernel: tsc: Detected 2794.748 MHz processor
Dec 13 01:57:02.887992 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 01:57:02.887998 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 01:57:02.888005 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Dec 13 01:57:02.888011 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 01:57:02.888018 kernel: Using GB pages for direct mapping
Dec 13 01:57:02.888024 kernel: Secure boot disabled
Dec 13 01:57:02.888030 kernel: ACPI: Early table checksum verification disabled
Dec 13 01:57:02.888038 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Dec 13 01:57:02.888044 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Dec 13 01:57:02.888051 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:57:02.888058 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:57:02.888064 kernel: ACPI: FACS 0x000000009CBDD000 000040
Dec 13 01:57:02.888071 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:57:02.888077 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:57:02.888083 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:57:02.888090 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:57:02.888097 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Dec 13 01:57:02.888104 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Dec 13 01:57:02.888110 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
Dec 13 01:57:02.888125 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Dec 13 01:57:02.888132 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Dec 13 01:57:02.888138 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Dec 13 01:57:02.888144 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Dec 13 01:57:02.888151 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Dec 13 01:57:02.888172 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Dec 13 01:57:02.888180 kernel: No NUMA configuration found
Dec 13 01:57:02.888187 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Dec 13 01:57:02.888193 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Dec 13 01:57:02.888200 kernel: Zone ranges:
Dec 13 01:57:02.888206 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 01:57:02.888212 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Dec 13 01:57:02.888219 kernel: Normal empty
Dec 13 01:57:02.888225 kernel: Movable zone start for each node
Dec 13 01:57:02.888231 kernel: Early memory node ranges
Dec 13 01:57:02.888239 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Dec 13 01:57:02.888245 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Dec 13 01:57:02.888252 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Dec 13 01:57:02.888258 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Dec 13 01:57:02.888264 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Dec 13 01:57:02.888271 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Dec 13 01:57:02.888277 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Dec 13 01:57:02.888283 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 01:57:02.888290 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Dec 13 01:57:02.888296 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Dec 13 01:57:02.888304 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 01:57:02.888310 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Dec 13 01:57:02.888317 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Dec 13 01:57:02.888323 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Dec 13 01:57:02.888330 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 01:57:02.888336 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 01:57:02.888343 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 01:57:02.888349 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 01:57:02.888355 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 01:57:02.888363 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 01:57:02.888369 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 01:57:02.888376 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 01:57:02.888382 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 01:57:02.888389 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 01:57:02.888395 kernel: TSC deadline timer available
Dec 13 01:57:02.888402 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Dec 13 01:57:02.888408 kernel: kvm-guest: KVM setup pv remote TLB flush
Dec 13 01:57:02.888414 kernel: kvm-guest: setup PV sched yield
Dec 13 01:57:02.888422 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Dec 13 01:57:02.888429 kernel: Booting paravirtualized kernel on KVM
Dec 13 01:57:02.888439 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 01:57:02.888447 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
Dec 13 01:57:02.888454 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288
Dec 13 01:57:02.888461 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152
Dec 13 01:57:02.888467 kernel: pcpu-alloc: [0] 0 1 2 3
Dec 13 01:57:02.888474 kernel: kvm-guest: setup async PF for cpu 0
Dec 13 01:57:02.888481 kernel: kvm-guest: stealtime: cpu 0, msr 9b21c0c0
Dec 13 01:57:02.888488 kernel: kvm-guest: PV spinlocks enabled
Dec 13 01:57:02.888494 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 01:57:02.888501 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Dec 13 01:57:02.888509 kernel: Policy zone: DMA32
Dec 13 01:57:02.888517 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 01:57:02.888525 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 01:57:02.888532 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 01:57:02.888540 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 01:57:02.888547 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 01:57:02.888554 kernel: Memory: 2397432K/2567000K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47476K init, 4108K bss, 169308K reserved, 0K cma-reserved)
Dec 13 01:57:02.888561 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 13 01:57:02.888568 kernel: ftrace: allocating 34549 entries in 135 pages
Dec 13 01:57:02.888574 kernel: ftrace: allocated 135 pages with 4 groups
Dec 13 01:57:02.888581 kernel: rcu: Hierarchical RCU implementation.
Dec 13 01:57:02.888588 kernel: rcu: RCU event tracing is enabled.
Dec 13 01:57:02.888595 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 13 01:57:02.888604 kernel: Rude variant of Tasks RCU enabled.
Dec 13 01:57:02.888610 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 01:57:02.888617 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 01:57:02.888624 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 13 01:57:02.888631 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Dec 13 01:57:02.888638 kernel: Console: colour dummy device 80x25
Dec 13 01:57:02.888645 kernel: printk: console [ttyS0] enabled
Dec 13 01:57:02.888652 kernel: ACPI: Core revision 20210730
Dec 13 01:57:02.888659 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 13 01:57:02.888667 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 01:57:02.888673 kernel: x2apic enabled
Dec 13 01:57:02.888680 kernel: Switched APIC routing to physical x2apic.
Dec 13 01:57:02.888687 kernel: kvm-guest: setup PV IPIs
Dec 13 01:57:02.888694 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 13 01:57:02.888701 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 13 01:57:02.888707 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Dec 13 01:57:02.888714 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 13 01:57:02.888721 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 13 01:57:02.888729 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 13 01:57:02.888736 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 01:57:02.888743 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 01:57:02.888750 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 01:57:02.888756 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 01:57:02.888763 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 13 01:57:02.888770 kernel: RETBleed: Mitigation: untrained return thunk
Dec 13 01:57:02.888777 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 01:57:02.888784 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Dec 13 01:57:02.888792 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 01:57:02.888799 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 01:57:02.888805 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 01:57:02.888812 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 01:57:02.888819 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Dec 13 01:57:02.888826 kernel: Freeing SMP alternatives memory: 32K
Dec 13 01:57:02.888833 kernel: pid_max: default: 32768 minimum: 301
Dec 13 01:57:02.888840 kernel: LSM: Security Framework initializing
Dec 13 01:57:02.888847 kernel: SELinux: Initializing.
Dec 13 01:57:02.888856 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 01:57:02.888863 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 01:57:02.888870 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 13 01:57:02.888876 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 13 01:57:02.888883 kernel: ... version: 0
Dec 13 01:57:02.888890 kernel: ... bit width: 48
Dec 13 01:57:02.888896 kernel: ... generic registers: 6
Dec 13 01:57:02.888903 kernel: ... value mask: 0000ffffffffffff
Dec 13 01:57:02.888910 kernel: ... max period: 00007fffffffffff
Dec 13 01:57:02.888918 kernel: ... fixed-purpose events: 0
Dec 13 01:57:02.888925 kernel: ... event mask: 000000000000003f
Dec 13 01:57:02.888931 kernel: signal: max sigframe size: 1776
Dec 13 01:57:02.888938 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 01:57:02.888945 kernel: smp: Bringing up secondary CPUs ...
Dec 13 01:57:02.888952 kernel: x86: Booting SMP configuration:
Dec 13 01:57:02.888958 kernel: .... node #0, CPUs: #1
Dec 13 01:57:02.888965 kernel: kvm-clock: cpu 1, msr 6519b041, secondary cpu clock
Dec 13 01:57:02.888972 kernel: kvm-guest: setup async PF for cpu 1
Dec 13 01:57:02.888980 kernel: kvm-guest: stealtime: cpu 1, msr 9b29c0c0
Dec 13 01:57:02.888987 kernel: #2
Dec 13 01:57:02.888994 kernel: kvm-clock: cpu 2, msr 6519b081, secondary cpu clock
Dec 13 01:57:02.889000 kernel: kvm-guest: setup async PF for cpu 2
Dec 13 01:57:02.889007 kernel: kvm-guest: stealtime: cpu 2, msr 9b31c0c0
Dec 13 01:57:02.889014 kernel: #3
Dec 13 01:57:02.889021 kernel: kvm-clock: cpu 3, msr 6519b0c1, secondary cpu clock
Dec 13 01:57:02.889027 kernel: kvm-guest: setup async PF for cpu 3
Dec 13 01:57:02.889034 kernel: kvm-guest: stealtime: cpu 3, msr 9b39c0c0
Dec 13 01:57:02.889041 kernel: smp: Brought up 1 node, 4 CPUs
Dec 13 01:57:02.889048 kernel: smpboot: Max logical packages: 1
Dec 13 01:57:02.889055 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Dec 13 01:57:02.889062 kernel: devtmpfs: initialized
Dec 13 01:57:02.889069 kernel: x86/mm: Memory block size: 128MB
Dec 13 01:57:02.889076 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Dec 13 01:57:02.889083 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Dec 13 01:57:02.889090 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Dec 13 01:57:02.889096 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Dec 13 01:57:02.889103 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Dec 13 01:57:02.889119 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 01:57:02.889126 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 13 01:57:02.889133 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 01:57:02.889139 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 01:57:02.889146 kernel: audit: initializing netlink subsys (disabled)
Dec 13 01:57:02.889153 kernel: audit: type=2000 audit(1734055022.567:1): state=initialized audit_enabled=0 res=1
Dec 13 01:57:02.889169 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 01:57:02.889177 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 01:57:02.889184 kernel: cpuidle: using governor menu
Dec 13 01:57:02.889192 kernel: ACPI: bus type PCI registered
Dec 13 01:57:02.889199 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 01:57:02.889205 kernel: dca service started, version 1.12.1
Dec 13 01:57:02.889212 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Dec 13 01:57:02.889219 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820
Dec 13 01:57:02.889226 kernel: PCI: Using configuration type 1 for base access
Dec 13 01:57:02.889233 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 01:57:02.889240 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 01:57:02.889247 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 01:57:02.889254 kernel: ACPI: Added _OSI(Module Device)
Dec 13 01:57:02.889261 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 01:57:02.889268 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 01:57:02.889274 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 01:57:02.889281 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 01:57:02.889288 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 01:57:02.889295 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 01:57:02.889301 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 01:57:02.889308 kernel: ACPI: Interpreter enabled
Dec 13 01:57:02.889316 kernel: ACPI: PM: (supports S0 S3 S5)
Dec 13 01:57:02.889323 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 01:57:02.889330 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 01:57:02.889337 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 13 01:57:02.889343 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 01:57:02.889451 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 01:57:02.889524 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Dec 13 01:57:02.889594 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Dec 13 01:57:02.889604 kernel: PCI host bridge to bus 0000:00
Dec 13 01:57:02.889678 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 01:57:02.889760 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 01:57:02.889821 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 01:57:02.889880 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Dec 13 01:57:02.889939 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 13 01:57:02.890006 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Dec 13 01:57:02.890066 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 01:57:02.890155 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Dec 13 01:57:02.890246 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Dec 13 01:57:02.890316 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Dec 13 01:57:02.890385 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Dec 13 01:57:02.890453 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Dec 13 01:57:02.890523 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Dec 13 01:57:02.890590 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 01:57:02.890671 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Dec 13 01:57:02.890745 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Dec 13 01:57:02.890815 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Dec 13 01:57:02.890883 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Dec 13 01:57:02.890963 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Dec 13 01:57:02.891037 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Dec 13 01:57:02.891262 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Dec 13 01:57:02.891338 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Dec 13 01:57:02.891726 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Dec 13 01:57:02.891802 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Dec 13 01:57:02.891870 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Dec 13 01:57:02.891941 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Dec 13 01:57:02.892013 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Dec 13 01:57:02.892087 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Dec 13 01:57:02.892212 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 13 01:57:02.892288 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Dec 13 01:57:02.892357 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Dec 13 01:57:02.892424 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Dec 13 01:57:02.892498 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Dec 13 01:57:02.892563 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Dec 13 01:57:02.892573 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 01:57:02.892580 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 01:57:02.892587 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 01:57:02.892594 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 01:57:02.892601 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 13 01:57:02.892607 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 13 01:57:02.892616 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 13 01:57:02.892623 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 13 01:57:02.892630 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 13 01:57:02.892637 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 13 01:57:02.892644 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 13 01:57:02.892651 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 13 01:57:02.892657 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 13 01:57:02.892664 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 13 01:57:02.892671 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 13 01:57:02.892679 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 13 01:57:02.892686 kernel: iommu: Default domain type: Translated
Dec 13 01:57:02.892693 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 01:57:02.892760 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 13 01:57:02.892827 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 01:57:02.892895 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 13 01:57:02.892904 kernel: vgaarb: loaded
Dec 13 01:57:02.892912 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 01:57:02.892919 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 13 01:57:02.892928 kernel: PTP clock support registered
Dec 13 01:57:02.892935 kernel: Registered efivars operations
Dec 13 01:57:02.892942 kernel: PCI: Using ACPI for IRQ routing
Dec 13 01:57:02.892949 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 01:57:02.892956 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Dec 13 01:57:02.892962 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Dec 13 01:57:02.892969 kernel: e820: reserve RAM buffer [mem 0x9b438018-0x9bffffff]
Dec 13 01:57:02.892976 kernel: e820: reserve RAM buffer [mem 0x9b475018-0x9bffffff]
Dec 13 01:57:02.892983 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Dec 13 01:57:02.892991 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Dec 13 01:57:02.892998 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 13 01:57:02.893005 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 13 01:57:02.893012 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 01:57:02.893019 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 01:57:02.893026 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 01:57:02.893033 kernel: pnp: PnP ACPI init
Dec 13 01:57:02.893119 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 13 01:57:02.893135 kernel: pnp: PnP ACPI: found 6 devices
Dec 13 01:57:02.893142 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 01:57:02.893149 kernel: NET: Registered PF_INET protocol family
Dec 13 01:57:02.893156 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 01:57:02.893173 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 01:57:02.893180 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 01:57:02.893187 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 01:57:02.893194 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Dec 13 01:57:02.893201 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 01:57:02.893211 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 01:57:02.893218 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 01:57:02.893225 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 01:57:02.893232 kernel: NET: Registered PF_XDP protocol family
Dec 13 01:57:02.893308 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Dec 13 01:57:02.893387 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Dec 13 01:57:02.893451 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 01:57:02.893513 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 01:57:02.893577 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 01:57:02.893637 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Dec 13 01:57:02.893696 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 13 01:57:02.893755 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Dec 13 01:57:02.893764 kernel: PCI: CLS 0 bytes, default 64
Dec 13 01:57:02.893771 kernel: Initialise system trusted keyrings
Dec 13 01:57:02.893778 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 01:57:02.893785 kernel: Key type asymmetric registered
Dec 13 01:57:02.893794 kernel: Asymmetric key parser 'x509' registered
Dec 13 01:57:02.893802 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 13 01:57:02.893818 kernel: io scheduler mq-deadline registered
Dec 13 01:57:02.893827 kernel: io scheduler kyber registered
Dec 13 01:57:02.893834 kernel: io scheduler bfq registered
Dec 13 01:57:02.893841 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 01:57:02.893849 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 13 01:57:02.893856 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Dec 13 01:57:02.893863 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Dec 13 01:57:02.893872 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 01:57:02.893879 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 01:57:02.893886 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 01:57:02.893894 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 01:57:02.893901 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 01:57:02.893973 kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 13 01:57:02.893983 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 01:57:02.894044 kernel: rtc_cmos 00:04: registered as rtc0
Dec 13 01:57:02.894109 kernel: rtc_cmos 00:04: setting system clock to 2024-12-13T01:57:02 UTC (1734055022)
Dec 13 01:57:02.894192 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Dec 13 01:57:02.894202 kernel: efifb: probing for efifb
Dec 13 01:57:02.894209 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Dec 13 01:57:02.894217 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Dec 13 01:57:02.894224 kernel: efifb: scrolling: redraw
Dec 13 01:57:02.894231 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Dec 13 01:57:02.894238 kernel: Console: switching to colour frame buffer device 160x50
Dec 13 01:57:02.894245 kernel: fb0: EFI VGA frame buffer device
Dec 13 01:57:02.894255 kernel: pstore: Registered efi as persistent store backend
Dec 13 01:57:02.894262 kernel: NET: Registered PF_INET6 protocol family
Dec 13 01:57:02.894269 kernel: Segment Routing with IPv6
Dec 13 01:57:02.894278 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 01:57:02.894285 kernel: NET: Registered PF_PACKET protocol family
Dec 13 01:57:02.894292 kernel: Key type dns_resolver registered
Dec 13 01:57:02.894301 kernel: IPI shorthand broadcast: enabled
Dec 13 01:57:02.894308 kernel: sched_clock: Marking stable (482234162, 139279541)->(687283257, -65769554)
Dec 13 01:57:02.894315 kernel: registered taskstats version 1
Dec 13 01:57:02.894322 kernel: Loading compiled-in X.509 certificates
Dec 13 01:57:02.894329 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: d9defb0205602bee9bb670636cbe5c74194fdb5e'
Dec 13 01:57:02.894337 kernel: Key type .fscrypt registered
Dec 13 01:57:02.894344 kernel: Key type fscrypt-provisioning registered
Dec 13 01:57:02.894351 kernel: pstore: Using crash dump compression: deflate
Dec 13 01:57:02.894359 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 01:57:02.894366 kernel: ima: Allocated hash algorithm: sha1 Dec 13 01:57:02.894373 kernel: ima: No architecture policies found Dec 13 01:57:02.894380 kernel: clk: Disabling unused clocks Dec 13 01:57:02.894388 kernel: Freeing unused kernel image (initmem) memory: 47476K Dec 13 01:57:02.894395 kernel: Write protecting the kernel read-only data: 28672k Dec 13 01:57:02.894411 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Dec 13 01:57:02.894418 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K Dec 13 01:57:02.894425 kernel: Run /init as init process Dec 13 01:57:02.894432 kernel: with arguments: Dec 13 01:57:02.894442 kernel: /init Dec 13 01:57:02.894449 kernel: with environment: Dec 13 01:57:02.894456 kernel: HOME=/ Dec 13 01:57:02.894462 kernel: TERM=linux Dec 13 01:57:02.894469 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 01:57:02.894479 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 01:57:02.894488 systemd[1]: Detected virtualization kvm. Dec 13 01:57:02.894495 systemd[1]: Detected architecture x86-64. Dec 13 01:57:02.894504 systemd[1]: Running in initrd. Dec 13 01:57:02.894511 systemd[1]: No hostname configured, using default hostname. Dec 13 01:57:02.894519 systemd[1]: Hostname set to . Dec 13 01:57:02.894527 systemd[1]: Initializing machine ID from VM UUID. Dec 13 01:57:02.894534 systemd[1]: Queued start job for default target initrd.target. Dec 13 01:57:02.894542 systemd[1]: Started systemd-ask-password-console.path. Dec 13 01:57:02.894549 systemd[1]: Reached target cryptsetup.target. Dec 13 01:57:02.894556 systemd[1]: Reached target paths.target. Dec 13 01:57:02.894565 systemd[1]: Reached target slices.target. 
Dec 13 01:57:02.894572 systemd[1]: Reached target swap.target. Dec 13 01:57:02.894579 systemd[1]: Reached target timers.target. Dec 13 01:57:02.894587 systemd[1]: Listening on iscsid.socket. Dec 13 01:57:02.894595 systemd[1]: Listening on iscsiuio.socket. Dec 13 01:57:02.894602 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 01:57:02.894610 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 01:57:02.894619 systemd[1]: Listening on systemd-journald.socket. Dec 13 01:57:02.894626 systemd[1]: Listening on systemd-networkd.socket. Dec 13 01:57:02.894634 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 01:57:02.894641 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 01:57:02.894649 systemd[1]: Reached target sockets.target. Dec 13 01:57:02.894656 systemd[1]: Starting kmod-static-nodes.service... Dec 13 01:57:02.894664 systemd[1]: Finished network-cleanup.service. Dec 13 01:57:02.894671 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 01:57:02.894679 systemd[1]: Starting systemd-journald.service... Dec 13 01:57:02.894687 systemd[1]: Starting systemd-modules-load.service... Dec 13 01:57:02.894695 systemd[1]: Starting systemd-resolved.service... Dec 13 01:57:02.894702 systemd[1]: Starting systemd-vconsole-setup.service... Dec 13 01:57:02.894710 systemd[1]: Finished kmod-static-nodes.service. Dec 13 01:57:02.894717 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 01:57:02.894726 kernel: audit: type=1130 audit(1734055022.886:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:02.894733 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 01:57:02.894745 systemd-journald[197]: Journal started Dec 13 01:57:02.894781 systemd-journald[197]: Runtime Journal (/run/log/journal/3dac2ff4d6b043c78a91178ff01c2c2c) is 6.0M, max 48.4M, 42.4M free. 
Dec 13 01:57:02.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:02.931176 systemd[1]: Started systemd-journald.service. Dec 13 01:57:02.931000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:02.931747 systemd[1]: Finished systemd-vconsole-setup.service. Dec 13 01:57:02.938884 kernel: audit: type=1130 audit(1734055022.931:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:02.938899 kernel: audit: type=1130 audit(1734055022.934:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:02.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:02.934363 systemd-modules-load[198]: Inserted module 'overlay' Dec 13 01:57:02.943585 kernel: audit: type=1130 audit(1734055022.939:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:02.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:02.934536 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. 
Dec 13 01:57:02.943542 systemd[1]: Starting dracut-cmdline-ask.service... Dec 13 01:57:02.955230 systemd-resolved[199]: Positive Trust Anchors: Dec 13 01:57:02.955518 systemd-resolved[199]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:57:02.955554 systemd-resolved[199]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 01:57:02.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:02.957654 systemd-resolved[199]: Defaulting to hostname 'linux'. Dec 13 01:57:03.012307 kernel: audit: type=1130 audit(1734055022.999:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:03.012330 kernel: audit: type=1130 audit(1734055023.009:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:03.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:02.958326 systemd[1]: Started systemd-resolved.service. Dec 13 01:57:03.000077 systemd[1]: Finished dracut-cmdline-ask.service. Dec 13 01:57:03.009407 systemd[1]: Reached target nss-lookup.target. 
Dec 13 01:57:03.014604 systemd[1]: Starting dracut-cmdline.service... Dec 13 01:57:03.025469 dracut-cmdline[214]: dracut-dracut-053 Dec 13 01:57:03.064907 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c Dec 13 01:57:03.071386 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 01:57:03.122870 systemd-modules-load[198]: Inserted module 'br_netfilter' Dec 13 01:57:03.123819 kernel: Bridge firewalling registered Dec 13 01:57:03.139182 kernel: SCSI subsystem initialized Dec 13 01:57:03.158271 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 01:57:03.158288 kernel: device-mapper: uevent: version 1.0.3 Dec 13 01:57:03.159550 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Dec 13 01:57:03.160178 kernel: Loading iSCSI transport class v2.0-870. Dec 13 01:57:03.162219 systemd-modules-load[198]: Inserted module 'dm_multipath' Dec 13 01:57:03.162879 systemd[1]: Finished systemd-modules-load.service. Dec 13 01:57:03.167586 kernel: audit: type=1130 audit(1734055023.163:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:03.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:57:03.167578 systemd[1]: Starting systemd-sysctl.service... Dec 13 01:57:03.175524 systemd[1]: Finished systemd-sysctl.service. Dec 13 01:57:03.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:03.180184 kernel: audit: type=1130 audit(1734055023.176:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:03.192211 kernel: iscsi: registered transport (tcp) Dec 13 01:57:03.219234 kernel: iscsi: registered transport (qla4xxx) Dec 13 01:57:03.219329 kernel: QLogic iSCSI HBA Driver Dec 13 01:57:03.247324 systemd[1]: Finished dracut-cmdline.service. Dec 13 01:57:03.260452 kernel: audit: type=1130 audit(1734055023.247:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:03.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:03.248237 systemd[1]: Starting dracut-pre-udev.service... 
Dec 13 01:57:03.303209 kernel: raid6: avx2x4 gen() 30139 MB/s Dec 13 01:57:03.320222 kernel: raid6: avx2x4 xor() 7218 MB/s Dec 13 01:57:03.339184 kernel: raid6: avx2x2 gen() 32248 MB/s Dec 13 01:57:03.356191 kernel: raid6: avx2x2 xor() 18999 MB/s Dec 13 01:57:03.373188 kernel: raid6: avx2x1 gen() 26074 MB/s Dec 13 01:57:03.396195 kernel: raid6: avx2x1 xor() 14940 MB/s Dec 13 01:57:03.413186 kernel: raid6: sse2x4 gen() 14712 MB/s Dec 13 01:57:03.431186 kernel: raid6: sse2x4 xor() 7053 MB/s Dec 13 01:57:03.458206 kernel: raid6: sse2x2 gen() 16231 MB/s Dec 13 01:57:03.497182 kernel: raid6: sse2x2 xor() 9831 MB/s Dec 13 01:57:03.514184 kernel: raid6: sse2x1 gen() 12416 MB/s Dec 13 01:57:03.531574 kernel: raid6: sse2x1 xor() 7790 MB/s Dec 13 01:57:03.531590 kernel: raid6: using algorithm avx2x2 gen() 32248 MB/s Dec 13 01:57:03.531600 kernel: raid6: .... xor() 18999 MB/s, rmw enabled Dec 13 01:57:03.532293 kernel: raid6: using avx2x2 recovery algorithm Dec 13 01:57:03.564184 kernel: xor: automatically using best checksumming function avx Dec 13 01:57:03.652193 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Dec 13 01:57:03.660416 systemd[1]: Finished dracut-pre-udev.service. Dec 13 01:57:03.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:03.661000 audit: BPF prog-id=7 op=LOAD Dec 13 01:57:03.661000 audit: BPF prog-id=8 op=LOAD Dec 13 01:57:03.662510 systemd[1]: Starting systemd-udevd.service... Dec 13 01:57:03.674029 systemd-udevd[401]: Using default interface naming scheme 'v252'. Dec 13 01:57:03.677752 systemd[1]: Started systemd-udevd.service. Dec 13 01:57:03.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:57:03.679234 systemd[1]: Starting dracut-pre-trigger.service... Dec 13 01:57:03.688798 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation Dec 13 01:57:03.711198 systemd[1]: Finished dracut-pre-trigger.service. Dec 13 01:57:03.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:03.713571 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 01:57:03.744067 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 01:57:03.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:03.772437 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Dec 13 01:57:03.781661 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 01:57:03.781674 kernel: GPT:9289727 != 19775487 Dec 13 01:57:03.781683 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 01:57:03.781691 kernel: GPT:9289727 != 19775487 Dec 13 01:57:03.781699 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 01:57:03.781708 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 01:57:03.788180 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 01:57:03.798180 kernel: AVX2 version of gcm_enc/dec engaged. Dec 13 01:57:03.798201 kernel: libata version 3.00 loaded. 
Dec 13 01:57:03.798211 kernel: AES CTR mode by8 optimization enabled Dec 13 01:57:03.811618 kernel: ahci 0000:00:1f.2: version 3.0 Dec 13 01:57:03.860516 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 13 01:57:03.860531 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Dec 13 01:57:03.860617 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 13 01:57:03.860688 kernel: scsi host0: ahci Dec 13 01:57:03.860784 kernel: scsi host1: ahci Dec 13 01:57:03.860866 kernel: scsi host2: ahci Dec 13 01:57:03.860954 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (460) Dec 13 01:57:03.860963 kernel: scsi host3: ahci Dec 13 01:57:03.861047 kernel: scsi host4: ahci Dec 13 01:57:03.861139 kernel: scsi host5: ahci Dec 13 01:57:03.861320 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Dec 13 01:57:03.861331 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Dec 13 01:57:03.861340 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Dec 13 01:57:03.861351 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Dec 13 01:57:03.861360 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Dec 13 01:57:03.861369 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Dec 13 01:57:03.818047 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Dec 13 01:57:03.840097 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Dec 13 01:57:03.844941 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Dec 13 01:57:03.854365 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Dec 13 01:57:03.866746 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 01:57:03.867429 systemd[1]: Starting disk-uuid.service... Dec 13 01:57:03.996809 disk-uuid[547]: Primary Header is updated. 
Dec 13 01:57:03.996809 disk-uuid[547]: Secondary Entries is updated. Dec 13 01:57:03.996809 disk-uuid[547]: Secondary Header is updated. Dec 13 01:57:04.000265 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 01:57:04.173858 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Dec 13 01:57:04.173916 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 13 01:57:04.173928 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 13 01:57:04.175795 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 13 01:57:04.176183 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 13 01:57:04.177188 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Dec 13 01:57:04.178197 kernel: ata3.00: applying bridge limits Dec 13 01:57:04.179180 kernel: ata1: SATA link down (SStatus 0 SControl 300) Dec 13 01:57:04.180188 kernel: ata3.00: configured for UDMA/100 Dec 13 01:57:04.180199 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Dec 13 01:57:04.209179 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Dec 13 01:57:04.226674 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 13 01:57:04.226686 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Dec 13 01:57:05.008192 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 01:57:05.008586 disk-uuid[548]: The operation has completed successfully. Dec 13 01:57:05.030457 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 01:57:05.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:05.030000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:05.030550 systemd[1]: Finished disk-uuid.service. Dec 13 01:57:05.037713 systemd[1]: Starting verity-setup.service... 
Dec 13 01:57:05.051186 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Dec 13 01:57:05.068036 systemd[1]: Found device dev-mapper-usr.device. Dec 13 01:57:05.069313 systemd[1]: Mounting sysusr-usr.mount... Dec 13 01:57:05.070955 systemd[1]: Finished verity-setup.service. Dec 13 01:57:05.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:05.125190 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 01:57:05.125761 systemd[1]: Mounted sysusr-usr.mount. Dec 13 01:57:05.127343 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 01:57:05.128001 systemd[1]: Starting ignition-setup.service... Dec 13 01:57:05.129935 systemd[1]: Starting parse-ip-for-networkd.service... Dec 13 01:57:05.136449 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:57:05.136471 kernel: BTRFS info (device vda6): using free space tree Dec 13 01:57:05.136481 kernel: BTRFS info (device vda6): has skinny extents Dec 13 01:57:05.144734 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 01:57:05.151870 systemd[1]: Finished ignition-setup.service. Dec 13 01:57:05.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:05.153416 systemd[1]: Starting ignition-fetch-offline.service... 
Dec 13 01:57:05.186785 ignition[651]: Ignition 2.14.0 Dec 13 01:57:05.187118 ignition[651]: Stage: fetch-offline Dec 13 01:57:05.187174 ignition[651]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:57:05.187182 ignition[651]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:57:05.187272 ignition[651]: parsed url from cmdline: "" Dec 13 01:57:05.187274 ignition[651]: no config URL provided Dec 13 01:57:05.187278 ignition[651]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:57:05.187284 ignition[651]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:57:05.187478 ignition[651]: op(1): [started] loading QEMU firmware config module Dec 13 01:57:05.187486 ignition[651]: op(1): executing: "modprobe" "qemu_fw_cfg" Dec 13 01:57:05.190908 ignition[651]: op(1): [finished] loading QEMU firmware config module Dec 13 01:57:05.191846 ignition[651]: parsing config with SHA512: 486aca6b3fc3f7e35ece04e699f7bd511b4a9c710e133a7b21d4a2db30548fac44bab45f691e0c5bbe4c0e69e01ebaa05faa78f50dbce5b1253d02a9f77fc039 Dec 13 01:57:05.198457 unknown[651]: fetched base config from "system" Dec 13 01:57:05.199751 unknown[651]: fetched user config from "qemu" Dec 13 01:57:05.200891 ignition[651]: fetch-offline: fetch-offline passed Dec 13 01:57:05.201799 ignition[651]: Ignition finished successfully Dec 13 01:57:05.203107 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 01:57:05.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:05.204000 audit: BPF prog-id=9 op=LOAD Dec 13 01:57:05.205268 systemd[1]: Starting systemd-networkd.service... Dec 13 01:57:05.205480 systemd[1]: Finished ignition-fetch-offline.service. 
Dec 13 01:57:05.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:05.224826 systemd-networkd[730]: lo: Link UP Dec 13 01:57:05.224835 systemd-networkd[730]: lo: Gained carrier Dec 13 01:57:05.225298 systemd-networkd[730]: Enumeration completed Dec 13 01:57:05.225494 systemd-networkd[730]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:57:05.226760 systemd[1]: Started systemd-networkd.service. Dec 13 01:57:05.227135 systemd-networkd[730]: eth0: Link UP Dec 13 01:57:05.227138 systemd-networkd[730]: eth0: Gained carrier Dec 13 01:57:05.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:05.231659 systemd[1]: Reached target network.target. Dec 13 01:57:05.233190 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 13 01:57:05.235516 systemd[1]: Starting ignition-kargs.service... Dec 13 01:57:05.237537 systemd[1]: Starting iscsiuio.service... Dec 13 01:57:05.242343 systemd[1]: Started iscsiuio.service. Dec 13 01:57:05.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:05.243960 systemd[1]: Starting iscsid.service... 
Dec 13 01:57:05.244273 systemd-networkd[730]: eth0: DHCPv4 address 10.0.0.123/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 01:57:05.245024 ignition[732]: Ignition 2.14.0 Dec 13 01:57:05.245029 ignition[732]: Stage: kargs Dec 13 01:57:05.248626 iscsid[741]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 01:57:05.248626 iscsid[741]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Dec 13 01:57:05.248626 iscsid[741]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Dec 13 01:57:05.248626 iscsid[741]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Dec 13 01:57:05.248626 iscsid[741]: If using hardware iscsi like qla4xxx this message can be ignored. Dec 13 01:57:05.248626 iscsid[741]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 01:57:05.248626 iscsid[741]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 01:57:05.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:05.259000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:05.245156 ignition[732]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:57:05.248527 systemd[1]: Started iscsid.service. Dec 13 01:57:05.245173 ignition[732]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:57:05.249974 systemd[1]: Starting dracut-initqueue.service... 
Dec 13 01:57:05.246127 ignition[732]: kargs: kargs passed Dec 13 01:57:05.259008 systemd[1]: Finished dracut-initqueue.service. Dec 13 01:57:05.269000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:05.246200 ignition[732]: Ignition finished successfully Dec 13 01:57:05.260063 systemd[1]: Reached target remote-fs-pre.target. Dec 13 01:57:05.262724 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 01:57:05.264405 systemd[1]: Reached target remote-fs.target. Dec 13 01:57:05.266885 systemd[1]: Starting dracut-pre-mount.service... Dec 13 01:57:05.268562 systemd[1]: Finished ignition-kargs.service. Dec 13 01:57:05.270441 systemd[1]: Starting ignition-disks.service... Dec 13 01:57:05.278269 ignition[751]: Ignition 2.14.0 Dec 13 01:57:05.278276 ignition[751]: Stage: disks Dec 13 01:57:05.278352 ignition[751]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:57:05.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:05.279816 systemd[1]: Finished ignition-disks.service. Dec 13 01:57:05.278359 ignition[751]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:57:05.280400 systemd[1]: Reached target initrd-root-device.target. Dec 13 01:57:05.278997 ignition[751]: disks: disks passed Dec 13 01:57:05.282668 systemd[1]: Reached target local-fs-pre.target. Dec 13 01:57:05.279029 ignition[751]: Ignition finished successfully Dec 13 01:57:05.284215 systemd[1]: Reached target local-fs.target. Dec 13 01:57:05.285999 systemd[1]: Reached target sysinit.target. Dec 13 01:57:05.287594 systemd[1]: Reached target basic.target. Dec 13 01:57:05.293032 systemd[1]: Finished dracut-pre-mount.service. 
Dec 13 01:57:05.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:05.294675 systemd[1]: Starting systemd-fsck-root.service... Dec 13 01:57:05.305717 systemd-fsck[763]: ROOT: clean, 621/553520 files, 56021/553472 blocks Dec 13 01:57:05.311014 systemd[1]: Finished systemd-fsck-root.service. Dec 13 01:57:05.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:05.311778 systemd[1]: Mounting sysroot.mount... Dec 13 01:57:05.318182 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 01:57:05.318769 systemd[1]: Mounted sysroot.mount. Dec 13 01:57:05.318867 systemd[1]: Reached target initrd-root-fs.target. Dec 13 01:57:05.321261 systemd[1]: Mounting sysroot-usr.mount... Dec 13 01:57:05.322022 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Dec 13 01:57:05.322070 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 01:57:05.322089 systemd[1]: Reached target ignition-diskful.target. Dec 13 01:57:05.324513 systemd[1]: Mounted sysroot-usr.mount. Dec 13 01:57:05.326404 systemd[1]: Starting initrd-setup-root.service... 
Dec 13 01:57:05.329559 initrd-setup-root[773]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 01:57:05.334740 initrd-setup-root[781]: cut: /sysroot/etc/group: No such file or directory Dec 13 01:57:05.337203 initrd-setup-root[789]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 01:57:05.339838 initrd-setup-root[797]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 01:57:05.366254 systemd[1]: Finished initrd-setup-root.service. Dec 13 01:57:05.367000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:05.368713 systemd[1]: Starting ignition-mount.service... Dec 13 01:57:05.370962 systemd[1]: Starting sysroot-boot.service... Dec 13 01:57:05.373250 bash[814]: umount: /sysroot/usr/share/oem: not mounted. Dec 13 01:57:05.379858 ignition[815]: INFO : Ignition 2.14.0 Dec 13 01:57:05.380851 ignition[815]: INFO : Stage: mount Dec 13 01:57:05.381692 ignition[815]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:57:05.381692 ignition[815]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:57:05.384775 ignition[815]: INFO : mount: mount passed Dec 13 01:57:05.384775 ignition[815]: INFO : Ignition finished successfully Dec 13 01:57:05.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:05.383261 systemd[1]: Finished ignition-mount.service. Dec 13 01:57:05.388647 systemd[1]: Finished sysroot-boot.service. Dec 13 01:57:05.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:06.076753 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
Dec 13 01:57:06.082185 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (824)
Dec 13 01:57:06.084396 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:57:06.084417 kernel: BTRFS info (device vda6): using free space tree
Dec 13 01:57:06.084431 kernel: BTRFS info (device vda6): has skinny extents
Dec 13 01:57:06.087937 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Dec 13 01:57:06.090207 systemd[1]: Starting ignition-files.service...
Dec 13 01:57:06.102127 ignition[844]: INFO : Ignition 2.14.0
Dec 13 01:57:06.102127 ignition[844]: INFO : Stage: files
Dec 13 01:57:06.104080 ignition[844]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:57:06.104080 ignition[844]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:57:06.104080 ignition[844]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 01:57:06.107856 ignition[844]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 01:57:06.107856 ignition[844]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 01:57:06.107856 ignition[844]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 01:57:06.107856 ignition[844]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 01:57:06.107856 ignition[844]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 01:57:06.107856 ignition[844]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 01:57:06.107856 ignition[844]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 01:57:06.107856 ignition[844]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:57:06.107856 ignition[844]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:57:06.107856 ignition[844]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 01:57:06.107856 ignition[844]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 01:57:06.107856 ignition[844]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 01:57:06.107856 ignition[844]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Dec 13 01:57:06.106086 unknown[844]: wrote ssh authorized keys file for user: core
Dec 13 01:57:06.326258 systemd-networkd[730]: eth0: Gained IPv6LL
Dec 13 01:57:06.585410 ignition[844]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Dec 13 01:57:07.072200 ignition[844]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 01:57:07.072200 ignition[844]: INFO : files: op(7): [started] processing unit "coreos-metadata.service"
Dec 13 01:57:07.076298 ignition[844]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 01:57:07.076298 ignition[844]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 01:57:07.076298 ignition[844]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service"
Dec 13 01:57:07.076298 ignition[844]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service"
Dec 13 01:57:07.076298 ignition[844]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 01:57:07.103892 ignition[844]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 01:57:07.105579 ignition[844]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service"
Dec 13 01:57:07.107099 ignition[844]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:57:07.109295 ignition[844]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:57:07.109295 ignition[844]: INFO : files: files passed
Dec 13 01:57:07.112031 ignition[844]: INFO : Ignition finished successfully
Dec 13 01:57:07.113712 systemd[1]: Finished ignition-files.service.
Dec 13 01:57:07.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:07.115489 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Dec 13 01:57:07.115576 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Dec 13 01:57:07.117528 systemd[1]: Starting ignition-quench.service...
Dec 13 01:57:07.121305 initrd-setup-root-after-ignition[868]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
Dec 13 01:57:07.122231 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Dec 13 01:57:07.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:07.124960 initrd-setup-root-after-ignition[870]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:57:07.123994 systemd[1]: Reached target ignition-complete.target.
Dec 13 01:57:07.126414 systemd[1]: Starting initrd-parse-etc.service...
Dec 13 01:57:07.131629 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 01:57:07.131727 systemd[1]: Finished ignition-quench.service.
Dec 13 01:57:07.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:07.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:07.137545 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 01:57:07.137657 systemd[1]: Finished initrd-parse-etc.service.
Dec 13 01:57:07.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:07.140390 systemd[1]: Reached target initrd-fs.target.
Dec 13 01:57:07.148656 kernel: kauditd_printk_skb: 28 callbacks suppressed
Dec 13 01:57:07.148680 kernel: audit: type=1130 audit(1734055027.140:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:07.148691 kernel: audit: type=1131 audit(1734055027.140:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:07.140000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:07.148626 systemd[1]: Reached target initrd.target.
Dec 13 01:57:07.150411 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Dec 13 01:57:07.152967 systemd[1]: Starting dracut-pre-pivot.service...
Dec 13 01:57:07.163721 systemd[1]: Finished dracut-pre-pivot.service.
Dec 13 01:57:07.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:07.166884 systemd[1]: Starting initrd-cleanup.service...
Dec 13 01:57:07.170766 kernel: audit: type=1130 audit(1734055027.165:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:07.177519 systemd[1]: Stopped target nss-lookup.target.
Dec 13 01:57:07.179515 systemd[1]: Stopped target remote-cryptsetup.target.
Dec 13 01:57:07.181358 systemd[1]: Stopped target timers.target.
Dec 13 01:57:07.182911 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 01:57:07.183918 systemd[1]: Stopped dracut-pre-pivot.service.
Dec 13 01:57:07.185000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:07.185670 systemd[1]: Stopped target initrd.target.
Dec 13 01:57:07.190023 kernel: audit: type=1131 audit(1734055027.185:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:07.190073 systemd[1]: Stopped target basic.target.
Dec 13 01:57:07.191621 systemd[1]: Stopped target ignition-complete.target.
Dec 13 01:57:07.193432 systemd[1]: Stopped target ignition-diskful.target.
Dec 13 01:57:07.195200 systemd[1]: Stopped target initrd-root-device.target.
Dec 13 01:57:07.196991 systemd[1]: Stopped target remote-fs.target.
Dec 13 01:57:07.198635 systemd[1]: Stopped target remote-fs-pre.target.
Dec 13 01:57:07.200368 systemd[1]: Stopped target sysinit.target.
Dec 13 01:57:07.201923 systemd[1]: Stopped target local-fs.target.
Dec 13 01:57:07.203506 systemd[1]: Stopped target local-fs-pre.target.
Dec 13 01:57:07.205175 systemd[1]: Stopped target swap.target.
Dec 13 01:57:07.206637 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 01:57:07.207634 systemd[1]: Stopped dracut-pre-mount.service.
Dec 13 01:57:07.209000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:07.209335 systemd[1]: Stopped target cryptsetup.target.
Dec 13 01:57:07.213669 kernel: audit: type=1131 audit(1734055027.209:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:07.213708 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 01:57:07.214693 systemd[1]: Stopped dracut-initqueue.service.
Dec 13 01:57:07.216000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:07.216368 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 01:57:07.220186 kernel: audit: type=1131 audit(1734055027.216:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:07.216458 systemd[1]: Stopped ignition-fetch-offline.service.
Dec 13 01:57:07.221000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:07.221984 systemd[1]: Stopped target paths.target.
Dec 13 01:57:07.226290 kernel: audit: type=1131 audit(1734055027.221:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:07.226314 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 01:57:07.229205 systemd[1]: Stopped systemd-ask-password-console.path.
Dec 13 01:57:07.231034 systemd[1]: Stopped target slices.target.
Dec 13 01:57:07.232574 systemd[1]: Stopped target sockets.target.
Dec 13 01:57:07.234148 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 01:57:07.235316 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Dec 13 01:57:07.237000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:07.237317 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 01:57:07.241356 kernel: audit: type=1131 audit(1734055027.237:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:07.237400 systemd[1]: Stopped ignition-files.service.
Dec 13 01:57:07.242000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:07.243635 systemd[1]: Stopping ignition-mount.service...
Dec 13 01:57:07.247214 kernel: audit: type=1131 audit(1734055027.242:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:07.247344 systemd[1]: Stopping iscsid.service...
Dec 13 01:57:07.248716 iscsid[741]: iscsid shutting down.
Dec 13 01:57:07.249538 ignition[884]: INFO : Ignition 2.14.0
Dec 13 01:57:07.249538 ignition[884]: INFO : Stage: umount
Dec 13 01:57:07.251360 ignition[884]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:57:07.251360 ignition[884]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:57:07.251360 ignition[884]: INFO : umount: umount passed
Dec 13 01:57:07.251360 ignition[884]: INFO : Ignition finished successfully
Dec 13 01:57:07.256232 systemd[1]: Stopping sysroot-boot.service...
Dec 13 01:57:07.258036 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 01:57:07.259440 systemd[1]: Stopped systemd-udev-trigger.service.
Dec 13 01:57:07.261000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:07.261813 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 01:57:07.266155 kernel: audit: type=1131 audit(1734055027.261:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:07.261971 systemd[1]: Stopped dracut-pre-trigger.service.
Dec 13 01:57:07.268000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:07.271406 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 01:57:07.273214 systemd[1]: iscsid.service: Deactivated successfully.
Dec 13 01:57:07.274389 systemd[1]: Stopped iscsid.service.
Dec 13 01:57:07.276000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:07.276623 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 01:57:07.277922 systemd[1]: Stopped ignition-mount.service.
Dec 13 01:57:07.279000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:07.280281 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 01:57:07.281434 systemd[1]: Closed iscsid.socket.
Dec 13 01:57:07.283073 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 01:57:07.283122 systemd[1]: Stopped ignition-disks.service.
Dec 13 01:57:07.285000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:07.286174 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 01:57:07.287297 systemd[1]: Stopped ignition-kargs.service.
Dec 13 01:57:07.289000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:07.289255 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 01:57:07.289299 systemd[1]: Stopped ignition-setup.service.
Dec 13 01:57:07.292000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:07.292464 systemd[1]: Stopping iscsiuio.service...
Dec 13 01:57:07.294547 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 01:57:07.295815 systemd[1]: Finished initrd-cleanup.service.
Dec 13 01:57:07.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:07.297000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:07.298024 systemd[1]: iscsiuio.service: Deactivated successfully.
Dec 13 01:57:07.299123 systemd[1]: Stopped iscsiuio.service.
Dec 13 01:57:07.300000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:07.301800 systemd[1]: Stopped target network.target.
Dec 13 01:57:07.303807 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 01:57:07.303851 systemd[1]: Closed iscsiuio.socket.
Dec 13 01:57:07.306177 systemd[1]: Stopping systemd-networkd.service...
Dec 13 01:57:07.307902 systemd[1]: Stopping systemd-resolved.service...
Dec 13 01:57:07.316566 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 01:57:07.316686 systemd[1]: Stopped systemd-resolved.service.
Dec 13 01:57:07.317000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:07.318208 systemd-networkd[730]: eth0: DHCPv6 lease lost
Dec 13 01:57:07.319450 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 01:57:07.320000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:07.319543 systemd[1]: Stopped systemd-networkd.service.
Dec 13 01:57:07.323000 audit: BPF prog-id=6 op=UNLOAD
Dec 13 01:57:07.320986 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 01:57:07.321022 systemd[1]: Closed systemd-networkd.socket.
Dec 13 01:57:07.326057 systemd[1]: Stopping network-cleanup.service...
Dec 13 01:57:07.325000 audit: BPF prog-id=9 op=UNLOAD
Dec 13 01:57:07.327693 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 01:57:07.327738 systemd[1]: Stopped parse-ip-for-networkd.service.
Dec 13 01:57:07.330000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:07.330602 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 01:57:07.330637 systemd[1]: Stopped systemd-sysctl.service.
Dec 13 01:57:07.332000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:07.333130 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 01:57:07.333178 systemd[1]: Stopped systemd-modules-load.service.
Dec 13 01:57:07.335000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:07.335876 systemd[1]: Stopping systemd-udevd.service...
Dec 13 01:57:07.338242 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 13 01:57:07.341675 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 01:57:07.342665 systemd[1]: Stopped network-cleanup.service.
Dec 13 01:57:07.344000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:07.344513 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 01:57:07.345502 systemd[1]: Stopped systemd-udevd.service.
Dec 13 01:57:07.346000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:07.347450 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 01:57:07.347484 systemd[1]: Closed systemd-udevd-control.socket.
Dec 13 01:57:07.350012 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 01:57:07.350041 systemd[1]: Closed systemd-udevd-kernel.socket.
Dec 13 01:57:07.352604 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 01:57:07.352637 systemd[1]: Stopped dracut-pre-udev.service.
Dec 13 01:57:07.354000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:07.355131 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 01:57:07.355172 systemd[1]: Stopped dracut-cmdline.service.
Dec 13 01:57:07.357000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:07.357527 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:57:07.357558 systemd[1]: Stopped dracut-cmdline-ask.service.
Dec 13 01:57:07.359000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:07.360614 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Dec 13 01:57:07.362347 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 01:57:07.364000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:07.362385 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Dec 13 01:57:07.364441 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 01:57:07.366000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:07.365290 systemd[1]: Stopped kmod-static-nodes.service.
Dec 13 01:57:07.367093 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:57:07.367123 systemd[1]: Stopped systemd-vconsole-setup.service.
Dec 13 01:57:07.370000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:07.370983 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 13 01:57:07.372687 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 01:57:07.373752 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Dec 13 01:57:07.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:07.391000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:07.510086 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 01:57:07.510211 systemd[1]: Stopped sysroot-boot.service.
Dec 13 01:57:07.512000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:07.512736 systemd[1]: Reached target initrd-switch-root.target.
Dec 13 01:57:07.514488 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 01:57:07.514524 systemd[1]: Stopped initrd-setup-root.service.
Dec 13 01:57:07.516000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:07.517693 systemd[1]: Starting initrd-switch-root.service...
Dec 13 01:57:07.533751 systemd[1]: Switching root.
Dec 13 01:57:07.564676 systemd-journald[197]: Journal stopped
Dec 13 01:57:10.979297 systemd-journald[197]: Received SIGTERM from PID 1 (systemd).
Dec 13 01:57:10.979356 kernel: SELinux: Class mctp_socket not defined in policy.
Dec 13 01:57:10.979368 kernel: SELinux: Class anon_inode not defined in policy.
Dec 13 01:57:10.979386 kernel: SELinux: the above unknown classes and permissions will be allowed
Dec 13 01:57:10.979395 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 01:57:10.979405 kernel: SELinux: policy capability open_perms=1
Dec 13 01:57:10.979419 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 01:57:10.979428 kernel: SELinux: policy capability always_check_network=0
Dec 13 01:57:10.979437 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 01:57:10.979450 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 01:57:10.979459 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 01:57:10.979472 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 01:57:10.979482 systemd[1]: Successfully loaded SELinux policy in 57.265ms.
Dec 13 01:57:10.979497 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.177ms.
Dec 13 01:57:10.979509 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 01:57:10.979520 systemd[1]: Detected virtualization kvm.
Dec 13 01:57:10.979529 systemd[1]: Detected architecture x86-64.
Dec 13 01:57:10.979539 systemd[1]: Detected first boot.
Dec 13 01:57:10.979550 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 01:57:10.979559 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Dec 13 01:57:10.979569 systemd[1]: Populated /etc with preset unit settings.
Dec 13 01:57:10.979581 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 01:57:10.979592 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 01:57:10.979603 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:57:10.979613 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 01:57:10.979623 systemd[1]: Stopped initrd-switch-root.service.
Dec 13 01:57:10.979633 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 01:57:10.979643 systemd[1]: Created slice system-addon\x2dconfig.slice.
Dec 13 01:57:10.979653 systemd[1]: Created slice system-addon\x2drun.slice.
Dec 13 01:57:10.979664 systemd[1]: Created slice system-getty.slice.
Dec 13 01:57:10.979674 systemd[1]: Created slice system-modprobe.slice.
Dec 13 01:57:10.979684 systemd[1]: Created slice system-serial\x2dgetty.slice.
Dec 13 01:57:10.979694 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Dec 13 01:57:10.979704 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Dec 13 01:57:10.979714 systemd[1]: Created slice user.slice.
Dec 13 01:57:10.979723 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 01:57:10.979734 systemd[1]: Started systemd-ask-password-wall.path.
Dec 13 01:57:10.979744 systemd[1]: Set up automount boot.automount.
Dec 13 01:57:10.979755 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Dec 13 01:57:10.979765 systemd[1]: Stopped target initrd-switch-root.target.
Dec 13 01:57:10.979774 systemd[1]: Stopped target initrd-fs.target.
Dec 13 01:57:10.979785 systemd[1]: Stopped target initrd-root-fs.target.
Dec 13 01:57:10.979795 systemd[1]: Reached target integritysetup.target.
Dec 13 01:57:10.979806 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 01:57:10.979816 systemd[1]: Reached target remote-fs.target.
Dec 13 01:57:10.979825 systemd[1]: Reached target slices.target.
Dec 13 01:57:10.979835 systemd[1]: Reached target swap.target.
Dec 13 01:57:10.979845 systemd[1]: Reached target torcx.target.
Dec 13 01:57:10.979855 systemd[1]: Reached target veritysetup.target.
Dec 13 01:57:10.979865 systemd[1]: Listening on systemd-coredump.socket.
Dec 13 01:57:10.979875 systemd[1]: Listening on systemd-initctl.socket.
Dec 13 01:57:10.979885 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 01:57:10.979902 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 01:57:10.979912 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 01:57:10.979922 systemd[1]: Listening on systemd-userdbd.socket.
Dec 13 01:57:10.979932 systemd[1]: Mounting dev-hugepages.mount...
Dec 13 01:57:10.979942 systemd[1]: Mounting dev-mqueue.mount...
Dec 13 01:57:10.979951 systemd[1]: Mounting media.mount...
Dec 13 01:57:10.979961 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:57:10.979971 systemd[1]: Mounting sys-kernel-debug.mount...
Dec 13 01:57:10.979981 systemd[1]: Mounting sys-kernel-tracing.mount...
Dec 13 01:57:10.979992 systemd[1]: Mounting tmp.mount...
Dec 13 01:57:10.980001 systemd[1]: Starting flatcar-tmpfiles.service...
Dec 13 01:57:10.980011 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 01:57:10.980021 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 01:57:10.980031 systemd[1]: Starting modprobe@configfs.service...
Dec 13 01:57:10.980041 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 01:57:10.980051 systemd[1]: Starting modprobe@drm.service...
Dec 13 01:57:10.980061 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 01:57:10.980071 systemd[1]: Starting modprobe@fuse.service...
Dec 13 01:57:10.980082 systemd[1]: Starting modprobe@loop.service...
Dec 13 01:57:10.980093 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 01:57:10.980103 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 01:57:10.980113 systemd[1]: Stopped systemd-fsck-root.service. Dec 13 01:57:10.980123 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 01:57:10.980134 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 01:57:10.980143 kernel: fuse: init (API version 7.34) Dec 13 01:57:10.980156 systemd[1]: Stopped systemd-journald.service. Dec 13 01:57:10.980177 kernel: loop: module loaded Dec 13 01:57:10.980188 systemd[1]: Starting systemd-journald.service... Dec 13 01:57:10.980198 systemd[1]: Starting systemd-modules-load.service... Dec 13 01:57:10.980208 systemd[1]: Starting systemd-network-generator.service... Dec 13 01:57:10.980218 systemd[1]: Starting systemd-remount-fs.service... Dec 13 01:57:10.980228 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 01:57:10.980238 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 01:57:10.980247 systemd[1]: Stopped verity-setup.service. Dec 13 01:57:10.980257 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:57:10.980270 systemd-journald[998]: Journal started Dec 13 01:57:10.980308 systemd-journald[998]: Runtime Journal (/run/log/journal/3dac2ff4d6b043c78a91178ff01c2c2c) is 6.0M, max 48.4M, 42.4M free. 
Dec 13 01:57:07.632000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 01:57:07.907000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 01:57:07.907000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 01:57:07.907000 audit: BPF prog-id=10 op=LOAD Dec 13 01:57:07.907000 audit: BPF prog-id=10 op=UNLOAD Dec 13 01:57:07.907000 audit: BPF prog-id=11 op=LOAD Dec 13 01:57:07.907000 audit: BPF prog-id=11 op=UNLOAD Dec 13 01:57:07.938000 audit[917]: AVC avc: denied { associate } for pid=917 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Dec 13 01:57:07.938000 audit[917]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001058e2 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=900 pid=917 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:57:07.938000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 01:57:07.940000 audit[917]: AVC avc: denied { associate } for pid=917 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Dec 13 01:57:07.940000 audit[917]: SYSCALL 
arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001059b9 a2=1ed a3=0 items=2 ppid=900 pid=917 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:57:07.940000 audit: CWD cwd="/" Dec 13 01:57:07.940000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:07.940000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:07.940000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 01:57:10.843000 audit: BPF prog-id=12 op=LOAD Dec 13 01:57:10.843000 audit: BPF prog-id=3 op=UNLOAD Dec 13 01:57:10.843000 audit: BPF prog-id=13 op=LOAD Dec 13 01:57:10.843000 audit: BPF prog-id=14 op=LOAD Dec 13 01:57:10.843000 audit: BPF prog-id=4 op=UNLOAD Dec 13 01:57:10.843000 audit: BPF prog-id=5 op=UNLOAD Dec 13 01:57:10.844000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:10.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:57:10.848000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:10.853000 audit: BPF prog-id=12 op=UNLOAD Dec 13 01:57:10.952000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:10.955000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:10.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:10.958000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:10.959000 audit: BPF prog-id=15 op=LOAD Dec 13 01:57:10.959000 audit: BPF prog-id=16 op=LOAD Dec 13 01:57:10.959000 audit: BPF prog-id=17 op=LOAD Dec 13 01:57:10.959000 audit: BPF prog-id=13 op=UNLOAD Dec 13 01:57:10.959000 audit: BPF prog-id=14 op=UNLOAD Dec 13 01:57:10.977000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:57:10.978000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 01:57:10.978000 audit[998]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffe98558b40 a2=4000 a3=7ffe98558bdc items=0 ppid=1 pid=998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:57:10.978000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 01:57:10.841720 systemd[1]: Queued start job for default target multi-user.target. Dec 13 01:57:07.937172 /usr/lib/systemd/system-generators/torcx-generator[917]: time="2024-12-13T01:57:07Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 01:57:10.841730 systemd[1]: Unnecessary job was removed for dev-vda6.device. Dec 13 01:57:07.937364 /usr/lib/systemd/system-generators/torcx-generator[917]: time="2024-12-13T01:57:07Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 01:57:10.844507 systemd[1]: systemd-journald.service: Deactivated successfully. 
Dec 13 01:57:07.937379 /usr/lib/systemd/system-generators/torcx-generator[917]: time="2024-12-13T01:57:07Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 01:57:07.937403 /usr/lib/systemd/system-generators/torcx-generator[917]: time="2024-12-13T01:57:07Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Dec 13 01:57:07.937412 /usr/lib/systemd/system-generators/torcx-generator[917]: time="2024-12-13T01:57:07Z" level=debug msg="skipped missing lower profile" missing profile=oem Dec 13 01:57:07.937436 /usr/lib/systemd/system-generators/torcx-generator[917]: time="2024-12-13T01:57:07Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Dec 13 01:57:07.937448 /usr/lib/systemd/system-generators/torcx-generator[917]: time="2024-12-13T01:57:07Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Dec 13 01:57:07.937619 /usr/lib/systemd/system-generators/torcx-generator[917]: time="2024-12-13T01:57:07Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Dec 13 01:57:07.937650 /usr/lib/systemd/system-generators/torcx-generator[917]: time="2024-12-13T01:57:07Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 01:57:07.937661 /usr/lib/systemd/system-generators/torcx-generator[917]: time="2024-12-13T01:57:07Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 01:57:07.938184 /usr/lib/systemd/system-generators/torcx-generator[917]: time="2024-12-13T01:57:07Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Dec 13 01:57:10.982175 systemd[1]: Started systemd-journald.service. 
Dec 13 01:57:07.938217 /usr/lib/systemd/system-generators/torcx-generator[917]: time="2024-12-13T01:57:07Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Dec 13 01:57:10.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:07.938232 /usr/lib/systemd/system-generators/torcx-generator[917]: time="2024-12-13T01:57:07Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6 Dec 13 01:57:07.938244 /usr/lib/systemd/system-generators/torcx-generator[917]: time="2024-12-13T01:57:07Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Dec 13 01:57:07.938258 /usr/lib/systemd/system-generators/torcx-generator[917]: time="2024-12-13T01:57:07Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6 Dec 13 01:57:07.938269 /usr/lib/systemd/system-generators/torcx-generator[917]: time="2024-12-13T01:57:07Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Dec 13 01:57:10.982594 systemd[1]: Mounted dev-hugepages.mount. 
Dec 13 01:57:10.578477 /usr/lib/systemd/system-generators/torcx-generator[917]: time="2024-12-13T01:57:10Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 01:57:10.578753 /usr/lib/systemd/system-generators/torcx-generator[917]: time="2024-12-13T01:57:10Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 01:57:10.578862 /usr/lib/systemd/system-generators/torcx-generator[917]: time="2024-12-13T01:57:10Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 01:57:10.579036 /usr/lib/systemd/system-generators/torcx-generator[917]: time="2024-12-13T01:57:10Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 01:57:10.579080 /usr/lib/systemd/system-generators/torcx-generator[917]: time="2024-12-13T01:57:10Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Dec 13 01:57:10.579144 /usr/lib/systemd/system-generators/torcx-generator[917]: time="2024-12-13T01:57:10Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Dec 13 01:57:10.983602 systemd[1]: Mounted dev-mqueue.mount. 
Dec 13 01:57:10.984472 systemd[1]: Mounted media.mount. Dec 13 01:57:10.985371 systemd[1]: Mounted sys-kernel-debug.mount. Dec 13 01:57:10.986460 systemd[1]: Mounted sys-kernel-tracing.mount. Dec 13 01:57:10.987496 systemd[1]: Mounted tmp.mount. Dec 13 01:57:10.988516 systemd[1]: Finished flatcar-tmpfiles.service. Dec 13 01:57:10.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:10.989658 systemd[1]: Finished kmod-static-nodes.service. Dec 13 01:57:10.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:10.990760 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 01:57:10.990941 systemd[1]: Finished modprobe@configfs.service. Dec 13 01:57:10.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:10.991000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:10.992051 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:57:10.992258 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 01:57:10.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:57:10.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:10.993332 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:57:10.993490 systemd[1]: Finished modprobe@drm.service. Dec 13 01:57:10.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:10.994000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:10.994519 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:57:10.994716 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 01:57:10.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:10.995000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:10.995827 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 01:57:10.996014 systemd[1]: Finished modprobe@fuse.service. Dec 13 01:57:10.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:57:10.996000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:10.997177 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:57:10.997323 systemd[1]: Finished modprobe@loop.service. Dec 13 01:57:10.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:10.998000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:10.998562 systemd[1]: Finished systemd-modules-load.service. Dec 13 01:57:10.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:10.999757 systemd[1]: Finished systemd-network-generator.service. Dec 13 01:57:11.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:11.000964 systemd[1]: Finished systemd-remount-fs.service. Dec 13 01:57:11.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:11.002404 systemd[1]: Reached target network-pre.target. Dec 13 01:57:11.004459 systemd[1]: Mounting sys-fs-fuse-connections.mount... 
Dec 13 01:57:11.006362 systemd[1]: Mounting sys-kernel-config.mount... Dec 13 01:57:11.007170 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 01:57:11.008762 systemd[1]: Starting systemd-hwdb-update.service... Dec 13 01:57:11.010752 systemd[1]: Starting systemd-journal-flush.service... Dec 13 01:57:11.014869 systemd-journald[998]: Time spent on flushing to /var/log/journal/3dac2ff4d6b043c78a91178ff01c2c2c is 20.264ms for 1139 entries. Dec 13 01:57:11.014869 systemd-journald[998]: System Journal (/var/log/journal/3dac2ff4d6b043c78a91178ff01c2c2c) is 8.0M, max 195.6M, 187.6M free. Dec 13 01:57:11.043868 systemd-journald[998]: Received client request to flush runtime journal. Dec 13 01:57:11.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:11.033000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:11.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:11.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:11.011842 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:57:11.012814 systemd[1]: Starting systemd-random-seed.service... 
Dec 13 01:57:11.013862 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 01:57:11.014842 systemd[1]: Starting systemd-sysctl.service... Dec 13 01:57:11.017997 systemd[1]: Starting systemd-sysusers.service... Dec 13 01:57:11.020640 systemd[1]: Mounted sys-fs-fuse-connections.mount. Dec 13 01:57:11.021734 systemd[1]: Mounted sys-kernel-config.mount. Dec 13 01:57:11.024043 systemd[1]: Finished systemd-random-seed.service. Dec 13 01:57:11.025286 systemd[1]: Reached target first-boot-complete.target. Dec 13 01:57:11.032374 systemd[1]: Finished systemd-sysusers.service. Dec 13 01:57:11.034699 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 01:57:11.036362 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 01:57:11.039199 systemd[1]: Finished systemd-sysctl.service. Dec 13 01:57:11.040964 systemd[1]: Starting systemd-udev-settle.service... Dec 13 01:57:11.044669 systemd[1]: Finished systemd-journal-flush.service. Dec 13 01:57:11.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:11.048151 udevadm[1023]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 01:57:11.051052 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 01:57:11.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:11.446996 systemd[1]: Finished systemd-hwdb-update.service. 
Dec 13 01:57:11.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:11.448000 audit: BPF prog-id=18 op=LOAD Dec 13 01:57:11.448000 audit: BPF prog-id=19 op=LOAD Dec 13 01:57:11.448000 audit: BPF prog-id=7 op=UNLOAD Dec 13 01:57:11.448000 audit: BPF prog-id=8 op=UNLOAD Dec 13 01:57:11.449333 systemd[1]: Starting systemd-udevd.service... Dec 13 01:57:11.463920 systemd-udevd[1025]: Using default interface naming scheme 'v252'. Dec 13 01:57:11.475903 systemd[1]: Started systemd-udevd.service. Dec 13 01:57:11.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:11.477000 audit: BPF prog-id=20 op=LOAD Dec 13 01:57:11.479001 systemd[1]: Starting systemd-networkd.service... Dec 13 01:57:11.488000 audit: BPF prog-id=21 op=LOAD Dec 13 01:57:11.488000 audit: BPF prog-id=22 op=LOAD Dec 13 01:57:11.488000 audit: BPF prog-id=23 op=LOAD Dec 13 01:57:11.489287 systemd[1]: Starting systemd-userdbd.service... Dec 13 01:57:11.496498 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Dec 13 01:57:11.517800 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 01:57:11.519398 systemd[1]: Started systemd-userdbd.service. Dec 13 01:57:11.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:57:11.536217 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Dec 13 01:57:11.552186 kernel: ACPI: button: Power Button [PWRF] Dec 13 01:57:11.561494 systemd-networkd[1035]: lo: Link UP Dec 13 01:57:11.561504 systemd-networkd[1035]: lo: Gained carrier Dec 13 01:57:11.561840 systemd-networkd[1035]: Enumeration completed Dec 13 01:57:11.561922 systemd[1]: Started systemd-networkd.service. Dec 13 01:57:11.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:11.563141 systemd-networkd[1035]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:57:11.564065 systemd-networkd[1035]: eth0: Link UP Dec 13 01:57:11.564076 systemd-networkd[1035]: eth0: Gained carrier Dec 13 01:57:11.551000 audit[1033]: AVC avc: denied { confidentiality } for pid=1033 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 01:57:11.551000 audit[1033]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55fded79d810 a1=337fc a2=7fc658be4bc5 a3=5 items=110 ppid=1025 pid=1033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:57:11.551000 audit: CWD cwd="/" Dec 13 01:57:11.551000 audit: PATH item=0 name=(null) inode=50 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=1 name=(null) inode=1830 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 
01:57:11.551000 audit: PATH item=2 name=(null) inode=1830 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=3 name=(null) inode=1831 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=4 name=(null) inode=1830 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=5 name=(null) inode=1832 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=6 name=(null) inode=1830 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=7 name=(null) inode=1833 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=8 name=(null) inode=1833 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=9 name=(null) inode=1834 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=10 name=(null) inode=1833 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=11 name=(null) inode=1835 
dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=12 name=(null) inode=1833 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=13 name=(null) inode=1836 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=14 name=(null) inode=1833 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=15 name=(null) inode=1837 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=16 name=(null) inode=1833 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=17 name=(null) inode=1838 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=18 name=(null) inode=1830 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=19 name=(null) inode=1839 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=20 name=(null) inode=1839 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=21 name=(null) inode=1840 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=22 name=(null) inode=1839 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=23 name=(null) inode=1841 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=24 name=(null) inode=1839 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=25 name=(null) inode=1842 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=26 name=(null) inode=1839 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=27 name=(null) inode=1843 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=28 name=(null) inode=1839 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=29 name=(null) inode=1844 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=30 name=(null) inode=1830 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=31 name=(null) inode=1845 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=32 name=(null) inode=1845 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=33 name=(null) inode=1846 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=34 name=(null) inode=1845 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=35 name=(null) inode=1847 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=36 name=(null) inode=1845 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=37 name=(null) inode=1848 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=38 name=(null) inode=1845 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 
Dec 13 01:57:11.551000 audit: PATH item=39 name=(null) inode=1849 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=40 name=(null) inode=1845 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=41 name=(null) inode=1850 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=42 name=(null) inode=1830 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=43 name=(null) inode=1851 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=44 name=(null) inode=1851 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=45 name=(null) inode=1852 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=46 name=(null) inode=1851 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=47 name=(null) inode=1853 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=48 
name=(null) inode=1851 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=49 name=(null) inode=1854 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=50 name=(null) inode=1851 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=51 name=(null) inode=1855 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=52 name=(null) inode=1851 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=53 name=(null) inode=1856 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=54 name=(null) inode=50 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=55 name=(null) inode=1857 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=56 name=(null) inode=1857 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=57 name=(null) inode=1858 dev=00:0b mode=0100640 ouid=0 
ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=58 name=(null) inode=1857 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=59 name=(null) inode=1859 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=60 name=(null) inode=1857 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=61 name=(null) inode=1860 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=62 name=(null) inode=1860 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=63 name=(null) inode=1861 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=64 name=(null) inode=1860 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=65 name=(null) inode=1862 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=66 name=(null) inode=1860 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=67 name=(null) inode=1863 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=68 name=(null) inode=1860 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=69 name=(null) inode=1864 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=70 name=(null) inode=1860 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=71 name=(null) inode=1865 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=72 name=(null) inode=1857 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=73 name=(null) inode=1866 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=74 name=(null) inode=1866 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=75 name=(null) inode=1867 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=76 name=(null) inode=1866 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=77 name=(null) inode=1868 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=78 name=(null) inode=1866 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=79 name=(null) inode=1869 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=80 name=(null) inode=1866 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=81 name=(null) inode=1870 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=82 name=(null) inode=1866 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=83 name=(null) inode=1871 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=84 name=(null) inode=1857 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 
audit: PATH item=85 name=(null) inode=1872 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=86 name=(null) inode=1872 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=87 name=(null) inode=1873 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=88 name=(null) inode=1872 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=89 name=(null) inode=1874 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=90 name=(null) inode=1872 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=91 name=(null) inode=1875 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=92 name=(null) inode=1872 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=93 name=(null) inode=1876 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=94 name=(null) inode=1872 
dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=95 name=(null) inode=1877 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=96 name=(null) inode=1857 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=97 name=(null) inode=1878 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=98 name=(null) inode=1878 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=99 name=(null) inode=1879 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=100 name=(null) inode=1878 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=101 name=(null) inode=1880 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=102 name=(null) inode=1878 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=103 name=(null) inode=1881 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=104 name=(null) inode=1878 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=105 name=(null) inode=1882 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=106 name=(null) inode=1878 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=107 name=(null) inode=1883 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PATH item=109 name=(null) inode=1884 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:11.551000 audit: PROCTITLE proctitle="(udev-worker)" Dec 13 01:57:11.577275 systemd-networkd[1035]: eth0: DHCPv4 address 10.0.0.123/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 01:57:11.584774 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Dec 13 01:57:11.587607 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 13 01:57:11.587720 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Dec 13 01:57:11.587830 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 13 01:57:11.591188 kernel: input: ImExPS/2 Generic Explorer 
Mouse as /devices/platform/i8042/serio1/input/input3 Dec 13 01:57:11.595186 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 01:57:11.644670 kernel: kvm: Nested Virtualization enabled Dec 13 01:57:11.644755 kernel: SVM: kvm: Nested Paging enabled Dec 13 01:57:11.644770 kernel: SVM: Virtual VMLOAD VMSAVE supported Dec 13 01:57:11.644863 kernel: SVM: Virtual GIF supported Dec 13 01:57:11.662192 kernel: EDAC MC: Ver: 3.0.0 Dec 13 01:57:11.687504 systemd[1]: Finished systemd-udev-settle.service. Dec 13 01:57:11.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:11.689535 systemd[1]: Starting lvm2-activation-early.service... Dec 13 01:57:11.696644 lvm[1062]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:57:11.728091 systemd[1]: Finished lvm2-activation-early.service. Dec 13 01:57:11.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:11.729210 systemd[1]: Reached target cryptsetup.target. Dec 13 01:57:11.731094 systemd[1]: Starting lvm2-activation.service... Dec 13 01:57:11.734476 lvm[1063]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:57:11.757933 systemd[1]: Finished lvm2-activation.service. Dec 13 01:57:11.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:11.763141 systemd[1]: Reached target local-fs-pre.target. 
Dec 13 01:57:11.764068 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 01:57:11.764093 systemd[1]: Reached target local-fs.target. Dec 13 01:57:11.764931 systemd[1]: Reached target machines.target. Dec 13 01:57:11.766728 systemd[1]: Starting ldconfig.service... Dec 13 01:57:11.767768 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 01:57:11.767811 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 01:57:11.768640 systemd[1]: Starting systemd-boot-update.service... Dec 13 01:57:11.770416 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 01:57:11.772418 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 01:57:11.774858 systemd[1]: Starting systemd-sysext.service... Dec 13 01:57:11.775970 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1065 (bootctl) Dec 13 01:57:11.776801 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 01:57:11.780687 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 01:57:11.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:11.785740 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 01:57:11.790621 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 01:57:11.790822 systemd[1]: Unmounted usr-share-oem.mount. 
Dec 13 01:57:11.799246 kernel: loop0: detected capacity change from 0 to 205544 Dec 13 01:57:11.809313 systemd-fsck[1073]: fsck.fat 4.2 (2021-01-31) Dec 13 01:57:11.809313 systemd-fsck[1073]: /dev/vda1: 790 files, 119311/258078 clusters Dec 13 01:57:11.813284 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 01:57:11.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:11.816623 systemd[1]: Mounting boot.mount... Dec 13 01:57:12.320720 systemd[1]: Mounted boot.mount. Dec 13 01:57:12.331894 systemd[1]: Finished systemd-boot-update.service. Dec 13 01:57:12.339198 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 01:57:12.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:12.341537 kernel: kauditd_printk_skb: 222 callbacks suppressed Dec 13 01:57:12.341592 kernel: audit: type=1130 audit(1734055032.340:149): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:12.356181 kernel: loop1: detected capacity change from 0 to 205544 Dec 13 01:57:12.368107 (sd-sysext)[1078]: Using extensions 'kubernetes'. Dec 13 01:57:12.368527 (sd-sysext)[1078]: Merged extensions into '/usr'. Dec 13 01:57:12.382880 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:57:12.384290 systemd[1]: Mounting usr-share-oem.mount... 
Dec 13 01:57:12.385329 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 01:57:12.386797 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 01:57:12.388628 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 01:57:12.401961 systemd[1]: Starting modprobe@loop.service... Dec 13 01:57:12.402761 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 01:57:12.402920 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 01:57:12.403052 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:57:12.405500 systemd[1]: Mounted usr-share-oem.mount. Dec 13 01:57:12.406529 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:57:12.406677 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 01:57:12.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:12.407833 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:57:12.407938 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 01:57:12.407000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:12.414464 kernel: audit: type=1130 audit(1734055032.407:150): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:57:12.415490 kernel: audit: type=1131 audit(1734055032.407:151): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:12.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:12.415776 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:57:12.415878 systemd[1]: Finished modprobe@loop.service. Dec 13 01:57:12.415000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:12.422551 kernel: audit: type=1130 audit(1734055032.415:152): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:12.422605 kernel: audit: type=1131 audit(1734055032.415:153): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:12.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:12.423815 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:57:12.423914 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. 
Dec 13 01:57:12.424788 systemd[1]: Finished systemd-sysext.service. Dec 13 01:57:12.423000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:12.427000 ldconfig[1064]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 01:57:12.427178 kernel: audit: type=1130 audit(1734055032.423:154): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:12.427199 kernel: audit: type=1131 audit(1734055032.423:155): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:12.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:12.431689 systemd[1]: Starting ensure-sysext.service... Dec 13 01:57:12.434180 kernel: audit: type=1130 audit(1734055032.430:156): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:12.435208 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 01:57:12.438919 systemd[1]: Reloading. Dec 13 01:57:12.443815 systemd-tmpfiles[1085]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 01:57:12.444770 systemd-tmpfiles[1085]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Dec 13 01:57:12.446056 systemd-tmpfiles[1085]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 01:57:12.491584 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2024-12-13T01:57:12Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 01:57:12.491927 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2024-12-13T01:57:12Z" level=info msg="torcx already run" Dec 13 01:57:12.675897 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 01:57:12.675912 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 01:57:12.692550 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Dec 13 01:57:12.741000 audit: BPF prog-id=24 op=LOAD Dec 13 01:57:12.741000 audit: BPF prog-id=21 op=UNLOAD Dec 13 01:57:12.743478 kernel: audit: type=1334 audit(1734055032.741:157): prog-id=24 op=LOAD Dec 13 01:57:12.743520 kernel: audit: type=1334 audit(1734055032.741:158): prog-id=21 op=UNLOAD Dec 13 01:57:12.742000 audit: BPF prog-id=25 op=LOAD Dec 13 01:57:12.743000 audit: BPF prog-id=26 op=LOAD Dec 13 01:57:12.743000 audit: BPF prog-id=22 op=UNLOAD Dec 13 01:57:12.743000 audit: BPF prog-id=23 op=UNLOAD Dec 13 01:57:12.744000 audit: BPF prog-id=27 op=LOAD Dec 13 01:57:12.744000 audit: BPF prog-id=15 op=UNLOAD Dec 13 01:57:12.744000 audit: BPF prog-id=28 op=LOAD Dec 13 01:57:12.744000 audit: BPF prog-id=29 op=LOAD Dec 13 01:57:12.744000 audit: BPF prog-id=16 op=UNLOAD Dec 13 01:57:12.744000 audit: BPF prog-id=17 op=UNLOAD Dec 13 01:57:12.745000 audit: BPF prog-id=30 op=LOAD Dec 13 01:57:12.745000 audit: BPF prog-id=31 op=LOAD Dec 13 01:57:12.745000 audit: BPF prog-id=18 op=UNLOAD Dec 13 01:57:12.745000 audit: BPF prog-id=19 op=UNLOAD Dec 13 01:57:12.746000 audit: BPF prog-id=32 op=LOAD Dec 13 01:57:12.746000 audit: BPF prog-id=20 op=UNLOAD Dec 13 01:57:12.748865 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 01:57:12.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:12.752715 systemd[1]: Starting audit-rules.service... Dec 13 01:57:12.754148 systemd[1]: Starting clean-ca-certificates.service... Dec 13 01:57:12.755895 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 01:57:12.757000 audit: BPF prog-id=33 op=LOAD Dec 13 01:57:12.758045 systemd[1]: Starting systemd-resolved.service... Dec 13 01:57:12.759000 audit: BPF prog-id=34 op=LOAD Dec 13 01:57:12.759997 systemd[1]: Starting systemd-timesyncd.service... 
Dec 13 01:57:12.761507 systemd[1]: Starting systemd-update-utmp.service... Dec 13 01:57:12.765434 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:57:12.765629 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 01:57:12.766596 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 01:57:12.768272 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 01:57:12.769822 systemd[1]: Starting modprobe@loop.service... Dec 13 01:57:12.770677 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 01:57:12.770773 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 01:57:12.770871 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:57:12.771685 systemd[1]: Finished clean-ca-certificates.service. Dec 13 01:57:12.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:12.773000 audit[1151]: SYSTEM_BOOT pid=1151 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 01:57:12.773283 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:57:12.773374 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 01:57:12.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:57:12.774000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:12.774544 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:57:12.774635 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 01:57:12.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:12.775000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:12.775914 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:57:12.776014 systemd[1]: Finished modprobe@loop.service. Dec 13 01:57:12.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:12.776000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:12.778773 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:57:12.778927 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 01:57:12.779038 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Dec 13 01:57:12.781714 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:57:12.781903 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 01:57:12.783320 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 01:57:12.786021 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 01:57:12.787589 systemd[1]: Starting modprobe@loop.service... Dec 13 01:57:12.788437 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 01:57:12.788540 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 01:57:12.788623 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:57:12.788685 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:57:12.789537 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 01:57:12.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:12.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:12.790929 systemd[1]: Finished systemd-update-utmp.service. Dec 13 01:57:12.791023 systemd-networkd[1035]: eth0: Gained IPv6LL Dec 13 01:57:12.792105 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Dec 13 01:57:12.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:12.793000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:12.793000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 01:57:12.793000 audit[1172]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff8177b840 a2=420 a3=0 items=0 ppid=1146 pid=1172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:57:12.793000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 01:57:12.792205 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 01:57:12.794061 augenrules[1172]: No rules Dec 13 01:57:12.793427 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:57:12.793512 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 01:57:12.794689 systemd[1]: Finished audit-rules.service. Dec 13 01:57:12.795713 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:57:12.795799 systemd[1]: Finished modprobe@loop.service. Dec 13 01:57:12.798589 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:57:12.798777 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 01:57:12.799717 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 01:57:12.801168 systemd[1]: Starting modprobe@drm.service... 
Dec 13 01:57:12.802605 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 01:57:12.804134 systemd[1]: Starting modprobe@loop.service... Dec 13 01:57:12.804933 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 01:57:12.805067 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 01:57:12.806073 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 01:57:12.807181 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:57:12.807360 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:57:12.808482 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:57:12.808605 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 01:57:12.810034 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:57:12.810138 systemd[1]: Finished modprobe@drm.service. Dec 13 01:57:12.811562 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:57:12.811670 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 01:57:12.812934 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:57:12.813041 systemd[1]: Finished modprobe@loop.service. Dec 13 01:57:12.814227 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 01:57:12.815846 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:57:12.815933 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 01:57:12.816858 systemd[1]: Finished ensure-sysext.service. Dec 13 01:57:12.856972 systemd[1]: Started systemd-timesyncd.service. 
Dec 13 01:57:13.493372 systemd-timesyncd[1150]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 13 01:57:13.493410 systemd-timesyncd[1150]: Initial clock synchronization to Fri 2024-12-13 01:57:13.493319 UTC. Dec 13 01:57:13.493918 systemd-resolved[1149]: Positive Trust Anchors: Dec 13 01:57:13.493931 systemd-resolved[1149]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:57:13.493960 systemd-resolved[1149]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 01:57:13.494018 systemd[1]: Reached target time-set.target. Dec 13 01:57:13.500713 systemd-resolved[1149]: Defaulting to hostname 'linux'. Dec 13 01:57:13.501919 systemd[1]: Started systemd-resolved.service. Dec 13 01:57:13.502841 systemd[1]: Reached target network.target. Dec 13 01:57:13.503685 systemd[1]: Reached target network-online.target. Dec 13 01:57:13.504588 systemd[1]: Reached target nss-lookup.target. Dec 13 01:57:13.607537 systemd[1]: Finished ldconfig.service. Dec 13 01:57:13.609613 systemd[1]: Starting systemd-update-done.service... Dec 13 01:57:13.683401 systemd[1]: Finished systemd-update-done.service. Dec 13 01:57:13.703837 systemd[1]: Reached target sysinit.target. Dec 13 01:57:13.704761 systemd[1]: Started motdgen.path. Dec 13 01:57:13.705541 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 01:57:13.706818 systemd[1]: Started logrotate.timer. Dec 13 01:57:13.707696 systemd[1]: Started mdadm.timer. Dec 13 01:57:13.708433 systemd[1]: Started systemd-tmpfiles-clean.timer. 
Dec 13 01:57:13.709358 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 01:57:13.709384 systemd[1]: Reached target paths.target. Dec 13 01:57:13.710219 systemd[1]: Reached target timers.target. Dec 13 01:57:13.711338 systemd[1]: Listening on dbus.socket. Dec 13 01:57:13.713125 systemd[1]: Starting docker.socket... Dec 13 01:57:13.715743 systemd[1]: Listening on sshd.socket. Dec 13 01:57:13.716629 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 01:57:13.716950 systemd[1]: Listening on docker.socket. Dec 13 01:57:13.717813 systemd[1]: Reached target sockets.target. Dec 13 01:57:13.718637 systemd[1]: Reached target basic.target. Dec 13 01:57:13.719519 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 01:57:13.719542 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 01:57:13.720335 systemd[1]: Starting containerd.service... Dec 13 01:57:13.721893 systemd[1]: Starting dbus.service... Dec 13 01:57:13.723377 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 01:57:13.725179 systemd[1]: Starting extend-filesystems.service... Dec 13 01:57:13.726528 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 01:57:13.727260 jq[1190]: false Dec 13 01:57:13.727787 systemd[1]: Starting kubelet.service... Dec 13 01:57:13.729611 systemd[1]: Starting motdgen.service... Dec 13 01:57:13.731283 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 01:57:13.733614 systemd[1]: Starting sshd-keygen.service... 
Dec 13 01:57:13.735880 extend-filesystems[1191]: Found loop1 Dec 13 01:57:13.737086 extend-filesystems[1191]: Found sr0 Dec 13 01:57:13.737086 extend-filesystems[1191]: Found vda Dec 13 01:57:13.737086 extend-filesystems[1191]: Found vda1 Dec 13 01:57:13.737086 extend-filesystems[1191]: Found vda2 Dec 13 01:57:13.737086 extend-filesystems[1191]: Found vda3 Dec 13 01:57:13.737086 extend-filesystems[1191]: Found usr Dec 13 01:57:13.737086 extend-filesystems[1191]: Found vda4 Dec 13 01:57:13.737086 extend-filesystems[1191]: Found vda6 Dec 13 01:57:13.737086 extend-filesystems[1191]: Found vda7 Dec 13 01:57:13.737086 extend-filesystems[1191]: Found vda9 Dec 13 01:57:13.737086 extend-filesystems[1191]: Checking size of /dev/vda9 Dec 13 01:57:13.737376 dbus-daemon[1189]: [system] SELinux support is enabled Dec 13 01:57:13.737933 systemd[1]: Starting systemd-logind.service... Dec 13 01:57:13.739614 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 01:57:13.739691 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 01:57:13.759185 jq[1211]: true Dec 13 01:57:13.740002 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 01:57:13.741921 systemd[1]: Starting update-engine.service... Dec 13 01:57:13.744867 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 01:57:13.746455 systemd[1]: Started dbus.service. Dec 13 01:57:13.750596 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 01:57:13.750741 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Dec 13 01:57:13.751419 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 01:57:13.751542 systemd[1]: Finished motdgen.service. 
Dec 13 01:57:13.752527 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 01:57:13.752655 systemd[1]: Finished ssh-key-proc-cmdline.service. Dec 13 01:57:13.759247 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 01:57:13.759268 systemd[1]: Reached target system-config.target. Dec 13 01:57:13.760326 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 01:57:13.760342 systemd[1]: Reached target user-config.target. Dec 13 01:57:13.761962 jq[1214]: true Dec 13 01:57:13.774160 update_engine[1209]: I1213 01:57:13.773356 1209 main.cc:92] Flatcar Update Engine starting Dec 13 01:57:13.777987 extend-filesystems[1191]: Resized partition /dev/vda9 Dec 13 01:57:13.784362 env[1215]: time="2024-12-13T01:57:13.783144992Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 01:57:13.804695 env[1215]: time="2024-12-13T01:57:13.804649108Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 01:57:13.804804 env[1215]: time="2024-12-13T01:57:13.804779563Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:57:13.805747 env[1215]: time="2024-12-13T01:57:13.805708645Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:57:13.805823 env[1215]: time="2024-12-13T01:57:13.805804936Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Dec 13 01:57:13.806128 env[1215]: time="2024-12-13T01:57:13.806107333Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:57:13.806208 env[1215]: time="2024-12-13T01:57:13.806190008Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 01:57:13.806289 env[1215]: time="2024-12-13T01:57:13.806269828Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 01:57:13.806363 env[1215]: time="2024-12-13T01:57:13.806344739Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 01:57:13.806492 env[1215]: time="2024-12-13T01:57:13.806475223Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:57:13.806778 env[1215]: time="2024-12-13T01:57:13.806761270Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:57:13.806962 env[1215]: time="2024-12-13T01:57:13.806942640Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:57:13.807047 env[1215]: time="2024-12-13T01:57:13.807029132Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Dec 13 01:57:13.807155 env[1215]: time="2024-12-13T01:57:13.807136594Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 01:57:13.807233 env[1215]: time="2024-12-13T01:57:13.807214059Z" level=info msg="metadata content store policy set" policy=shared Dec 13 01:57:13.846457 extend-filesystems[1229]: resize2fs 1.46.5 (30-Dec-2021) Dec 13 01:57:13.846696 systemd[1]: Started update-engine.service. Dec 13 01:57:13.848865 update_engine[1209]: I1213 01:57:13.847995 1209 update_check_scheduler.cc:74] Next update check in 3m24s Dec 13 01:57:13.850223 systemd[1]: Started locksmithd.service. Dec 13 01:57:13.854059 systemd-logind[1204]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 01:57:13.854081 systemd-logind[1204]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 01:57:13.855148 systemd-logind[1204]: New seat seat0. Dec 13 01:57:13.860409 systemd[1]: Started systemd-logind.service. Dec 13 01:57:13.875003 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Dec 13 01:57:13.891555 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 01:57:13.897618 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 01:57:13.907991 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Dec 13 01:57:13.921742 locksmithd[1245]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 01:57:13.929766 extend-filesystems[1229]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 01:57:13.929766 extend-filesystems[1229]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 01:57:13.929766 extend-filesystems[1229]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
Dec 13 01:57:13.935813 extend-filesystems[1191]: Resized filesystem in /dev/vda9 Dec 13 01:57:13.936869 bash[1243]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:57:13.936941 env[1215]: time="2024-12-13T01:57:13.930040266Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 01:57:13.936941 env[1215]: time="2024-12-13T01:57:13.930103004Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 01:57:13.936941 env[1215]: time="2024-12-13T01:57:13.930117771Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 01:57:13.936941 env[1215]: time="2024-12-13T01:57:13.930149070Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 01:57:13.936941 env[1215]: time="2024-12-13T01:57:13.930207450Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 01:57:13.936941 env[1215]: time="2024-12-13T01:57:13.930221426Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 01:57:13.936941 env[1215]: time="2024-12-13T01:57:13.930244369Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 01:57:13.936941 env[1215]: time="2024-12-13T01:57:13.930256942Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 01:57:13.936941 env[1215]: time="2024-12-13T01:57:13.930268384Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Dec 13 01:57:13.936941 env[1215]: time="2024-12-13T01:57:13.930280156Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Dec 13 01:57:13.936941 env[1215]: time="2024-12-13T01:57:13.930292930Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 01:57:13.936941 env[1215]: time="2024-12-13T01:57:13.930303780Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 01:57:13.936941 env[1215]: time="2024-12-13T01:57:13.930590929Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 01:57:13.936941 env[1215]: time="2024-12-13T01:57:13.930681679Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 01:57:13.930523 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 01:57:13.937297 env[1215]: time="2024-12-13T01:57:13.930954140Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 01:57:13.937297 env[1215]: time="2024-12-13T01:57:13.930995898Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 01:57:13.937297 env[1215]: time="2024-12-13T01:57:13.931008753Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 01:57:13.937297 env[1215]: time="2024-12-13T01:57:13.931070148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 01:57:13.937297 env[1215]: time="2024-12-13T01:57:13.931081790Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 01:57:13.937297 env[1215]: time="2024-12-13T01:57:13.931092810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 01:57:13.937297 env[1215]: time="2024-12-13T01:57:13.931102879Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Dec 13 01:57:13.937297 env[1215]: time="2024-12-13T01:57:13.931113719Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 01:57:13.937297 env[1215]: time="2024-12-13T01:57:13.931143165Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 01:57:13.937297 env[1215]: time="2024-12-13T01:57:13.931154396Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 01:57:13.937297 env[1215]: time="2024-12-13T01:57:13.931165206Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 01:57:13.937297 env[1215]: time="2024-12-13T01:57:13.931177088Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 01:57:13.937297 env[1215]: time="2024-12-13T01:57:13.931334293Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 01:57:13.937297 env[1215]: time="2024-12-13T01:57:13.931364720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 01:57:13.937297 env[1215]: time="2024-12-13T01:57:13.931376783Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 01:57:13.930698 systemd[1]: Finished extend-filesystems.service. Dec 13 01:57:13.937613 env[1215]: time="2024-12-13T01:57:13.931392152Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 01:57:13.937613 env[1215]: time="2024-12-13T01:57:13.931408031Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 01:57:13.937613 env[1215]: time="2024-12-13T01:57:13.931438368Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 01:57:13.937613 env[1215]: time="2024-12-13T01:57:13.931459448Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 01:57:13.937613 env[1215]: time="2024-12-13T01:57:13.931491187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 01:57:13.934029 systemd[1]: Started containerd.service. Dec 13 01:57:13.937752 env[1215]: time="2024-12-13T01:57:13.931788475Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s 
EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 01:57:13.937752 env[1215]: time="2024-12-13T01:57:13.931849940Z" level=info msg="Connect containerd service" Dec 13 01:57:13.937752 env[1215]: time="2024-12-13T01:57:13.931881419Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 01:57:13.937752 env[1215]: time="2024-12-13T01:57:13.932413668Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:57:13.937752 env[1215]: time="2024-12-13T01:57:13.932486514Z" level=info msg="Start subscribing containerd event" Dec 13 01:57:13.937752 env[1215]: time="2024-12-13T01:57:13.932530527Z" level=info msg="Start recovering state" Dec 13 01:57:13.937752 env[1215]: time="2024-12-13T01:57:13.932582635Z" level=info msg="Start event monitor" Dec 13 01:57:13.937752 env[1215]: time="2024-12-13T01:57:13.932592102Z" level=info msg="Start snapshots syncer" Dec 13 01:57:13.937752 env[1215]: time="2024-12-13T01:57:13.932598965Z" level=info msg="Start cni network conf syncer for default" Dec 13 
01:57:13.937752 env[1215]: time="2024-12-13T01:57:13.932604906Z" level=info msg="Start streaming server" Dec 13 01:57:13.937752 env[1215]: time="2024-12-13T01:57:13.932882267Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 01:57:13.937752 env[1215]: time="2024-12-13T01:57:13.932914948Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 01:57:13.937752 env[1215]: time="2024-12-13T01:57:13.932960934Z" level=info msg="containerd successfully booted in 0.150448s" Dec 13 01:57:13.938445 systemd[1]: Finished update-ssh-keys-after-ignition.service. Dec 13 01:57:14.349361 sshd_keygen[1210]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 01:57:14.369937 systemd[1]: Finished sshd-keygen.service. Dec 13 01:57:14.372433 systemd[1]: Starting issuegen.service... Dec 13 01:57:14.378165 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 01:57:14.378310 systemd[1]: Finished issuegen.service. Dec 13 01:57:14.380511 systemd[1]: Starting systemd-user-sessions.service... Dec 13 01:57:14.386054 systemd[1]: Finished systemd-user-sessions.service. Dec 13 01:57:14.388397 systemd[1]: Started getty@tty1.service. Dec 13 01:57:14.390424 systemd[1]: Started serial-getty@ttyS0.service. Dec 13 01:57:14.391513 systemd[1]: Reached target getty.target. Dec 13 01:57:14.454816 systemd[1]: Started kubelet.service. Dec 13 01:57:14.461188 systemd[1]: Reached target multi-user.target. Dec 13 01:57:14.463156 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 01:57:14.469334 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 01:57:14.469526 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 01:57:14.470709 systemd[1]: Startup finished in 627ms (kernel) + 4.909s (initrd) + 6.261s (userspace) = 11.798s. 
Dec 13 01:57:14.840719 kubelet[1266]: E1213 01:57:14.840561 1266 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:57:14.842230 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:57:14.842371 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:57:22.245591 systemd[1]: Created slice system-sshd.slice. Dec 13 01:57:22.246812 systemd[1]: Started sshd@0-10.0.0.123:22-10.0.0.1:55230.service. Dec 13 01:57:22.280193 sshd[1275]: Accepted publickey for core from 10.0.0.1 port 55230 ssh2: RSA SHA256:AszxYrj4gj258y44bVsPwwDC94LR0fHfgjHsFkIPyiw Dec 13 01:57:22.281698 sshd[1275]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:57:22.289228 systemd[1]: Created slice user-500.slice. Dec 13 01:57:22.290244 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 01:57:22.291993 systemd-logind[1204]: New session 1 of user core. Dec 13 01:57:22.297899 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 01:57:22.299218 systemd[1]: Starting user@500.service... Dec 13 01:57:22.301779 (systemd)[1278]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:57:22.367378 systemd[1278]: Queued start job for default target default.target. Dec 13 01:57:22.367804 systemd[1278]: Reached target paths.target. Dec 13 01:57:22.367824 systemd[1278]: Reached target sockets.target. Dec 13 01:57:22.367836 systemd[1278]: Reached target timers.target. Dec 13 01:57:22.367846 systemd[1278]: Reached target basic.target. Dec 13 01:57:22.367880 systemd[1278]: Reached target default.target. Dec 13 01:57:22.367901 systemd[1278]: Startup finished in 61ms. 
Dec 13 01:57:22.368021 systemd[1]: Started user@500.service. Dec 13 01:57:22.369030 systemd[1]: Started session-1.scope. Dec 13 01:57:22.419741 systemd[1]: Started sshd@1-10.0.0.123:22-10.0.0.1:55242.service. Dec 13 01:57:22.451877 sshd[1287]: Accepted publickey for core from 10.0.0.1 port 55242 ssh2: RSA SHA256:AszxYrj4gj258y44bVsPwwDC94LR0fHfgjHsFkIPyiw Dec 13 01:57:22.453236 sshd[1287]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:57:22.456822 systemd-logind[1204]: New session 2 of user core. Dec 13 01:57:22.457995 systemd[1]: Started session-2.scope. Dec 13 01:57:22.510451 sshd[1287]: pam_unix(sshd:session): session closed for user core Dec 13 01:57:22.513128 systemd[1]: sshd@1-10.0.0.123:22-10.0.0.1:55242.service: Deactivated successfully. Dec 13 01:57:22.513638 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 01:57:22.514225 systemd-logind[1204]: Session 2 logged out. Waiting for processes to exit. Dec 13 01:57:22.515304 systemd[1]: Started sshd@2-10.0.0.123:22-10.0.0.1:55258.service. Dec 13 01:57:22.516142 systemd-logind[1204]: Removed session 2. Dec 13 01:57:22.545266 sshd[1293]: Accepted publickey for core from 10.0.0.1 port 55258 ssh2: RSA SHA256:AszxYrj4gj258y44bVsPwwDC94LR0fHfgjHsFkIPyiw Dec 13 01:57:22.546456 sshd[1293]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:57:22.549791 systemd-logind[1204]: New session 3 of user core. Dec 13 01:57:22.550526 systemd[1]: Started session-3.scope. Dec 13 01:57:22.599645 sshd[1293]: pam_unix(sshd:session): session closed for user core Dec 13 01:57:22.602010 systemd[1]: sshd@2-10.0.0.123:22-10.0.0.1:55258.service: Deactivated successfully. Dec 13 01:57:22.602476 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 01:57:22.602926 systemd-logind[1204]: Session 3 logged out. Waiting for processes to exit. Dec 13 01:57:22.603763 systemd[1]: Started sshd@3-10.0.0.123:22-10.0.0.1:55264.service. 
Dec 13 01:57:22.604425 systemd-logind[1204]: Removed session 3. Dec 13 01:57:22.635298 sshd[1299]: Accepted publickey for core from 10.0.0.1 port 55264 ssh2: RSA SHA256:AszxYrj4gj258y44bVsPwwDC94LR0fHfgjHsFkIPyiw Dec 13 01:57:22.636478 sshd[1299]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:57:22.639743 systemd-logind[1204]: New session 4 of user core. Dec 13 01:57:22.640467 systemd[1]: Started session-4.scope. Dec 13 01:57:22.692359 sshd[1299]: pam_unix(sshd:session): session closed for user core Dec 13 01:57:22.695256 systemd[1]: sshd@3-10.0.0.123:22-10.0.0.1:55264.service: Deactivated successfully. Dec 13 01:57:22.695878 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 01:57:22.696413 systemd-logind[1204]: Session 4 logged out. Waiting for processes to exit. Dec 13 01:57:22.697532 systemd[1]: Started sshd@4-10.0.0.123:22-10.0.0.1:55268.service. Dec 13 01:57:22.698295 systemd-logind[1204]: Removed session 4. Dec 13 01:57:22.727258 sshd[1305]: Accepted publickey for core from 10.0.0.1 port 55268 ssh2: RSA SHA256:AszxYrj4gj258y44bVsPwwDC94LR0fHfgjHsFkIPyiw Dec 13 01:57:22.728277 sshd[1305]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:57:22.731353 systemd-logind[1204]: New session 5 of user core. Dec 13 01:57:22.732063 systemd[1]: Started session-5.scope. Dec 13 01:57:22.785912 sudo[1308]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 01:57:22.786100 sudo[1308]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 01:57:22.797336 systemd[1]: Starting coreos-metadata.service... Dec 13 01:57:22.803305 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 13 01:57:22.803482 systemd[1]: Finished coreos-metadata.service. Dec 13 01:57:23.318242 systemd[1]: Stopped kubelet.service. Dec 13 01:57:23.320448 systemd[1]: Starting kubelet.service... Dec 13 01:57:23.340233 systemd[1]: Reloading. 
Dec 13 01:57:23.412624 /usr/lib/systemd/system-generators/torcx-generator[1366]: time="2024-12-13T01:57:23Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 01:57:23.412656 /usr/lib/systemd/system-generators/torcx-generator[1366]: time="2024-12-13T01:57:23Z" level=info msg="torcx already run" Dec 13 01:57:24.223500 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 01:57:24.223520 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 01:57:24.240431 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:57:24.312754 systemd[1]: Started kubelet.service. Dec 13 01:57:24.313790 systemd[1]: Stopping kubelet.service... Dec 13 01:57:24.314030 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:57:24.314164 systemd[1]: Stopped kubelet.service. Dec 13 01:57:24.315312 systemd[1]: Starting kubelet.service... Dec 13 01:57:24.387887 systemd[1]: Started kubelet.service. Dec 13 01:57:24.423562 kubelet[1414]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:57:24.423562 kubelet[1414]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Dec 13 01:57:24.423562 kubelet[1414]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:57:24.424535 kubelet[1414]: I1213 01:57:24.424489 1414 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:57:24.575719 kubelet[1414]: I1213 01:57:24.575580 1414 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Dec 13 01:57:24.575719 kubelet[1414]: I1213 01:57:24.575613 1414 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:57:24.575901 kubelet[1414]: I1213 01:57:24.575876 1414 server.go:929] "Client rotation is on, will bootstrap in background" Dec 13 01:57:24.597721 kubelet[1414]: I1213 01:57:24.597682 1414 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:57:24.617299 kubelet[1414]: E1213 01:57:24.617266 1414 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Dec 13 01:57:24.617483 kubelet[1414]: I1213 01:57:24.617451 1414 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Dec 13 01:57:24.622249 kubelet[1414]: I1213 01:57:24.622229 1414 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 01:57:24.623157 kubelet[1414]: I1213 01:57:24.623130 1414 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 13 01:57:24.623275 kubelet[1414]: I1213 01:57:24.623235 1414 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:57:24.623439 kubelet[1414]: I1213 01:57:24.623264 1414 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.123","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicy
Options":null,"CgroupVersion":2} Dec 13 01:57:24.623439 kubelet[1414]: I1213 01:57:24.623434 1414 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:57:24.623439 kubelet[1414]: I1213 01:57:24.623443 1414 container_manager_linux.go:300] "Creating device plugin manager" Dec 13 01:57:24.623636 kubelet[1414]: I1213 01:57:24.623530 1414 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:57:24.626444 kubelet[1414]: I1213 01:57:24.626416 1414 kubelet.go:408] "Attempting to sync node with API server" Dec 13 01:57:24.626444 kubelet[1414]: I1213 01:57:24.626436 1414 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:57:24.626576 kubelet[1414]: I1213 01:57:24.626461 1414 kubelet.go:314] "Adding apiserver pod source" Dec 13 01:57:24.626576 kubelet[1414]: I1213 01:57:24.626474 1414 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:57:24.626912 kubelet[1414]: E1213 01:57:24.626879 1414 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:57:24.626912 kubelet[1414]: E1213 01:57:24.626917 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:57:24.635303 kubelet[1414]: I1213 01:57:24.635280 1414 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 01:57:24.636645 kubelet[1414]: I1213 01:57:24.636633 1414 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:57:24.638770 kubelet[1414]: W1213 01:57:24.638751 1414 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Dec 13 01:57:24.639310 kubelet[1414]: I1213 01:57:24.639292 1414 server.go:1269] "Started kubelet" Dec 13 01:57:24.639403 kubelet[1414]: I1213 01:57:24.639379 1414 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:57:24.639560 kubelet[1414]: I1213 01:57:24.639516 1414 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:57:24.639797 kubelet[1414]: I1213 01:57:24.639781 1414 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:57:24.640947 kubelet[1414]: I1213 01:57:24.640755 1414 server.go:460] "Adding debug handlers to kubelet server" Dec 13 01:57:24.642179 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Dec 13 01:57:24.642288 kubelet[1414]: I1213 01:57:24.642271 1414 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:57:24.642589 kubelet[1414]: I1213 01:57:24.642465 1414 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 01:57:24.642785 kubelet[1414]: I1213 01:57:24.642745 1414 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 13 01:57:24.642942 kubelet[1414]: I1213 01:57:24.642834 1414 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 13 01:57:24.642942 kubelet[1414]: I1213 01:57:24.642869 1414 reconciler.go:26] "Reconciler: start to sync state" Dec 13 01:57:24.643223 kubelet[1414]: E1213 01:57:24.643189 1414 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.123\" not found" Dec 13 01:57:24.643633 kubelet[1414]: I1213 01:57:24.643615 1414 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:57:24.643695 kubelet[1414]: I1213 01:57:24.643675 1414 factory.go:219] Registration of the crio container factory failed: Get 
"http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:57:24.644787 kubelet[1414]: E1213 01:57:24.644749 1414 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:57:24.645207 kubelet[1414]: I1213 01:57:24.645190 1414 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:57:24.654831 kubelet[1414]: E1213 01:57:24.654814 1414 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.123\" not found" node="10.0.0.123" Dec 13 01:57:24.655138 kubelet[1414]: I1213 01:57:24.655126 1414 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:57:24.655217 kubelet[1414]: I1213 01:57:24.655201 1414 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:57:24.655296 kubelet[1414]: I1213 01:57:24.655283 1414 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:57:24.743937 kubelet[1414]: E1213 01:57:24.743867 1414 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.123\" not found" Dec 13 01:57:24.844656 kubelet[1414]: E1213 01:57:24.844519 1414 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.123\" not found" Dec 13 01:57:24.944703 kubelet[1414]: E1213 01:57:24.944643 1414 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.123\" not found" Dec 13 01:57:25.002112 kubelet[1414]: I1213 01:57:25.002054 1414 policy_none.go:49] "None policy: Start" Dec 13 01:57:25.003025 kubelet[1414]: I1213 01:57:25.002964 1414 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:57:25.003025 kubelet[1414]: I1213 01:57:25.003025 1414 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:57:25.009258 systemd[1]: Created slice kubepods.slice. 
Dec 13 01:57:25.012843 systemd[1]: Created slice kubepods-besteffort.slice. Dec 13 01:57:25.020767 systemd[1]: Created slice kubepods-burstable.slice. Dec 13 01:57:25.021627 kubelet[1414]: I1213 01:57:25.021598 1414 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:57:25.021821 kubelet[1414]: I1213 01:57:25.021806 1414 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 01:57:25.021877 kubelet[1414]: I1213 01:57:25.021824 1414 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 01:57:25.022622 kubelet[1414]: I1213 01:57:25.022595 1414 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:57:25.023027 kubelet[1414]: E1213 01:57:25.022997 1414 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.123\" not found" Dec 13 01:57:25.038509 kubelet[1414]: E1213 01:57:25.038458 1414 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.123" not found Dec 13 01:57:25.077349 kubelet[1414]: I1213 01:57:25.077284 1414 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:57:25.078614 kubelet[1414]: I1213 01:57:25.078585 1414 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 01:57:25.078750 kubelet[1414]: I1213 01:57:25.078621 1414 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:57:25.078750 kubelet[1414]: I1213 01:57:25.078647 1414 kubelet.go:2321] "Starting kubelet main sync loop" Dec 13 01:57:25.078750 kubelet[1414]: E1213 01:57:25.078687 1414 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Dec 13 01:57:25.123135 kubelet[1414]: I1213 01:57:25.122650 1414 kubelet_node_status.go:72] "Attempting to register node" node="10.0.0.123" Dec 13 01:57:25.127024 kubelet[1414]: I1213 01:57:25.126984 1414 kubelet_node_status.go:75] "Successfully registered node" node="10.0.0.123" Dec 13 01:57:25.133954 kubelet[1414]: I1213 01:57:25.133932 1414 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Dec 13 01:57:25.134341 env[1215]: time="2024-12-13T01:57:25.134258888Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 01:57:25.134604 kubelet[1414]: I1213 01:57:25.134457 1414 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Dec 13 01:57:25.410809 sudo[1308]: pam_unix(sudo:session): session closed for user root Dec 13 01:57:25.412265 sshd[1305]: pam_unix(sshd:session): session closed for user core Dec 13 01:57:25.414511 systemd[1]: sshd@4-10.0.0.123:22-10.0.0.1:55268.service: Deactivated successfully. Dec 13 01:57:25.415402 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 01:57:25.416029 systemd-logind[1204]: Session 5 logged out. Waiting for processes to exit. Dec 13 01:57:25.416794 systemd-logind[1204]: Removed session 5. 
Dec 13 01:57:25.577549 kubelet[1414]: I1213 01:57:25.577479 1414 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 13 01:57:25.577950 kubelet[1414]: W1213 01:57:25.577757 1414 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 13 01:57:25.577950 kubelet[1414]: W1213 01:57:25.577789 1414 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 13 01:57:25.577950 kubelet[1414]: W1213 01:57:25.577760 1414 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 13 01:57:25.627220 kubelet[1414]: E1213 01:57:25.627163 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:57:25.627220 kubelet[1414]: I1213 01:57:25.627211 1414 apiserver.go:52] "Watching apiserver" Dec 13 01:57:25.635563 systemd[1]: Created slice kubepods-burstable-pod5dc25ab5_ed19_4c39_a96a_c64c0bbd1e07.slice. Dec 13 01:57:25.643475 kubelet[1414]: I1213 01:57:25.643443 1414 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 13 01:57:25.648308 systemd[1]: Created slice kubepods-besteffort-pod2f4f4474_6843_47ca_af01_fa920dd4f00d.slice. 
Dec 13 01:57:25.648842 kubelet[1414]: I1213 01:57:25.648752 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07-xtables-lock\") pod \"cilium-qxvj7\" (UID: \"5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07\") " pod="kube-system/cilium-qxvj7" Dec 13 01:57:25.648990 kubelet[1414]: I1213 01:57:25.648849 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07-cilium-config-path\") pod \"cilium-qxvj7\" (UID: \"5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07\") " pod="kube-system/cilium-qxvj7" Dec 13 01:57:25.648990 kubelet[1414]: I1213 01:57:25.648880 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bd6f\" (UniqueName: \"kubernetes.io/projected/2f4f4474-6843-47ca-af01-fa920dd4f00d-kube-api-access-6bd6f\") pod \"kube-proxy-wzljh\" (UID: \"2f4f4474-6843-47ca-af01-fa920dd4f00d\") " pod="kube-system/kube-proxy-wzljh" Dec 13 01:57:25.648990 kubelet[1414]: I1213 01:57:25.648903 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07-bpf-maps\") pod \"cilium-qxvj7\" (UID: \"5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07\") " pod="kube-system/cilium-qxvj7" Dec 13 01:57:25.648990 kubelet[1414]: I1213 01:57:25.648925 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07-etc-cni-netd\") pod \"cilium-qxvj7\" (UID: \"5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07\") " pod="kube-system/cilium-qxvj7" Dec 13 01:57:25.648990 kubelet[1414]: I1213 01:57:25.648937 1414 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07-lib-modules\") pod \"cilium-qxvj7\" (UID: \"5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07\") " pod="kube-system/cilium-qxvj7" Dec 13 01:57:25.648990 kubelet[1414]: I1213 01:57:25.648953 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07-cni-path\") pod \"cilium-qxvj7\" (UID: \"5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07\") " pod="kube-system/cilium-qxvj7" Dec 13 01:57:25.649160 kubelet[1414]: I1213 01:57:25.649005 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07-hubble-tls\") pod \"cilium-qxvj7\" (UID: \"5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07\") " pod="kube-system/cilium-qxvj7" Dec 13 01:57:25.649160 kubelet[1414]: I1213 01:57:25.649025 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmhl2\" (UniqueName: \"kubernetes.io/projected/5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07-kube-api-access-zmhl2\") pod \"cilium-qxvj7\" (UID: \"5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07\") " pod="kube-system/cilium-qxvj7" Dec 13 01:57:25.649160 kubelet[1414]: I1213 01:57:25.649036 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2f4f4474-6843-47ca-af01-fa920dd4f00d-kube-proxy\") pod \"kube-proxy-wzljh\" (UID: \"2f4f4474-6843-47ca-af01-fa920dd4f00d\") " pod="kube-system/kube-proxy-wzljh" Dec 13 01:57:25.649160 kubelet[1414]: I1213 01:57:25.649049 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07-clustermesh-secrets\") pod \"cilium-qxvj7\" (UID: \"5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07\") " pod="kube-system/cilium-qxvj7" Dec 13 01:57:25.649160 kubelet[1414]: I1213 01:57:25.649061 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07-host-proc-sys-net\") pod \"cilium-qxvj7\" (UID: \"5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07\") " pod="kube-system/cilium-qxvj7" Dec 13 01:57:25.649274 kubelet[1414]: I1213 01:57:25.649074 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07-host-proc-sys-kernel\") pod \"cilium-qxvj7\" (UID: \"5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07\") " pod="kube-system/cilium-qxvj7" Dec 13 01:57:25.649274 kubelet[1414]: I1213 01:57:25.649090 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2f4f4474-6843-47ca-af01-fa920dd4f00d-xtables-lock\") pod \"kube-proxy-wzljh\" (UID: \"2f4f4474-6843-47ca-af01-fa920dd4f00d\") " pod="kube-system/kube-proxy-wzljh" Dec 13 01:57:25.649274 kubelet[1414]: I1213 01:57:25.649102 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2f4f4474-6843-47ca-af01-fa920dd4f00d-lib-modules\") pod \"kube-proxy-wzljh\" (UID: \"2f4f4474-6843-47ca-af01-fa920dd4f00d\") " pod="kube-system/kube-proxy-wzljh" Dec 13 01:57:25.649274 kubelet[1414]: I1213 01:57:25.649120 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07-cilium-run\") pod \"cilium-qxvj7\" 
(UID: \"5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07\") " pod="kube-system/cilium-qxvj7" Dec 13 01:57:25.649274 kubelet[1414]: I1213 01:57:25.649137 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07-hostproc\") pod \"cilium-qxvj7\" (UID: \"5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07\") " pod="kube-system/cilium-qxvj7" Dec 13 01:57:25.649274 kubelet[1414]: I1213 01:57:25.649164 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07-cilium-cgroup\") pod \"cilium-qxvj7\" (UID: \"5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07\") " pod="kube-system/cilium-qxvj7" Dec 13 01:57:25.751110 kubelet[1414]: I1213 01:57:25.750987 1414 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 13 01:57:25.946469 kubelet[1414]: E1213 01:57:25.946420 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:57:25.947269 env[1215]: time="2024-12-13T01:57:25.947207537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qxvj7,Uid:5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07,Namespace:kube-system,Attempt:0,}" Dec 13 01:57:25.957412 kubelet[1414]: E1213 01:57:25.957387 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:57:25.957938 env[1215]: time="2024-12-13T01:57:25.957889373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wzljh,Uid:2f4f4474-6843-47ca-af01-fa920dd4f00d,Namespace:kube-system,Attempt:0,}" 
Dec 13 01:57:26.627506 kubelet[1414]: E1213 01:57:26.627435 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:57:26.776559 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3105337467.mount: Deactivated successfully. Dec 13 01:57:26.783592 env[1215]: time="2024-12-13T01:57:26.783538875Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:57:26.785278 env[1215]: time="2024-12-13T01:57:26.785236850Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:57:26.788396 env[1215]: time="2024-12-13T01:57:26.788341454Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:57:26.790551 env[1215]: time="2024-12-13T01:57:26.790515361Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:57:26.791891 env[1215]: time="2024-12-13T01:57:26.791851988Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:57:26.793368 env[1215]: time="2024-12-13T01:57:26.793332075Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:57:26.794834 env[1215]: time="2024-12-13T01:57:26.794800509Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:57:26.796116 env[1215]: time="2024-12-13T01:57:26.796094857Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:57:26.820400 env[1215]: time="2024-12-13T01:57:26.820319525Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:57:26.820400 env[1215]: time="2024-12-13T01:57:26.820366503Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:57:26.820400 env[1215]: time="2024-12-13T01:57:26.820381001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:57:26.820687 env[1215]: time="2024-12-13T01:57:26.820644785Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/befa124d49b789f65c267beeb15a9fd4084c1ad0a1e177101d46716cfb60f0fc pid=1470 runtime=io.containerd.runc.v2 Dec 13 01:57:26.824041 env[1215]: time="2024-12-13T01:57:26.823964222Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:57:26.824041 env[1215]: time="2024-12-13T01:57:26.824019165Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:57:26.824041 env[1215]: time="2024-12-13T01:57:26.824033281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:57:26.824495 env[1215]: time="2024-12-13T01:57:26.824300432Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f16cf1bfe1ecd2f6ce9fa63018632a7b28436537ee132234afdbcd7bfeb48852 pid=1483 runtime=io.containerd.runc.v2 Dec 13 01:57:26.839578 systemd[1]: Started cri-containerd-befa124d49b789f65c267beeb15a9fd4084c1ad0a1e177101d46716cfb60f0fc.scope. Dec 13 01:57:26.854050 systemd[1]: Started cri-containerd-f16cf1bfe1ecd2f6ce9fa63018632a7b28436537ee132234afdbcd7bfeb48852.scope. Dec 13 01:57:26.890320 env[1215]: time="2024-12-13T01:57:26.890209068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wzljh,Uid:2f4f4474-6843-47ca-af01-fa920dd4f00d,Namespace:kube-system,Attempt:0,} returns sandbox id \"befa124d49b789f65c267beeb15a9fd4084c1ad0a1e177101d46716cfb60f0fc\"" Dec 13 01:57:26.891571 kubelet[1414]: E1213 01:57:26.891160 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:57:26.892508 env[1215]: time="2024-12-13T01:57:26.892482012Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\"" Dec 13 01:57:26.894331 env[1215]: time="2024-12-13T01:57:26.894307906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qxvj7,Uid:5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07,Namespace:kube-system,Attempt:0,} returns sandbox id \"f16cf1bfe1ecd2f6ce9fa63018632a7b28436537ee132234afdbcd7bfeb48852\"" Dec 13 01:57:26.894856 kubelet[1414]: E1213 01:57:26.894722 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:57:27.628555 kubelet[1414]: E1213 01:57:27.628481 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Dec 13 01:57:28.234382 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3621564509.mount: Deactivated successfully. Dec 13 01:57:28.628989 kubelet[1414]: E1213 01:57:28.628848 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:57:29.247106 env[1215]: time="2024-12-13T01:57:29.247058374Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:57:29.303966 env[1215]: time="2024-12-13T01:57:29.303896436Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:57:29.556621 env[1215]: time="2024-12-13T01:57:29.556502963Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:57:29.560084 env[1215]: time="2024-12-13T01:57:29.560055647Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:57:29.560468 env[1215]: time="2024-12-13T01:57:29.560444577Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\"" Dec 13 01:57:29.561672 env[1215]: time="2024-12-13T01:57:29.561338283Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 01:57:29.562665 env[1215]: time="2024-12-13T01:57:29.562626600Z" level=info msg="CreateContainer within sandbox 
\"befa124d49b789f65c267beeb15a9fd4084c1ad0a1e177101d46716cfb60f0fc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 01:57:29.579039 env[1215]: time="2024-12-13T01:57:29.579011533Z" level=info msg="CreateContainer within sandbox \"befa124d49b789f65c267beeb15a9fd4084c1ad0a1e177101d46716cfb60f0fc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9832176925cebafe805a49f8835796008304a9f0a0b31a15412ffab17be8a7d9\"" Dec 13 01:57:29.579584 env[1215]: time="2024-12-13T01:57:29.579549151Z" level=info msg="StartContainer for \"9832176925cebafe805a49f8835796008304a9f0a0b31a15412ffab17be8a7d9\"" Dec 13 01:57:29.597542 systemd[1]: Started cri-containerd-9832176925cebafe805a49f8835796008304a9f0a0b31a15412ffab17be8a7d9.scope. Dec 13 01:57:29.621434 env[1215]: time="2024-12-13T01:57:29.621395734Z" level=info msg="StartContainer for \"9832176925cebafe805a49f8835796008304a9f0a0b31a15412ffab17be8a7d9\" returns successfully" Dec 13 01:57:29.629082 kubelet[1414]: E1213 01:57:29.629039 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:57:30.088923 kubelet[1414]: E1213 01:57:30.088890 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:57:30.097512 kubelet[1414]: I1213 01:57:30.097421 1414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wzljh" podStartSLOduration=2.428302958 podStartE2EDuration="5.097398839s" podCreationTimestamp="2024-12-13 01:57:25 +0000 UTC" firstStartedPulling="2024-12-13 01:57:26.892115835 +0000 UTC m=+2.498313363" lastFinishedPulling="2024-12-13 01:57:29.561211726 +0000 UTC m=+5.167409244" observedRunningTime="2024-12-13 01:57:30.096942804 +0000 UTC m=+5.703140342" watchObservedRunningTime="2024-12-13 01:57:30.097398839 +0000 UTC m=+5.703596398" Dec 13 
01:57:30.572545 systemd[1]: run-containerd-runc-k8s.io-9832176925cebafe805a49f8835796008304a9f0a0b31a15412ffab17be8a7d9-runc.g2icto.mount: Deactivated successfully. Dec 13 01:57:30.629745 kubelet[1414]: E1213 01:57:30.629724 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:57:31.090002 kubelet[1414]: E1213 01:57:31.089939 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:57:31.630768 kubelet[1414]: E1213 01:57:31.630741 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:57:32.631266 kubelet[1414]: E1213 01:57:32.631212 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:57:33.631776 kubelet[1414]: E1213 01:57:33.631738 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:57:34.632242 kubelet[1414]: E1213 01:57:34.632203 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:57:35.632738 kubelet[1414]: E1213 01:57:35.632702 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:57:36.461934 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4127542355.mount: Deactivated successfully. 
Dec 13 01:57:36.633044 kubelet[1414]: E1213 01:57:36.632989 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:57:37.633769 kubelet[1414]: E1213 01:57:37.633720 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:57:38.634464 kubelet[1414]: E1213 01:57:38.634413 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:57:39.634774 kubelet[1414]: E1213 01:57:39.634712 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:57:40.635581 kubelet[1414]: E1213 01:57:40.635534 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:57:41.072162 env[1215]: time="2024-12-13T01:57:41.072096275Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:57:41.073814 env[1215]: time="2024-12-13T01:57:41.073750848Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:57:41.075536 env[1215]: time="2024-12-13T01:57:41.075502825Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:57:41.076148 env[1215]: time="2024-12-13T01:57:41.076107769Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns 
image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 01:57:41.078072 env[1215]: time="2024-12-13T01:57:41.078038100Z" level=info msg="CreateContainer within sandbox \"f16cf1bfe1ecd2f6ce9fa63018632a7b28436537ee132234afdbcd7bfeb48852\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 01:57:41.090018 env[1215]: time="2024-12-13T01:57:41.089965002Z" level=info msg="CreateContainer within sandbox \"f16cf1bfe1ecd2f6ce9fa63018632a7b28436537ee132234afdbcd7bfeb48852\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8f7f9c4f99345d19fbe0afc166eefb03273d5834f1f31048d7d92dbfd8610c09\"" Dec 13 01:57:41.090561 env[1215]: time="2024-12-13T01:57:41.090520463Z" level=info msg="StartContainer for \"8f7f9c4f99345d19fbe0afc166eefb03273d5834f1f31048d7d92dbfd8610c09\"" Dec 13 01:57:41.122159 systemd[1]: Started cri-containerd-8f7f9c4f99345d19fbe0afc166eefb03273d5834f1f31048d7d92dbfd8610c09.scope. Dec 13 01:57:41.215618 env[1215]: time="2024-12-13T01:57:41.215571604Z" level=info msg="StartContainer for \"8f7f9c4f99345d19fbe0afc166eefb03273d5834f1f31048d7d92dbfd8610c09\" returns successfully" Dec 13 01:57:41.218047 systemd[1]: cri-containerd-8f7f9c4f99345d19fbe0afc166eefb03273d5834f1f31048d7d92dbfd8610c09.scope: Deactivated successfully. Dec 13 01:57:41.635996 kubelet[1414]: E1213 01:57:41.635922 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:57:42.085932 systemd[1]: run-containerd-runc-k8s.io-8f7f9c4f99345d19fbe0afc166eefb03273d5834f1f31048d7d92dbfd8610c09-runc.SGDG5U.mount: Deactivated successfully. Dec 13 01:57:42.086028 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f7f9c4f99345d19fbe0afc166eefb03273d5834f1f31048d7d92dbfd8610c09-rootfs.mount: Deactivated successfully. 
Dec 13 01:57:42.111713 kubelet[1414]: E1213 01:57:42.111688 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:57:42.133054 env[1215]: time="2024-12-13T01:57:42.132997953Z" level=info msg="shim disconnected" id=8f7f9c4f99345d19fbe0afc166eefb03273d5834f1f31048d7d92dbfd8610c09 Dec 13 01:57:42.133054 env[1215]: time="2024-12-13T01:57:42.133042667Z" level=warning msg="cleaning up after shim disconnected" id=8f7f9c4f99345d19fbe0afc166eefb03273d5834f1f31048d7d92dbfd8610c09 namespace=k8s.io Dec 13 01:57:42.133054 env[1215]: time="2024-12-13T01:57:42.133051734Z" level=info msg="cleaning up dead shim" Dec 13 01:57:42.141128 env[1215]: time="2024-12-13T01:57:42.141078730Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:57:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1767 runtime=io.containerd.runc.v2\n" Dec 13 01:57:42.636786 kubelet[1414]: E1213 01:57:42.636741 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:57:43.114462 kubelet[1414]: E1213 01:57:43.114432 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:57:43.116149 env[1215]: time="2024-12-13T01:57:43.116104149Z" level=info msg="CreateContainer within sandbox \"f16cf1bfe1ecd2f6ce9fa63018632a7b28436537ee132234afdbcd7bfeb48852\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 01:57:43.132313 env[1215]: time="2024-12-13T01:57:43.132260915Z" level=info msg="CreateContainer within sandbox \"f16cf1bfe1ecd2f6ce9fa63018632a7b28436537ee132234afdbcd7bfeb48852\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"714567f9d6627484b725f72f11cfa24a765c629a1df59541d2d26259d08504d8\"" 
Dec 13 01:57:43.132846 env[1215]: time="2024-12-13T01:57:43.132788815Z" level=info msg="StartContainer for \"714567f9d6627484b725f72f11cfa24a765c629a1df59541d2d26259d08504d8\"" Dec 13 01:57:43.150705 systemd[1]: Started cri-containerd-714567f9d6627484b725f72f11cfa24a765c629a1df59541d2d26259d08504d8.scope. Dec 13 01:57:43.181068 env[1215]: time="2024-12-13T01:57:43.181014733Z" level=info msg="StartContainer for \"714567f9d6627484b725f72f11cfa24a765c629a1df59541d2d26259d08504d8\" returns successfully" Dec 13 01:57:43.191234 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:57:43.191437 systemd[1]: Stopped systemd-sysctl.service. Dec 13 01:57:43.191620 systemd[1]: Stopping systemd-sysctl.service... Dec 13 01:57:43.192943 systemd[1]: Starting systemd-sysctl.service... Dec 13 01:57:43.195550 systemd[1]: cri-containerd-714567f9d6627484b725f72f11cfa24a765c629a1df59541d2d26259d08504d8.scope: Deactivated successfully. Dec 13 01:57:43.200039 systemd[1]: Finished systemd-sysctl.service. 
Dec 13 01:57:43.216933 env[1215]: time="2024-12-13T01:57:43.216874997Z" level=info msg="shim disconnected" id=714567f9d6627484b725f72f11cfa24a765c629a1df59541d2d26259d08504d8 Dec 13 01:57:43.216933 env[1215]: time="2024-12-13T01:57:43.216930651Z" level=warning msg="cleaning up after shim disconnected" id=714567f9d6627484b725f72f11cfa24a765c629a1df59541d2d26259d08504d8 namespace=k8s.io Dec 13 01:57:43.217156 env[1215]: time="2024-12-13T01:57:43.216946260Z" level=info msg="cleaning up dead shim" Dec 13 01:57:43.222805 env[1215]: time="2024-12-13T01:57:43.222771848Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:57:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1831 runtime=io.containerd.runc.v2\n" Dec 13 01:57:43.637270 kubelet[1414]: E1213 01:57:43.637221 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:57:44.117299 kubelet[1414]: E1213 01:57:44.117258 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:57:44.118597 env[1215]: time="2024-12-13T01:57:44.118565701Z" level=info msg="CreateContainer within sandbox \"f16cf1bfe1ecd2f6ce9fa63018632a7b28436537ee132234afdbcd7bfeb48852\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 01:57:44.127734 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-714567f9d6627484b725f72f11cfa24a765c629a1df59541d2d26259d08504d8-rootfs.mount: Deactivated successfully. 
Dec 13 01:57:44.134444 env[1215]: time="2024-12-13T01:57:44.134381958Z" level=info msg="CreateContainer within sandbox \"f16cf1bfe1ecd2f6ce9fa63018632a7b28436537ee132234afdbcd7bfeb48852\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"db9e7b5ced64919f4e0e3de19665b4716c44c66798e3e291b9725eae7cd24419\"" Dec 13 01:57:44.134789 env[1215]: time="2024-12-13T01:57:44.134767090Z" level=info msg="StartContainer for \"db9e7b5ced64919f4e0e3de19665b4716c44c66798e3e291b9725eae7cd24419\"" Dec 13 01:57:44.152980 systemd[1]: Started cri-containerd-db9e7b5ced64919f4e0e3de19665b4716c44c66798e3e291b9725eae7cd24419.scope. Dec 13 01:57:44.196309 env[1215]: time="2024-12-13T01:57:44.196245306Z" level=info msg="StartContainer for \"db9e7b5ced64919f4e0e3de19665b4716c44c66798e3e291b9725eae7cd24419\" returns successfully" Dec 13 01:57:44.197261 systemd[1]: cri-containerd-db9e7b5ced64919f4e0e3de19665b4716c44c66798e3e291b9725eae7cd24419.scope: Deactivated successfully. Dec 13 01:57:44.212960 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db9e7b5ced64919f4e0e3de19665b4716c44c66798e3e291b9725eae7cd24419-rootfs.mount: Deactivated successfully. 
Dec 13 01:57:44.218721 env[1215]: time="2024-12-13T01:57:44.218673735Z" level=info msg="shim disconnected" id=db9e7b5ced64919f4e0e3de19665b4716c44c66798e3e291b9725eae7cd24419 Dec 13 01:57:44.218899 env[1215]: time="2024-12-13T01:57:44.218724590Z" level=warning msg="cleaning up after shim disconnected" id=db9e7b5ced64919f4e0e3de19665b4716c44c66798e3e291b9725eae7cd24419 namespace=k8s.io Dec 13 01:57:44.218899 env[1215]: time="2024-12-13T01:57:44.218733697Z" level=info msg="cleaning up dead shim" Dec 13 01:57:44.224902 env[1215]: time="2024-12-13T01:57:44.224852645Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:57:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1887 runtime=io.containerd.runc.v2\n" Dec 13 01:57:44.627093 kubelet[1414]: E1213 01:57:44.627053 1414 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:57:44.637355 kubelet[1414]: E1213 01:57:44.637337 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:57:45.121180 kubelet[1414]: E1213 01:57:45.121148 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:57:45.122592 env[1215]: time="2024-12-13T01:57:45.122555018Z" level=info msg="CreateContainer within sandbox \"f16cf1bfe1ecd2f6ce9fa63018632a7b28436537ee132234afdbcd7bfeb48852\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 01:57:45.215227 env[1215]: time="2024-12-13T01:57:45.215175798Z" level=info msg="CreateContainer within sandbox \"f16cf1bfe1ecd2f6ce9fa63018632a7b28436537ee132234afdbcd7bfeb48852\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7f529dfa0c8896851e273300fb565bb6113c0f8d11c52016aff2b42d8383b7cd\"" Dec 13 01:57:45.215661 env[1215]: time="2024-12-13T01:57:45.215637093Z" 
level=info msg="StartContainer for \"7f529dfa0c8896851e273300fb565bb6113c0f8d11c52016aff2b42d8383b7cd\"" Dec 13 01:57:45.232856 systemd[1]: Started cri-containerd-7f529dfa0c8896851e273300fb565bb6113c0f8d11c52016aff2b42d8383b7cd.scope. Dec 13 01:57:45.259998 systemd[1]: cri-containerd-7f529dfa0c8896851e273300fb565bb6113c0f8d11c52016aff2b42d8383b7cd.scope: Deactivated successfully. Dec 13 01:57:45.262716 env[1215]: time="2024-12-13T01:57:45.262659784Z" level=info msg="StartContainer for \"7f529dfa0c8896851e273300fb565bb6113c0f8d11c52016aff2b42d8383b7cd\" returns successfully" Dec 13 01:57:45.283582 env[1215]: time="2024-12-13T01:57:45.283521375Z" level=info msg="shim disconnected" id=7f529dfa0c8896851e273300fb565bb6113c0f8d11c52016aff2b42d8383b7cd Dec 13 01:57:45.283582 env[1215]: time="2024-12-13T01:57:45.283568904Z" level=warning msg="cleaning up after shim disconnected" id=7f529dfa0c8896851e273300fb565bb6113c0f8d11c52016aff2b42d8383b7cd namespace=k8s.io Dec 13 01:57:45.283582 env[1215]: time="2024-12-13T01:57:45.283578071Z" level=info msg="cleaning up dead shim" Dec 13 01:57:45.290659 env[1215]: time="2024-12-13T01:57:45.290621392Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:57:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1942 runtime=io.containerd.runc.v2\n" Dec 13 01:57:45.637842 kubelet[1414]: E1213 01:57:45.637805 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:57:46.124201 kubelet[1414]: E1213 01:57:46.124168 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:57:46.125572 env[1215]: time="2024-12-13T01:57:46.125530836Z" level=info msg="CreateContainer within sandbox \"f16cf1bfe1ecd2f6ce9fa63018632a7b28436537ee132234afdbcd7bfeb48852\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 
01:57:46.209338 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7f529dfa0c8896851e273300fb565bb6113c0f8d11c52016aff2b42d8383b7cd-rootfs.mount: Deactivated successfully. Dec 13 01:57:46.492450 env[1215]: time="2024-12-13T01:57:46.492408571Z" level=info msg="CreateContainer within sandbox \"f16cf1bfe1ecd2f6ce9fa63018632a7b28436537ee132234afdbcd7bfeb48852\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0b5bd16f9dc8bec177b89adcf25e1dc3e143de719c55371e412cdb488af6666a\"" Dec 13 01:57:46.492859 env[1215]: time="2024-12-13T01:57:46.492830278Z" level=info msg="StartContainer for \"0b5bd16f9dc8bec177b89adcf25e1dc3e143de719c55371e412cdb488af6666a\"" Dec 13 01:57:46.508004 systemd[1]: Started cri-containerd-0b5bd16f9dc8bec177b89adcf25e1dc3e143de719c55371e412cdb488af6666a.scope. Dec 13 01:57:46.530638 env[1215]: time="2024-12-13T01:57:46.530583645Z" level=info msg="StartContainer for \"0b5bd16f9dc8bec177b89adcf25e1dc3e143de719c55371e412cdb488af6666a\" returns successfully" Dec 13 01:57:46.589966 kubelet[1414]: I1213 01:57:46.589295 1414 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Dec 13 01:57:46.639015 kubelet[1414]: E1213 01:57:46.638963 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:57:46.839098 kernel: Initializing XFRM netlink socket Dec 13 01:57:47.127762 kubelet[1414]: E1213 01:57:47.127720 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:57:47.209528 systemd[1]: run-containerd-runc-k8s.io-0b5bd16f9dc8bec177b89adcf25e1dc3e143de719c55371e412cdb488af6666a-runc.nWORDE.mount: Deactivated successfully. 
Dec 13 01:57:47.210449 kubelet[1414]: I1213 01:57:47.210400 1414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qxvj7" podStartSLOduration=8.028952087 podStartE2EDuration="22.210384401s" podCreationTimestamp="2024-12-13 01:57:25 +0000 UTC" firstStartedPulling="2024-12-13 01:57:26.895433828 +0000 UTC m=+2.501631356" lastFinishedPulling="2024-12-13 01:57:41.076866132 +0000 UTC m=+16.683063670" observedRunningTime="2024-12-13 01:57:47.210218391 +0000 UTC m=+22.816415939" watchObservedRunningTime="2024-12-13 01:57:47.210384401 +0000 UTC m=+22.816581929" Dec 13 01:57:47.639747 kubelet[1414]: E1213 01:57:47.639691 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:57:48.129520 kubelet[1414]: E1213 01:57:48.129479 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:57:48.255003 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Dec 13 01:57:48.251281 systemd-networkd[1035]: cilium_host: Link UP Dec 13 01:57:48.251386 systemd-networkd[1035]: cilium_net: Link UP Dec 13 01:57:48.251389 systemd-networkd[1035]: cilium_net: Gained carrier Dec 13 01:57:48.251504 systemd-networkd[1035]: cilium_host: Gained carrier Dec 13 01:57:48.256457 systemd-networkd[1035]: cilium_host: Gained IPv6LL Dec 13 01:57:48.323387 systemd-networkd[1035]: cilium_vxlan: Link UP Dec 13 01:57:48.323393 systemd-networkd[1035]: cilium_vxlan: Gained carrier Dec 13 01:57:48.576013 kernel: NET: Registered PF_ALG protocol family Dec 13 01:57:48.640270 kubelet[1414]: E1213 01:57:48.640225 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:57:48.818108 systemd-networkd[1035]: cilium_net: Gained IPv6LL Dec 13 01:57:49.131333 kubelet[1414]: E1213 01:57:49.131305 
1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:57:49.150737 systemd-networkd[1035]: lxc_health: Link UP Dec 13 01:57:49.160668 systemd-networkd[1035]: lxc_health: Gained carrier Dec 13 01:57:49.161067 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 01:57:49.641223 kubelet[1414]: E1213 01:57:49.641145 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:57:50.132445 kubelet[1414]: E1213 01:57:50.132413 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:57:50.226125 systemd-networkd[1035]: cilium_vxlan: Gained IPv6LL Dec 13 01:57:50.642042 kubelet[1414]: E1213 01:57:50.641991 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:57:50.802228 systemd-networkd[1035]: lxc_health: Gained IPv6LL Dec 13 01:57:51.134774 kubelet[1414]: E1213 01:57:51.134730 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:57:51.642694 kubelet[1414]: E1213 01:57:51.642655 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:57:51.918820 systemd[1]: Created slice kubepods-besteffort-pod7aeb30b9_a0c7_4e61_b633_52a502bb2d87.slice. 
Dec 13 01:57:52.010041 kubelet[1414]: I1213 01:57:52.009954 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6748b\" (UniqueName: \"kubernetes.io/projected/7aeb30b9-a0c7-4e61-b633-52a502bb2d87-kube-api-access-6748b\") pod \"nginx-deployment-8587fbcb89-w6pnr\" (UID: \"7aeb30b9-a0c7-4e61-b633-52a502bb2d87\") " pod="default/nginx-deployment-8587fbcb89-w6pnr" Dec 13 01:57:52.135965 kubelet[1414]: E1213 01:57:52.135933 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:57:52.222854 env[1215]: time="2024-12-13T01:57:52.222743938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-w6pnr,Uid:7aeb30b9-a0c7-4e61-b633-52a502bb2d87,Namespace:default,Attempt:0,}" Dec 13 01:57:52.643058 kubelet[1414]: E1213 01:57:52.642928 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:57:52.995560 systemd-networkd[1035]: lxc9b8505b4847e: Link UP Dec 13 01:57:53.003799 kernel: eth0: renamed from tmp7932a Dec 13 01:57:53.009557 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 01:57:53.009608 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc9b8505b4847e: link becomes ready Dec 13 01:57:53.009602 systemd-networkd[1035]: lxc9b8505b4847e: Gained carrier Dec 13 01:57:53.643238 env[1215]: time="2024-12-13T01:57:53.643152918Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:57:53.643238 env[1215]: time="2024-12-13T01:57:53.643202503Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:57:53.643720 kubelet[1414]: E1213 01:57:53.643679 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:57:53.643954 env[1215]: time="2024-12-13T01:57:53.643214997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:57:53.643954 env[1215]: time="2024-12-13T01:57:53.643355847Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7932a1fa2c0b096634e52dbb155f98144dcc6bc9162c0e47406b4f8081ef1736 pid=2506 runtime=io.containerd.runc.v2 Dec 13 01:57:53.657588 systemd[1]: Started cri-containerd-7932a1fa2c0b096634e52dbb155f98144dcc6bc9162c0e47406b4f8081ef1736.scope. Dec 13 01:57:53.670781 systemd-resolved[1149]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:57:53.691387 env[1215]: time="2024-12-13T01:57:53.691348256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-w6pnr,Uid:7aeb30b9-a0c7-4e61-b633-52a502bb2d87,Namespace:default,Attempt:0,} returns sandbox id \"7932a1fa2c0b096634e52dbb155f98144dcc6bc9162c0e47406b4f8081ef1736\"" Dec 13 01:57:53.692761 env[1215]: time="2024-12-13T01:57:53.692738076Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 01:57:54.258199 systemd-networkd[1035]: lxc9b8505b4847e: Gained IPv6LL Dec 13 01:57:54.644290 kubelet[1414]: E1213 01:57:54.644151 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:57:55.645012 kubelet[1414]: E1213 01:57:55.644950 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:57:56.646059 kubelet[1414]: E1213 01:57:56.645994 1414 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:57:57.332507 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1513625785.mount: Deactivated successfully. Dec 13 01:57:57.646991 kubelet[1414]: E1213 01:57:57.646837 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:57:58.647556 kubelet[1414]: E1213 01:57:58.647513 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:57:58.677090 update_engine[1209]: I1213 01:57:58.677047 1209 update_attempter.cc:509] Updating boot flags... Dec 13 01:57:59.531762 env[1215]: time="2024-12-13T01:57:59.531686372Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:57:59.533644 env[1215]: time="2024-12-13T01:57:59.533601404Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:57:59.535307 env[1215]: time="2024-12-13T01:57:59.535267734Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:57:59.537161 env[1215]: time="2024-12-13T01:57:59.537127460Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:57:59.538015 env[1215]: time="2024-12-13T01:57:59.537959432Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 01:57:59.541108 
env[1215]: time="2024-12-13T01:57:59.541056072Z" level=info msg="CreateContainer within sandbox \"7932a1fa2c0b096634e52dbb155f98144dcc6bc9162c0e47406b4f8081ef1736\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Dec 13 01:57:59.553062 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3452675170.mount: Deactivated successfully. Dec 13 01:57:59.554187 env[1215]: time="2024-12-13T01:57:59.554140755Z" level=info msg="CreateContainer within sandbox \"7932a1fa2c0b096634e52dbb155f98144dcc6bc9162c0e47406b4f8081ef1736\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"570f86b22967fed61730ae0db0c81976bd7761e4f847c37b045f9841b9466c1e\"" Dec 13 01:57:59.554805 env[1215]: time="2024-12-13T01:57:59.554776724Z" level=info msg="StartContainer for \"570f86b22967fed61730ae0db0c81976bd7761e4f847c37b045f9841b9466c1e\"" Dec 13 01:57:59.583050 systemd[1]: run-containerd-runc-k8s.io-570f86b22967fed61730ae0db0c81976bd7761e4f847c37b045f9841b9466c1e-runc.yHSfTa.mount: Deactivated successfully. Dec 13 01:57:59.584279 systemd[1]: Started cri-containerd-570f86b22967fed61730ae0db0c81976bd7761e4f847c37b045f9841b9466c1e.scope. 
Dec 13 01:57:59.608748 env[1215]: time="2024-12-13T01:57:59.608687659Z" level=info msg="StartContainer for \"570f86b22967fed61730ae0db0c81976bd7761e4f847c37b045f9841b9466c1e\" returns successfully" Dec 13 01:57:59.648697 kubelet[1414]: E1213 01:57:59.648633 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:58:00.649828 kubelet[1414]: E1213 01:58:00.649776 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:58:01.650425 kubelet[1414]: E1213 01:58:01.650366 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:58:02.651550 kubelet[1414]: E1213 01:58:02.651508 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:58:03.652241 kubelet[1414]: E1213 01:58:03.652196 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:58:04.324319 kubelet[1414]: I1213 01:58:04.324228 1414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-w6pnr" podStartSLOduration=7.47742231 podStartE2EDuration="13.324209048s" podCreationTimestamp="2024-12-13 01:57:51 +0000 UTC" firstStartedPulling="2024-12-13 01:57:53.692322591 +0000 UTC m=+29.298520119" lastFinishedPulling="2024-12-13 01:57:59.539109339 +0000 UTC m=+35.145306857" observedRunningTime="2024-12-13 01:58:00.160224059 +0000 UTC m=+35.766421607" watchObservedRunningTime="2024-12-13 01:58:04.324209048 +0000 UTC m=+39.930406576" Dec 13 01:58:04.329442 systemd[1]: Created slice kubepods-besteffort-pod0576cf29_4978_40df_b570_16f75606d30f.slice. 
Dec 13 01:58:04.379389 kubelet[1414]: I1213 01:58:04.379315 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/0576cf29-4978-40df-b570-16f75606d30f-data\") pod \"nfs-server-provisioner-0\" (UID: \"0576cf29-4978-40df-b570-16f75606d30f\") " pod="default/nfs-server-provisioner-0" Dec 13 01:58:04.379389 kubelet[1414]: I1213 01:58:04.379375 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r69dx\" (UniqueName: \"kubernetes.io/projected/0576cf29-4978-40df-b570-16f75606d30f-kube-api-access-r69dx\") pod \"nfs-server-provisioner-0\" (UID: \"0576cf29-4978-40df-b570-16f75606d30f\") " pod="default/nfs-server-provisioner-0" Dec 13 01:58:04.627948 kubelet[1414]: E1213 01:58:04.626936 1414 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:58:04.632810 env[1215]: time="2024-12-13T01:58:04.632755826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:0576cf29-4978-40df-b570-16f75606d30f,Namespace:default,Attempt:0,}" Dec 13 01:58:04.653271 kubelet[1414]: E1213 01:58:04.653217 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:58:05.512343 systemd-networkd[1035]: lxc51bfee50bf75: Link UP Dec 13 01:58:05.519998 kernel: eth0: renamed from tmp9df0a Dec 13 01:58:05.528680 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 01:58:05.528775 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc51bfee50bf75: link becomes ready Dec 13 01:58:05.528541 systemd-networkd[1035]: lxc51bfee50bf75: Gained carrier Dec 13 01:58:05.653518 kubelet[1414]: E1213 01:58:05.653475 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:58:05.653819 env[1215]: 
time="2024-12-13T01:58:05.653733917Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:58:05.653819 env[1215]: time="2024-12-13T01:58:05.653770216Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:58:05.653819 env[1215]: time="2024-12-13T01:58:05.653780475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:58:05.654057 env[1215]: time="2024-12-13T01:58:05.653880514Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9df0a95c6daa53f39b502db41ba666a67f69cb2431ebcf8f706860994d40d8a6 pid=2651 runtime=io.containerd.runc.v2 Dec 13 01:58:05.667634 systemd[1]: Started cri-containerd-9df0a95c6daa53f39b502db41ba666a67f69cb2431ebcf8f706860994d40d8a6.scope. Dec 13 01:58:05.676276 systemd-resolved[1149]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:58:05.694918 env[1215]: time="2024-12-13T01:58:05.694864362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:0576cf29-4978-40df-b570-16f75606d30f,Namespace:default,Attempt:0,} returns sandbox id \"9df0a95c6daa53f39b502db41ba666a67f69cb2431ebcf8f706860994d40d8a6\"" Dec 13 01:58:05.696480 env[1215]: time="2024-12-13T01:58:05.696450836Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Dec 13 01:58:06.546224 systemd-networkd[1035]: lxc51bfee50bf75: Gained IPv6LL Dec 13 01:58:06.654241 kubelet[1414]: E1213 01:58:06.654175 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:58:07.655311 kubelet[1414]: E1213 01:58:07.655255 1414 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:58:08.514895 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2638308354.mount: Deactivated successfully. Dec 13 01:58:08.656072 kubelet[1414]: E1213 01:58:08.656022 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:58:09.656656 kubelet[1414]: E1213 01:58:09.656599 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:58:10.657651 kubelet[1414]: E1213 01:58:10.657599 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:58:11.231752 env[1215]: time="2024-12-13T01:58:11.231696435Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:58:11.233543 env[1215]: time="2024-12-13T01:58:11.233516992Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:58:11.235234 env[1215]: time="2024-12-13T01:58:11.235210528Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:58:11.236752 env[1215]: time="2024-12-13T01:58:11.236728363Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:58:11.237410 env[1215]: time="2024-12-13T01:58:11.237386496Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns 
image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Dec 13 01:58:11.239227 env[1215]: time="2024-12-13T01:58:11.239197685Z" level=info msg="CreateContainer within sandbox \"9df0a95c6daa53f39b502db41ba666a67f69cb2431ebcf8f706860994d40d8a6\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Dec 13 01:58:11.294584 env[1215]: time="2024-12-13T01:58:11.294526399Z" level=info msg="CreateContainer within sandbox \"9df0a95c6daa53f39b502db41ba666a67f69cb2431ebcf8f706860994d40d8a6\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"701a705bff1676a81345b948c2fabf438f5f5ce4d496ce9c0c14cc223e1afef6\"" Dec 13 01:58:11.295046 env[1215]: time="2024-12-13T01:58:11.295012517Z" level=info msg="StartContainer for \"701a705bff1676a81345b948c2fabf438f5f5ce4d496ce9c0c14cc223e1afef6\"" Dec 13 01:58:11.309297 systemd[1]: Started cri-containerd-701a705bff1676a81345b948c2fabf438f5f5ce4d496ce9c0c14cc223e1afef6.scope. 
Dec 13 01:58:11.327488 env[1215]: time="2024-12-13T01:58:11.327442345Z" level=info msg="StartContainer for \"701a705bff1676a81345b948c2fabf438f5f5ce4d496ce9c0c14cc223e1afef6\" returns successfully" Dec 13 01:58:11.657998 kubelet[1414]: E1213 01:58:11.657953 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:58:12.182641 kubelet[1414]: I1213 01:58:12.182583 1414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.640239976 podStartE2EDuration="8.182567226s" podCreationTimestamp="2024-12-13 01:58:04 +0000 UTC" firstStartedPulling="2024-12-13 01:58:05.695917276 +0000 UTC m=+41.302114794" lastFinishedPulling="2024-12-13 01:58:11.238244516 +0000 UTC m=+46.844442044" observedRunningTime="2024-12-13 01:58:12.182128599 +0000 UTC m=+47.788326127" watchObservedRunningTime="2024-12-13 01:58:12.182567226 +0000 UTC m=+47.788764754" Dec 13 01:58:12.658176 kubelet[1414]: E1213 01:58:12.658099 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:58:13.658656 kubelet[1414]: E1213 01:58:13.658595 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:58:14.659527 kubelet[1414]: E1213 01:58:14.659476 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:58:15.659843 kubelet[1414]: E1213 01:58:15.659802 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:58:16.659993 kubelet[1414]: E1213 01:58:16.659927 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:58:17.660945 kubelet[1414]: E1213 01:58:17.660870 1414 file_linux.go:61] "Unable to read config 
path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:58:18.661792 kubelet[1414]: E1213 01:58:18.661737 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:58:19.662374 kubelet[1414]: E1213 01:58:19.662277 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:58:20.663326 kubelet[1414]: E1213 01:58:20.663250 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:58:21.327582 systemd[1]: Created slice kubepods-besteffort-pod96d1eb22_7e65_4b1b_812a_2bdfefa0726d.slice. Dec 13 01:58:21.374358 kubelet[1414]: I1213 01:58:21.374326 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-bbde63df-305d-4da0-a60d-6ed358230883\" (UniqueName: \"kubernetes.io/nfs/96d1eb22-7e65-4b1b-812a-2bdfefa0726d-pvc-bbde63df-305d-4da0-a60d-6ed358230883\") pod \"test-pod-1\" (UID: \"96d1eb22-7e65-4b1b-812a-2bdfefa0726d\") " pod="default/test-pod-1" Dec 13 01:58:21.374513 kubelet[1414]: I1213 01:58:21.374364 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kfb7\" (UniqueName: \"kubernetes.io/projected/96d1eb22-7e65-4b1b-812a-2bdfefa0726d-kube-api-access-7kfb7\") pod \"test-pod-1\" (UID: \"96d1eb22-7e65-4b1b-812a-2bdfefa0726d\") " pod="default/test-pod-1" Dec 13 01:58:21.493995 kernel: FS-Cache: Loaded Dec 13 01:58:21.532438 kernel: RPC: Registered named UNIX socket transport module. Dec 13 01:58:21.532537 kernel: RPC: Registered udp transport module. Dec 13 01:58:21.532563 kernel: RPC: Registered tcp transport module. Dec 13 01:58:21.533246 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Dec 13 01:58:21.589005 kernel: FS-Cache: Netfs 'nfs' registered for caching Dec 13 01:58:21.663834 kubelet[1414]: E1213 01:58:21.663782 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:58:21.763735 kernel: NFS: Registering the id_resolver key type Dec 13 01:58:21.763817 kernel: Key type id_resolver registered Dec 13 01:58:21.763848 kernel: Key type id_legacy registered Dec 13 01:58:21.786122 nfsidmap[2770]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Dec 13 01:58:21.788623 nfsidmap[2773]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Dec 13 01:58:21.930411 env[1215]: time="2024-12-13T01:58:21.930297347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:96d1eb22-7e65-4b1b-812a-2bdfefa0726d,Namespace:default,Attempt:0,}" Dec 13 01:58:21.953152 systemd-networkd[1035]: lxcdd60546fd468: Link UP Dec 13 01:58:21.961000 kernel: eth0: renamed from tmp84ba9 Dec 13 01:58:21.968486 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 01:58:21.968561 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcdd60546fd468: link becomes ready Dec 13 01:58:21.968538 systemd-networkd[1035]: lxcdd60546fd468: Gained carrier Dec 13 01:58:22.137545 env[1215]: time="2024-12-13T01:58:22.137467961Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:58:22.137545 env[1215]: time="2024-12-13T01:58:22.137511894Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:58:22.137545 env[1215]: time="2024-12-13T01:58:22.137532853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:58:22.138031 env[1215]: time="2024-12-13T01:58:22.137924581Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/84ba955918f79af64ce188ac77e01a5353f92b772802920cade669fcc286ffb6 pid=2808 runtime=io.containerd.runc.v2 Dec 13 01:58:22.149851 systemd[1]: Started cri-containerd-84ba955918f79af64ce188ac77e01a5353f92b772802920cade669fcc286ffb6.scope. Dec 13 01:58:22.161572 systemd-resolved[1149]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:58:22.180559 env[1215]: time="2024-12-13T01:58:22.180435640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:96d1eb22-7e65-4b1b-812a-2bdfefa0726d,Namespace:default,Attempt:0,} returns sandbox id \"84ba955918f79af64ce188ac77e01a5353f92b772802920cade669fcc286ffb6\"" Dec 13 01:58:22.182169 env[1215]: time="2024-12-13T01:58:22.182147200Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 01:58:22.581344 env[1215]: time="2024-12-13T01:58:22.581208468Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:58:22.583595 env[1215]: time="2024-12-13T01:58:22.583550875Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:58:22.585321 env[1215]: time="2024-12-13T01:58:22.585255653Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:58:22.586984 env[1215]: time="2024-12-13T01:58:22.586941655Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:58:22.587561 env[1215]: time="2024-12-13T01:58:22.587522439Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 01:58:22.589692 env[1215]: time="2024-12-13T01:58:22.589663677Z" level=info msg="CreateContainer within sandbox \"84ba955918f79af64ce188ac77e01a5353f92b772802920cade669fcc286ffb6\" for container &ContainerMetadata{Name:test,Attempt:0,}" Dec 13 01:58:22.601555 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1125988337.mount: Deactivated successfully. Dec 13 01:58:22.603948 env[1215]: time="2024-12-13T01:58:22.603900525Z" level=info msg="CreateContainer within sandbox \"84ba955918f79af64ce188ac77e01a5353f92b772802920cade669fcc286ffb6\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"d64d99ee65289f3d503dc95450fadf772e4f2883f3e10c09c84df2bb735c2ee3\"" Dec 13 01:58:22.604398 env[1215]: time="2024-12-13T01:58:22.604371912Z" level=info msg="StartContainer for \"d64d99ee65289f3d503dc95450fadf772e4f2883f3e10c09c84df2bb735c2ee3\"" Dec 13 01:58:22.618035 systemd[1]: Started cri-containerd-d64d99ee65289f3d503dc95450fadf772e4f2883f3e10c09c84df2bb735c2ee3.scope. 
Dec 13 01:58:22.639914 env[1215]: time="2024-12-13T01:58:22.639861991Z" level=info msg="StartContainer for \"d64d99ee65289f3d503dc95450fadf772e4f2883f3e10c09c84df2bb735c2ee3\" returns successfully" Dec 13 01:58:22.664892 kubelet[1414]: E1213 01:58:22.664850 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:58:23.199142 kubelet[1414]: I1213 01:58:23.199089 1414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=18.792284961 podStartE2EDuration="19.199075052s" podCreationTimestamp="2024-12-13 01:58:04 +0000 UTC" firstStartedPulling="2024-12-13 01:58:22.18170099 +0000 UTC m=+57.787898518" lastFinishedPulling="2024-12-13 01:58:22.588491081 +0000 UTC m=+58.194688609" observedRunningTime="2024-12-13 01:58:23.19876594 +0000 UTC m=+58.804963468" watchObservedRunningTime="2024-12-13 01:58:23.199075052 +0000 UTC m=+58.805272580" Dec 13 01:58:23.665803 kubelet[1414]: E1213 01:58:23.665773 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:58:23.698261 systemd-networkd[1035]: lxcdd60546fd468: Gained IPv6LL Dec 13 01:58:24.626995 kubelet[1414]: E1213 01:58:24.626931 1414 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:58:24.666222 kubelet[1414]: E1213 01:58:24.666176 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:58:25.666347 kubelet[1414]: E1213 01:58:25.666291 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:58:26.666890 kubelet[1414]: E1213 01:58:26.666817 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:58:26.712493 env[1215]: 
time="2024-12-13T01:58:26.712419881Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:58:26.717643 env[1215]: time="2024-12-13T01:58:26.717608538Z" level=info msg="StopContainer for \"0b5bd16f9dc8bec177b89adcf25e1dc3e143de719c55371e412cdb488af6666a\" with timeout 2 (s)" Dec 13 01:58:26.717907 env[1215]: time="2024-12-13T01:58:26.717880589Z" level=info msg="Stop container \"0b5bd16f9dc8bec177b89adcf25e1dc3e143de719c55371e412cdb488af6666a\" with signal terminated" Dec 13 01:58:26.723263 systemd-networkd[1035]: lxc_health: Link DOWN Dec 13 01:58:26.723273 systemd-networkd[1035]: lxc_health: Lost carrier Dec 13 01:58:26.750352 systemd[1]: cri-containerd-0b5bd16f9dc8bec177b89adcf25e1dc3e143de719c55371e412cdb488af6666a.scope: Deactivated successfully. Dec 13 01:58:26.750677 systemd[1]: cri-containerd-0b5bd16f9dc8bec177b89adcf25e1dc3e143de719c55371e412cdb488af6666a.scope: Consumed 6.467s CPU time. Dec 13 01:58:26.765126 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0b5bd16f9dc8bec177b89adcf25e1dc3e143de719c55371e412cdb488af6666a-rootfs.mount: Deactivated successfully. 
Dec 13 01:58:27.667057 kubelet[1414]: E1213 01:58:27.666996 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:58:27.896348 env[1215]: time="2024-12-13T01:58:27.896293498Z" level=info msg="shim disconnected" id=0b5bd16f9dc8bec177b89adcf25e1dc3e143de719c55371e412cdb488af6666a Dec 13 01:58:27.896348 env[1215]: time="2024-12-13T01:58:27.896340196Z" level=warning msg="cleaning up after shim disconnected" id=0b5bd16f9dc8bec177b89adcf25e1dc3e143de719c55371e412cdb488af6666a namespace=k8s.io Dec 13 01:58:27.896348 env[1215]: time="2024-12-13T01:58:27.896349103Z" level=info msg="cleaning up dead shim" Dec 13 01:58:27.902436 env[1215]: time="2024-12-13T01:58:27.902381504Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:58:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2940 runtime=io.containerd.runc.v2\n" Dec 13 01:58:27.949657 env[1215]: time="2024-12-13T01:58:27.949540231Z" level=info msg="StopContainer for \"0b5bd16f9dc8bec177b89adcf25e1dc3e143de719c55371e412cdb488af6666a\" returns successfully" Dec 13 01:58:27.950180 env[1215]: time="2024-12-13T01:58:27.950160608Z" level=info msg="StopPodSandbox for \"f16cf1bfe1ecd2f6ce9fa63018632a7b28436537ee132234afdbcd7bfeb48852\"" Dec 13 01:58:27.950225 env[1215]: time="2024-12-13T01:58:27.950211804Z" level=info msg="Container to stop \"714567f9d6627484b725f72f11cfa24a765c629a1df59541d2d26259d08504d8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:58:27.950256 env[1215]: time="2024-12-13T01:58:27.950224658Z" level=info msg="Container to stop \"db9e7b5ced64919f4e0e3de19665b4716c44c66798e3e291b9725eae7cd24419\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:58:27.950256 env[1215]: time="2024-12-13T01:58:27.950233134Z" level=info msg="Container to stop \"0b5bd16f9dc8bec177b89adcf25e1dc3e143de719c55371e412cdb488af6666a\" must be in running or unknown state, 
current state \"CONTAINER_EXITED\"" Dec 13 01:58:27.950256 env[1215]: time="2024-12-13T01:58:27.950242642Z" level=info msg="Container to stop \"8f7f9c4f99345d19fbe0afc166eefb03273d5834f1f31048d7d92dbfd8610c09\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:58:27.950419 env[1215]: time="2024-12-13T01:58:27.950251218Z" level=info msg="Container to stop \"7f529dfa0c8896851e273300fb565bb6113c0f8d11c52016aff2b42d8383b7cd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:58:27.952413 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f16cf1bfe1ecd2f6ce9fa63018632a7b28436537ee132234afdbcd7bfeb48852-shm.mount: Deactivated successfully. Dec 13 01:58:27.954589 systemd[1]: cri-containerd-f16cf1bfe1ecd2f6ce9fa63018632a7b28436537ee132234afdbcd7bfeb48852.scope: Deactivated successfully. Dec 13 01:58:27.967357 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f16cf1bfe1ecd2f6ce9fa63018632a7b28436537ee132234afdbcd7bfeb48852-rootfs.mount: Deactivated successfully. 
Dec 13 01:58:27.973907 env[1215]: time="2024-12-13T01:58:27.973853580Z" level=info msg="shim disconnected" id=f16cf1bfe1ecd2f6ce9fa63018632a7b28436537ee132234afdbcd7bfeb48852 Dec 13 01:58:27.974040 env[1215]: time="2024-12-13T01:58:27.973906490Z" level=warning msg="cleaning up after shim disconnected" id=f16cf1bfe1ecd2f6ce9fa63018632a7b28436537ee132234afdbcd7bfeb48852 namespace=k8s.io Dec 13 01:58:27.974040 env[1215]: time="2024-12-13T01:58:27.973924224Z" level=info msg="cleaning up dead shim" Dec 13 01:58:27.979723 env[1215]: time="2024-12-13T01:58:27.979667821Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:58:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2971 runtime=io.containerd.runc.v2\n" Dec 13 01:58:27.980029 env[1215]: time="2024-12-13T01:58:27.980005576Z" level=info msg="TearDown network for sandbox \"f16cf1bfe1ecd2f6ce9fa63018632a7b28436537ee132234afdbcd7bfeb48852\" successfully" Dec 13 01:58:27.980075 env[1215]: time="2024-12-13T01:58:27.980031695Z" level=info msg="StopPodSandbox for \"f16cf1bfe1ecd2f6ce9fa63018632a7b28436537ee132234afdbcd7bfeb48852\" returns successfully" Dec 13 01:58:28.112435 kubelet[1414]: I1213 01:58:28.112372 1414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07-cilium-config-path\") pod \"5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07\" (UID: \"5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07\") " Dec 13 01:58:28.112435 kubelet[1414]: I1213 01:58:28.112414 1414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07-bpf-maps\") pod \"5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07\" (UID: \"5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07\") " Dec 13 01:58:28.112435 kubelet[1414]: I1213 01:58:28.112432 1414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" 
(UniqueName: \"kubernetes.io/host-path/5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07-host-proc-sys-kernel\") pod \"5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07\" (UID: \"5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07\") " Dec 13 01:58:28.112435 kubelet[1414]: I1213 01:58:28.112445 1414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07-xtables-lock\") pod \"5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07\" (UID: \"5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07\") " Dec 13 01:58:28.112752 kubelet[1414]: I1213 01:58:28.112457 1414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07-etc-cni-netd\") pod \"5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07\" (UID: \"5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07\") " Dec 13 01:58:28.112752 kubelet[1414]: I1213 01:58:28.112471 1414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07-hubble-tls\") pod \"5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07\" (UID: \"5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07\") " Dec 13 01:58:28.112752 kubelet[1414]: I1213 01:58:28.112494 1414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07-lib-modules\") pod \"5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07\" (UID: \"5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07\") " Dec 13 01:58:28.112752 kubelet[1414]: I1213 01:58:28.112548 1414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07" (UID: "5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:58:28.112752 kubelet[1414]: I1213 01:58:28.112548 1414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07" (UID: "5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:58:28.112910 kubelet[1414]: I1213 01:58:28.112586 1414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07" (UID: "5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:58:28.112910 kubelet[1414]: I1213 01:58:28.112828 1414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07-cni-path\") pod \"5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07\" (UID: \"5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07\") " Dec 13 01:58:28.112910 kubelet[1414]: I1213 01:58:28.112846 1414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07-clustermesh-secrets\") pod \"5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07\" (UID: \"5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07\") " Dec 13 01:58:28.112910 kubelet[1414]: I1213 01:58:28.112859 1414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07-host-proc-sys-net\") pod \"5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07\" (UID: \"5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07\") " Dec 13 01:58:28.112910 
kubelet[1414]: I1213 01:58:28.112869 1414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07-hostproc\") pod \"5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07\" (UID: \"5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07\") " Dec 13 01:58:28.112910 kubelet[1414]: I1213 01:58:28.112880 1414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07-cilium-cgroup\") pod \"5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07\" (UID: \"5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07\") " Dec 13 01:58:28.113141 kubelet[1414]: I1213 01:58:28.112894 1414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmhl2\" (UniqueName: \"kubernetes.io/projected/5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07-kube-api-access-zmhl2\") pod \"5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07\" (UID: \"5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07\") " Dec 13 01:58:28.113141 kubelet[1414]: I1213 01:58:28.112905 1414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07-cilium-run\") pod \"5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07\" (UID: \"5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07\") " Dec 13 01:58:28.113141 kubelet[1414]: I1213 01:58:28.112930 1414 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07-bpf-maps\") on node \"10.0.0.123\" DevicePath \"\"" Dec 13 01:58:28.113141 kubelet[1414]: I1213 01:58:28.112940 1414 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07-host-proc-sys-kernel\") on node \"10.0.0.123\" DevicePath \"\"" Dec 13 01:58:28.113141 kubelet[1414]: I1213 01:58:28.112947 1414 
reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07-etc-cni-netd\") on node \"10.0.0.123\" DevicePath \"\"" Dec 13 01:58:28.113141 kubelet[1414]: I1213 01:58:28.112967 1414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07" (UID: "5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:58:28.113349 kubelet[1414]: I1213 01:58:28.113009 1414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07" (UID: "5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:58:28.113349 kubelet[1414]: I1213 01:58:28.113005 1414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07" (UID: "5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:58:28.113349 kubelet[1414]: I1213 01:58:28.113029 1414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07-cni-path" (OuterVolumeSpecName: "cni-path") pod "5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07" (UID: "5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:58:28.113349 kubelet[1414]: I1213 01:58:28.113066 1414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07-hostproc" (OuterVolumeSpecName: "hostproc") pod "5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07" (UID: "5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:58:28.113349 kubelet[1414]: I1213 01:58:28.113131 1414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07" (UID: "5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:58:28.113535 kubelet[1414]: I1213 01:58:28.113160 1414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07" (UID: "5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:58:28.114911 kubelet[1414]: I1213 01:58:28.114889 1414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07" (UID: "5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 01:58:28.116206 kubelet[1414]: I1213 01:58:28.115222 1414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07" (UID: "5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 01:58:28.116206 kubelet[1414]: I1213 01:58:28.115405 1414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07" (UID: "5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 01:58:28.116797 systemd[1]: var-lib-kubelet-pods-5dc25ab5\x2ded19\x2d4c39\x2da96a\x2dc64c0bbd1e07-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 01:58:28.116884 systemd[1]: var-lib-kubelet-pods-5dc25ab5\x2ded19\x2d4c39\x2da96a\x2dc64c0bbd1e07-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 01:58:28.118067 kubelet[1414]: I1213 01:58:28.118044 1414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07-kube-api-access-zmhl2" (OuterVolumeSpecName: "kube-api-access-zmhl2") pod "5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07" (UID: "5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07"). InnerVolumeSpecName "kube-api-access-zmhl2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 01:58:28.119116 systemd[1]: var-lib-kubelet-pods-5dc25ab5\x2ded19\x2d4c39\x2da96a\x2dc64c0bbd1e07-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzmhl2.mount: Deactivated successfully. 
Dec 13 01:58:28.204230 kubelet[1414]: I1213 01:58:28.204125 1414 scope.go:117] "RemoveContainer" containerID="0b5bd16f9dc8bec177b89adcf25e1dc3e143de719c55371e412cdb488af6666a" Dec 13 01:58:28.206678 env[1215]: time="2024-12-13T01:58:28.206632787Z" level=info msg="RemoveContainer for \"0b5bd16f9dc8bec177b89adcf25e1dc3e143de719c55371e412cdb488af6666a\"" Dec 13 01:58:28.207880 systemd[1]: Removed slice kubepods-burstable-pod5dc25ab5_ed19_4c39_a96a_c64c0bbd1e07.slice. Dec 13 01:58:28.208003 systemd[1]: kubepods-burstable-pod5dc25ab5_ed19_4c39_a96a_c64c0bbd1e07.slice: Consumed 6.660s CPU time. Dec 13 01:58:28.213214 kubelet[1414]: I1213 01:58:28.213186 1414 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07-xtables-lock\") on node \"10.0.0.123\" DevicePath \"\"" Dec 13 01:58:28.213214 kubelet[1414]: I1213 01:58:28.213208 1414 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07-hubble-tls\") on node \"10.0.0.123\" DevicePath \"\"" Dec 13 01:58:28.213214 kubelet[1414]: I1213 01:58:28.213215 1414 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07-hostproc\") on node \"10.0.0.123\" DevicePath \"\"" Dec 13 01:58:28.213214 kubelet[1414]: I1213 01:58:28.213221 1414 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07-lib-modules\") on node \"10.0.0.123\" DevicePath \"\"" Dec 13 01:58:28.213214 kubelet[1414]: I1213 01:58:28.213227 1414 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07-cni-path\") on node \"10.0.0.123\" DevicePath \"\"" Dec 13 01:58:28.213214 kubelet[1414]: I1213 01:58:28.213234 1414 
reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07-clustermesh-secrets\") on node \"10.0.0.123\" DevicePath \"\"" Dec 13 01:58:28.213531 kubelet[1414]: I1213 01:58:28.213241 1414 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07-host-proc-sys-net\") on node \"10.0.0.123\" DevicePath \"\"" Dec 13 01:58:28.213531 kubelet[1414]: I1213 01:58:28.213247 1414 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07-cilium-cgroup\") on node \"10.0.0.123\" DevicePath \"\"" Dec 13 01:58:28.213531 kubelet[1414]: I1213 01:58:28.213253 1414 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-zmhl2\" (UniqueName: \"kubernetes.io/projected/5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07-kube-api-access-zmhl2\") on node \"10.0.0.123\" DevicePath \"\"" Dec 13 01:58:28.213531 kubelet[1414]: I1213 01:58:28.213260 1414 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07-cilium-run\") on node \"10.0.0.123\" DevicePath \"\"" Dec 13 01:58:28.213531 kubelet[1414]: I1213 01:58:28.213266 1414 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07-cilium-config-path\") on node \"10.0.0.123\" DevicePath \"\"" Dec 13 01:58:28.214920 env[1215]: time="2024-12-13T01:58:28.214882524Z" level=info msg="RemoveContainer for \"0b5bd16f9dc8bec177b89adcf25e1dc3e143de719c55371e412cdb488af6666a\" returns successfully" Dec 13 01:58:28.215202 kubelet[1414]: I1213 01:58:28.215176 1414 scope.go:117] "RemoveContainer" containerID="7f529dfa0c8896851e273300fb565bb6113c0f8d11c52016aff2b42d8383b7cd" Dec 13 01:58:28.216222 
env[1215]: time="2024-12-13T01:58:28.216191294Z" level=info msg="RemoveContainer for \"7f529dfa0c8896851e273300fb565bb6113c0f8d11c52016aff2b42d8383b7cd\"" Dec 13 01:58:28.218924 env[1215]: time="2024-12-13T01:58:28.218902430Z" level=info msg="RemoveContainer for \"7f529dfa0c8896851e273300fb565bb6113c0f8d11c52016aff2b42d8383b7cd\" returns successfully" Dec 13 01:58:28.219059 kubelet[1414]: I1213 01:58:28.219034 1414 scope.go:117] "RemoveContainer" containerID="db9e7b5ced64919f4e0e3de19665b4716c44c66798e3e291b9725eae7cd24419" Dec 13 01:58:28.219858 env[1215]: time="2024-12-13T01:58:28.219837307Z" level=info msg="RemoveContainer for \"db9e7b5ced64919f4e0e3de19665b4716c44c66798e3e291b9725eae7cd24419\"" Dec 13 01:58:28.222220 env[1215]: time="2024-12-13T01:58:28.222194278Z" level=info msg="RemoveContainer for \"db9e7b5ced64919f4e0e3de19665b4716c44c66798e3e291b9725eae7cd24419\" returns successfully" Dec 13 01:58:28.222316 kubelet[1414]: I1213 01:58:28.222301 1414 scope.go:117] "RemoveContainer" containerID="714567f9d6627484b725f72f11cfa24a765c629a1df59541d2d26259d08504d8" Dec 13 01:58:28.223100 env[1215]: time="2024-12-13T01:58:28.223076376Z" level=info msg="RemoveContainer for \"714567f9d6627484b725f72f11cfa24a765c629a1df59541d2d26259d08504d8\"" Dec 13 01:58:28.225421 env[1215]: time="2024-12-13T01:58:28.225389855Z" level=info msg="RemoveContainer for \"714567f9d6627484b725f72f11cfa24a765c629a1df59541d2d26259d08504d8\" returns successfully" Dec 13 01:58:28.225552 kubelet[1414]: I1213 01:58:28.225528 1414 scope.go:117] "RemoveContainer" containerID="8f7f9c4f99345d19fbe0afc166eefb03273d5834f1f31048d7d92dbfd8610c09" Dec 13 01:58:28.226350 env[1215]: time="2024-12-13T01:58:28.226325343Z" level=info msg="RemoveContainer for \"8f7f9c4f99345d19fbe0afc166eefb03273d5834f1f31048d7d92dbfd8610c09\"" Dec 13 01:58:28.229014 env[1215]: time="2024-12-13T01:58:28.228991014Z" level=info msg="RemoveContainer for \"8f7f9c4f99345d19fbe0afc166eefb03273d5834f1f31048d7d92dbfd8610c09\" returns 
successfully" Dec 13 01:58:28.229148 kubelet[1414]: I1213 01:58:28.229134 1414 scope.go:117] "RemoveContainer" containerID="0b5bd16f9dc8bec177b89adcf25e1dc3e143de719c55371e412cdb488af6666a" Dec 13 01:58:28.229411 env[1215]: time="2024-12-13T01:58:28.229346493Z" level=error msg="ContainerStatus for \"0b5bd16f9dc8bec177b89adcf25e1dc3e143de719c55371e412cdb488af6666a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0b5bd16f9dc8bec177b89adcf25e1dc3e143de719c55371e412cdb488af6666a\": not found" Dec 13 01:58:28.229538 kubelet[1414]: E1213 01:58:28.229515 1414 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0b5bd16f9dc8bec177b89adcf25e1dc3e143de719c55371e412cdb488af6666a\": not found" containerID="0b5bd16f9dc8bec177b89adcf25e1dc3e143de719c55371e412cdb488af6666a" Dec 13 01:58:28.229616 kubelet[1414]: I1213 01:58:28.229556 1414 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0b5bd16f9dc8bec177b89adcf25e1dc3e143de719c55371e412cdb488af6666a"} err="failed to get container status \"0b5bd16f9dc8bec177b89adcf25e1dc3e143de719c55371e412cdb488af6666a\": rpc error: code = NotFound desc = an error occurred when try to find container \"0b5bd16f9dc8bec177b89adcf25e1dc3e143de719c55371e412cdb488af6666a\": not found" Dec 13 01:58:28.229616 kubelet[1414]: I1213 01:58:28.229615 1414 scope.go:117] "RemoveContainer" containerID="7f529dfa0c8896851e273300fb565bb6113c0f8d11c52016aff2b42d8383b7cd" Dec 13 01:58:28.229807 env[1215]: time="2024-12-13T01:58:28.229764027Z" level=error msg="ContainerStatus for \"7f529dfa0c8896851e273300fb565bb6113c0f8d11c52016aff2b42d8383b7cd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7f529dfa0c8896851e273300fb565bb6113c0f8d11c52016aff2b42d8383b7cd\": not found" Dec 13 01:58:28.229908 kubelet[1414]: E1213 01:58:28.229887 
1414 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7f529dfa0c8896851e273300fb565bb6113c0f8d11c52016aff2b42d8383b7cd\": not found" containerID="7f529dfa0c8896851e273300fb565bb6113c0f8d11c52016aff2b42d8383b7cd" Dec 13 01:58:28.229956 kubelet[1414]: I1213 01:58:28.229914 1414 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7f529dfa0c8896851e273300fb565bb6113c0f8d11c52016aff2b42d8383b7cd"} err="failed to get container status \"7f529dfa0c8896851e273300fb565bb6113c0f8d11c52016aff2b42d8383b7cd\": rpc error: code = NotFound desc = an error occurred when try to find container \"7f529dfa0c8896851e273300fb565bb6113c0f8d11c52016aff2b42d8383b7cd\": not found" Dec 13 01:58:28.229956 kubelet[1414]: I1213 01:58:28.229931 1414 scope.go:117] "RemoveContainer" containerID="db9e7b5ced64919f4e0e3de19665b4716c44c66798e3e291b9725eae7cd24419" Dec 13 01:58:28.230122 env[1215]: time="2024-12-13T01:58:28.230081094Z" level=error msg="ContainerStatus for \"db9e7b5ced64919f4e0e3de19665b4716c44c66798e3e291b9725eae7cd24419\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"db9e7b5ced64919f4e0e3de19665b4716c44c66798e3e291b9725eae7cd24419\": not found" Dec 13 01:58:28.230237 kubelet[1414]: E1213 01:58:28.230216 1414 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"db9e7b5ced64919f4e0e3de19665b4716c44c66798e3e291b9725eae7cd24419\": not found" containerID="db9e7b5ced64919f4e0e3de19665b4716c44c66798e3e291b9725eae7cd24419" Dec 13 01:58:28.230284 kubelet[1414]: I1213 01:58:28.230249 1414 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"db9e7b5ced64919f4e0e3de19665b4716c44c66798e3e291b9725eae7cd24419"} err="failed to get container status 
\"db9e7b5ced64919f4e0e3de19665b4716c44c66798e3e291b9725eae7cd24419\": rpc error: code = NotFound desc = an error occurred when try to find container \"db9e7b5ced64919f4e0e3de19665b4716c44c66798e3e291b9725eae7cd24419\": not found" Dec 13 01:58:28.230284 kubelet[1414]: I1213 01:58:28.230265 1414 scope.go:117] "RemoveContainer" containerID="714567f9d6627484b725f72f11cfa24a765c629a1df59541d2d26259d08504d8" Dec 13 01:58:28.230434 env[1215]: time="2024-12-13T01:58:28.230396106Z" level=error msg="ContainerStatus for \"714567f9d6627484b725f72f11cfa24a765c629a1df59541d2d26259d08504d8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"714567f9d6627484b725f72f11cfa24a765c629a1df59541d2d26259d08504d8\": not found" Dec 13 01:58:28.230527 kubelet[1414]: E1213 01:58:28.230510 1414 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"714567f9d6627484b725f72f11cfa24a765c629a1df59541d2d26259d08504d8\": not found" containerID="714567f9d6627484b725f72f11cfa24a765c629a1df59541d2d26259d08504d8" Dec 13 01:58:28.230557 kubelet[1414]: I1213 01:58:28.230533 1414 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"714567f9d6627484b725f72f11cfa24a765c629a1df59541d2d26259d08504d8"} err="failed to get container status \"714567f9d6627484b725f72f11cfa24a765c629a1df59541d2d26259d08504d8\": rpc error: code = NotFound desc = an error occurred when try to find container \"714567f9d6627484b725f72f11cfa24a765c629a1df59541d2d26259d08504d8\": not found" Dec 13 01:58:28.230557 kubelet[1414]: I1213 01:58:28.230548 1414 scope.go:117] "RemoveContainer" containerID="8f7f9c4f99345d19fbe0afc166eefb03273d5834f1f31048d7d92dbfd8610c09" Dec 13 01:58:28.230735 env[1215]: time="2024-12-13T01:58:28.230698754Z" level=error msg="ContainerStatus for \"8f7f9c4f99345d19fbe0afc166eefb03273d5834f1f31048d7d92dbfd8610c09\" failed" error="rpc 
error: code = NotFound desc = an error occurred when try to find container \"8f7f9c4f99345d19fbe0afc166eefb03273d5834f1f31048d7d92dbfd8610c09\": not found" Dec 13 01:58:28.230805 kubelet[1414]: E1213 01:58:28.230788 1414 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8f7f9c4f99345d19fbe0afc166eefb03273d5834f1f31048d7d92dbfd8610c09\": not found" containerID="8f7f9c4f99345d19fbe0afc166eefb03273d5834f1f31048d7d92dbfd8610c09" Dec 13 01:58:28.230867 kubelet[1414]: I1213 01:58:28.230807 1414 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8f7f9c4f99345d19fbe0afc166eefb03273d5834f1f31048d7d92dbfd8610c09"} err="failed to get container status \"8f7f9c4f99345d19fbe0afc166eefb03273d5834f1f31048d7d92dbfd8610c09\": rpc error: code = NotFound desc = an error occurred when try to find container \"8f7f9c4f99345d19fbe0afc166eefb03273d5834f1f31048d7d92dbfd8610c09\": not found" Dec 13 01:58:28.668078 kubelet[1414]: E1213 01:58:28.668023 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:58:29.081935 kubelet[1414]: I1213 01:58:29.081801 1414 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07" path="/var/lib/kubelet/pods/5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07/volumes" Dec 13 01:58:29.234242 kubelet[1414]: E1213 01:58:29.234199 1414 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07" containerName="apply-sysctl-overwrites" Dec 13 01:58:29.234242 kubelet[1414]: E1213 01:58:29.234217 1414 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07" containerName="clean-cilium-state" Dec 13 01:58:29.234242 kubelet[1414]: E1213 01:58:29.234224 1414 cpu_manager.go:395] "RemoveStaleState: removing container" 
podUID="5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07" containerName="mount-cgroup" Dec 13 01:58:29.234242 kubelet[1414]: E1213 01:58:29.234228 1414 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07" containerName="mount-bpf-fs" Dec 13 01:58:29.234242 kubelet[1414]: E1213 01:58:29.234233 1414 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07" containerName="cilium-agent" Dec 13 01:58:29.234242 kubelet[1414]: I1213 01:58:29.234251 1414 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dc25ab5-ed19-4c39-a96a-c64c0bbd1e07" containerName="cilium-agent" Dec 13 01:58:29.238780 systemd[1]: Created slice kubepods-burstable-pod29324129_6b15_4df3_9242_437f058f1ed8.slice. Dec 13 01:58:29.248316 systemd[1]: Created slice kubepods-besteffort-podb59eba9c_8601_4f24_8763_b17661750d93.slice. Dec 13 01:58:29.393584 kubelet[1414]: E1213 01:58:29.393408 1414 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-6pprl lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-qrd9h" podUID="29324129-6b15-4df3-9242-437f058f1ed8" Dec 13 01:58:29.419303 kubelet[1414]: I1213 01:58:29.419224 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/29324129-6b15-4df3-9242-437f058f1ed8-hubble-tls\") pod \"cilium-qrd9h\" (UID: \"29324129-6b15-4df3-9242-437f058f1ed8\") " pod="kube-system/cilium-qrd9h" Dec 13 01:58:29.419303 kubelet[1414]: I1213 01:58:29.419274 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/29324129-6b15-4df3-9242-437f058f1ed8-cilium-cgroup\") pod \"cilium-qrd9h\" (UID: \"29324129-6b15-4df3-9242-437f058f1ed8\") " pod="kube-system/cilium-qrd9h" Dec 13 01:58:29.419303 kubelet[1414]: I1213 01:58:29.419294 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/29324129-6b15-4df3-9242-437f058f1ed8-lib-modules\") pod \"cilium-qrd9h\" (UID: \"29324129-6b15-4df3-9242-437f058f1ed8\") " pod="kube-system/cilium-qrd9h" Dec 13 01:58:29.419303 kubelet[1414]: I1213 01:58:29.419313 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/29324129-6b15-4df3-9242-437f058f1ed8-host-proc-sys-kernel\") pod \"cilium-qrd9h\" (UID: \"29324129-6b15-4df3-9242-437f058f1ed8\") " pod="kube-system/cilium-qrd9h" Dec 13 01:58:29.419303 kubelet[1414]: I1213 01:58:29.419327 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/29324129-6b15-4df3-9242-437f058f1ed8-cilium-config-path\") pod \"cilium-qrd9h\" (UID: \"29324129-6b15-4df3-9242-437f058f1ed8\") " pod="kube-system/cilium-qrd9h" Dec 13 01:58:29.419682 kubelet[1414]: I1213 01:58:29.419340 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/29324129-6b15-4df3-9242-437f058f1ed8-host-proc-sys-net\") pod \"cilium-qrd9h\" (UID: \"29324129-6b15-4df3-9242-437f058f1ed8\") " pod="kube-system/cilium-qrd9h" Dec 13 01:58:29.419682 kubelet[1414]: I1213 01:58:29.419358 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/29324129-6b15-4df3-9242-437f058f1ed8-cilium-run\") pod \"cilium-qrd9h\" (UID: 
\"29324129-6b15-4df3-9242-437f058f1ed8\") " pod="kube-system/cilium-qrd9h" Dec 13 01:58:29.419682 kubelet[1414]: I1213 01:58:29.419385 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/29324129-6b15-4df3-9242-437f058f1ed8-cni-path\") pod \"cilium-qrd9h\" (UID: \"29324129-6b15-4df3-9242-437f058f1ed8\") " pod="kube-system/cilium-qrd9h" Dec 13 01:58:29.419682 kubelet[1414]: I1213 01:58:29.419400 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/29324129-6b15-4df3-9242-437f058f1ed8-xtables-lock\") pod \"cilium-qrd9h\" (UID: \"29324129-6b15-4df3-9242-437f058f1ed8\") " pod="kube-system/cilium-qrd9h" Dec 13 01:58:29.419682 kubelet[1414]: I1213 01:58:29.419417 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b59eba9c-8601-4f24-8763-b17661750d93-cilium-config-path\") pod \"cilium-operator-5d85765b45-r25xm\" (UID: \"b59eba9c-8601-4f24-8763-b17661750d93\") " pod="kube-system/cilium-operator-5d85765b45-r25xm" Dec 13 01:58:29.419852 kubelet[1414]: I1213 01:58:29.419432 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/29324129-6b15-4df3-9242-437f058f1ed8-hostproc\") pod \"cilium-qrd9h\" (UID: \"29324129-6b15-4df3-9242-437f058f1ed8\") " pod="kube-system/cilium-qrd9h" Dec 13 01:58:29.419852 kubelet[1414]: I1213 01:58:29.419444 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/29324129-6b15-4df3-9242-437f058f1ed8-clustermesh-secrets\") pod \"cilium-qrd9h\" (UID: \"29324129-6b15-4df3-9242-437f058f1ed8\") " pod="kube-system/cilium-qrd9h" Dec 13 01:58:29.419852 
kubelet[1414]: I1213 01:58:29.419468 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cmxk\" (UniqueName: \"kubernetes.io/projected/b59eba9c-8601-4f24-8763-b17661750d93-kube-api-access-5cmxk\") pod \"cilium-operator-5d85765b45-r25xm\" (UID: \"b59eba9c-8601-4f24-8763-b17661750d93\") " pod="kube-system/cilium-operator-5d85765b45-r25xm" Dec 13 01:58:29.419852 kubelet[1414]: I1213 01:58:29.419510 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/29324129-6b15-4df3-9242-437f058f1ed8-bpf-maps\") pod \"cilium-qrd9h\" (UID: \"29324129-6b15-4df3-9242-437f058f1ed8\") " pod="kube-system/cilium-qrd9h" Dec 13 01:58:29.419852 kubelet[1414]: I1213 01:58:29.419527 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/29324129-6b15-4df3-9242-437f058f1ed8-etc-cni-netd\") pod \"cilium-qrd9h\" (UID: \"29324129-6b15-4df3-9242-437f058f1ed8\") " pod="kube-system/cilium-qrd9h" Dec 13 01:58:29.420063 kubelet[1414]: I1213 01:58:29.419544 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/29324129-6b15-4df3-9242-437f058f1ed8-cilium-ipsec-secrets\") pod \"cilium-qrd9h\" (UID: \"29324129-6b15-4df3-9242-437f058f1ed8\") " pod="kube-system/cilium-qrd9h" Dec 13 01:58:29.420063 kubelet[1414]: I1213 01:58:29.419557 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pprl\" (UniqueName: \"kubernetes.io/projected/29324129-6b15-4df3-9242-437f058f1ed8-kube-api-access-6pprl\") pod \"cilium-qrd9h\" (UID: \"29324129-6b15-4df3-9242-437f058f1ed8\") " pod="kube-system/cilium-qrd9h" Dec 13 01:58:29.551080 kubelet[1414]: E1213 01:58:29.551028 1414 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:58:29.551635 env[1215]: time="2024-12-13T01:58:29.551595654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-r25xm,Uid:b59eba9c-8601-4f24-8763-b17661750d93,Namespace:kube-system,Attempt:0,}" Dec 13 01:58:29.668672 kubelet[1414]: E1213 01:58:29.668605 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:58:29.818016 env[1215]: time="2024-12-13T01:58:29.817941542Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:58:29.818155 env[1215]: time="2024-12-13T01:58:29.817998909Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:58:29.818155 env[1215]: time="2024-12-13T01:58:29.818019247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:58:29.818304 env[1215]: time="2024-12-13T01:58:29.818270299Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6f70ad9a0cbb3f2ba6f56b21eb611c851fd8ac524ccf876c964e227a05fdb5c3 pid=2998 runtime=io.containerd.runc.v2 Dec 13 01:58:29.831819 systemd[1]: Started cri-containerd-6f70ad9a0cbb3f2ba6f56b21eb611c851fd8ac524ccf876c964e227a05fdb5c3.scope. 
Dec 13 01:58:29.863927 env[1215]: time="2024-12-13T01:58:29.863884500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-r25xm,Uid:b59eba9c-8601-4f24-8763-b17661750d93,Namespace:kube-system,Attempt:0,} returns sandbox id \"6f70ad9a0cbb3f2ba6f56b21eb611c851fd8ac524ccf876c964e227a05fdb5c3\"" Dec 13 01:58:29.864549 kubelet[1414]: E1213 01:58:29.864529 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:58:29.865373 env[1215]: time="2024-12-13T01:58:29.865345286Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 01:58:30.040647 kubelet[1414]: E1213 01:58:30.040496 1414 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 01:58:30.326904 kubelet[1414]: I1213 01:58:30.326776 1414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/29324129-6b15-4df3-9242-437f058f1ed8-cni-path\") pod \"29324129-6b15-4df3-9242-437f058f1ed8\" (UID: \"29324129-6b15-4df3-9242-437f058f1ed8\") " Dec 13 01:58:30.326904 kubelet[1414]: I1213 01:58:30.326813 1414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/29324129-6b15-4df3-9242-437f058f1ed8-etc-cni-netd\") pod \"29324129-6b15-4df3-9242-437f058f1ed8\" (UID: \"29324129-6b15-4df3-9242-437f058f1ed8\") " Dec 13 01:58:30.326904 kubelet[1414]: I1213 01:58:30.326835 1414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pprl\" (UniqueName: \"kubernetes.io/projected/29324129-6b15-4df3-9242-437f058f1ed8-kube-api-access-6pprl\") pod 
\"29324129-6b15-4df3-9242-437f058f1ed8\" (UID: \"29324129-6b15-4df3-9242-437f058f1ed8\") " Dec 13 01:58:30.326904 kubelet[1414]: I1213 01:58:30.326850 1414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29324129-6b15-4df3-9242-437f058f1ed8-cni-path" (OuterVolumeSpecName: "cni-path") pod "29324129-6b15-4df3-9242-437f058f1ed8" (UID: "29324129-6b15-4df3-9242-437f058f1ed8"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:58:30.326904 kubelet[1414]: I1213 01:58:30.326871 1414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/29324129-6b15-4df3-9242-437f058f1ed8-host-proc-sys-net\") pod \"29324129-6b15-4df3-9242-437f058f1ed8\" (UID: \"29324129-6b15-4df3-9242-437f058f1ed8\") " Dec 13 01:58:30.326904 kubelet[1414]: I1213 01:58:30.326882 1414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29324129-6b15-4df3-9242-437f058f1ed8-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "29324129-6b15-4df3-9242-437f058f1ed8" (UID: "29324129-6b15-4df3-9242-437f058f1ed8"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:58:30.327184 kubelet[1414]: I1213 01:58:30.326886 1414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/29324129-6b15-4df3-9242-437f058f1ed8-lib-modules\") pod \"29324129-6b15-4df3-9242-437f058f1ed8\" (UID: \"29324129-6b15-4df3-9242-437f058f1ed8\") " Dec 13 01:58:30.327184 kubelet[1414]: I1213 01:58:30.326906 1414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/29324129-6b15-4df3-9242-437f058f1ed8-hostproc\") pod \"29324129-6b15-4df3-9242-437f058f1ed8\" (UID: \"29324129-6b15-4df3-9242-437f058f1ed8\") " Dec 13 01:58:30.327184 kubelet[1414]: I1213 01:58:30.326922 1414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/29324129-6b15-4df3-9242-437f058f1ed8-clustermesh-secrets\") pod \"29324129-6b15-4df3-9242-437f058f1ed8\" (UID: \"29324129-6b15-4df3-9242-437f058f1ed8\") " Dec 13 01:58:30.327184 kubelet[1414]: I1213 01:58:30.326940 1414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/29324129-6b15-4df3-9242-437f058f1ed8-cilium-ipsec-secrets\") pod \"29324129-6b15-4df3-9242-437f058f1ed8\" (UID: \"29324129-6b15-4df3-9242-437f058f1ed8\") " Dec 13 01:58:30.327184 kubelet[1414]: I1213 01:58:30.326955 1414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/29324129-6b15-4df3-9242-437f058f1ed8-cilium-cgroup\") pod \"29324129-6b15-4df3-9242-437f058f1ed8\" (UID: \"29324129-6b15-4df3-9242-437f058f1ed8\") " Dec 13 01:58:30.327184 kubelet[1414]: I1213 01:58:30.326967 1414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/29324129-6b15-4df3-9242-437f058f1ed8-host-proc-sys-kernel\") pod \"29324129-6b15-4df3-9242-437f058f1ed8\" (UID: \"29324129-6b15-4df3-9242-437f058f1ed8\") " Dec 13 01:58:30.327318 kubelet[1414]: I1213 01:58:30.326996 1414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/29324129-6b15-4df3-9242-437f058f1ed8-cilium-run\") pod \"29324129-6b15-4df3-9242-437f058f1ed8\" (UID: \"29324129-6b15-4df3-9242-437f058f1ed8\") " Dec 13 01:58:30.327318 kubelet[1414]: I1213 01:58:30.327009 1414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/29324129-6b15-4df3-9242-437f058f1ed8-xtables-lock\") pod \"29324129-6b15-4df3-9242-437f058f1ed8\" (UID: \"29324129-6b15-4df3-9242-437f058f1ed8\") " Dec 13 01:58:30.327318 kubelet[1414]: I1213 01:58:30.327021 1414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/29324129-6b15-4df3-9242-437f058f1ed8-bpf-maps\") pod \"29324129-6b15-4df3-9242-437f058f1ed8\" (UID: \"29324129-6b15-4df3-9242-437f058f1ed8\") " Dec 13 01:58:30.327318 kubelet[1414]: I1213 01:58:30.327036 1414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/29324129-6b15-4df3-9242-437f058f1ed8-hubble-tls\") pod \"29324129-6b15-4df3-9242-437f058f1ed8\" (UID: \"29324129-6b15-4df3-9242-437f058f1ed8\") " Dec 13 01:58:30.327318 kubelet[1414]: I1213 01:58:30.327051 1414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/29324129-6b15-4df3-9242-437f058f1ed8-cilium-config-path\") pod \"29324129-6b15-4df3-9242-437f058f1ed8\" (UID: \"29324129-6b15-4df3-9242-437f058f1ed8\") " Dec 13 01:58:30.327318 kubelet[1414]: I1213 01:58:30.327099 1414 
reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/29324129-6b15-4df3-9242-437f058f1ed8-cni-path\") on node \"10.0.0.123\" DevicePath \"\"" Dec 13 01:58:30.327318 kubelet[1414]: I1213 01:58:30.327110 1414 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/29324129-6b15-4df3-9242-437f058f1ed8-etc-cni-netd\") on node \"10.0.0.123\" DevicePath \"\"" Dec 13 01:58:30.327619 kubelet[1414]: I1213 01:58:30.327186 1414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29324129-6b15-4df3-9242-437f058f1ed8-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "29324129-6b15-4df3-9242-437f058f1ed8" (UID: "29324129-6b15-4df3-9242-437f058f1ed8"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:58:30.327619 kubelet[1414]: I1213 01:58:30.327246 1414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29324129-6b15-4df3-9242-437f058f1ed8-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "29324129-6b15-4df3-9242-437f058f1ed8" (UID: "29324129-6b15-4df3-9242-437f058f1ed8"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:58:30.327619 kubelet[1414]: I1213 01:58:30.327265 1414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29324129-6b15-4df3-9242-437f058f1ed8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "29324129-6b15-4df3-9242-437f058f1ed8" (UID: "29324129-6b15-4df3-9242-437f058f1ed8"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:58:30.327619 kubelet[1414]: I1213 01:58:30.327278 1414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29324129-6b15-4df3-9242-437f058f1ed8-hostproc" (OuterVolumeSpecName: "hostproc") pod "29324129-6b15-4df3-9242-437f058f1ed8" (UID: "29324129-6b15-4df3-9242-437f058f1ed8"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:58:30.327742 kubelet[1414]: I1213 01:58:30.327710 1414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29324129-6b15-4df3-9242-437f058f1ed8-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "29324129-6b15-4df3-9242-437f058f1ed8" (UID: "29324129-6b15-4df3-9242-437f058f1ed8"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:58:30.327792 kubelet[1414]: I1213 01:58:30.327759 1414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29324129-6b15-4df3-9242-437f058f1ed8-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "29324129-6b15-4df3-9242-437f058f1ed8" (UID: "29324129-6b15-4df3-9242-437f058f1ed8"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:58:30.327819 kubelet[1414]: I1213 01:58:30.327792 1414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29324129-6b15-4df3-9242-437f058f1ed8-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "29324129-6b15-4df3-9242-437f058f1ed8" (UID: "29324129-6b15-4df3-9242-437f058f1ed8"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:58:30.327937 kubelet[1414]: I1213 01:58:30.327899 1414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29324129-6b15-4df3-9242-437f058f1ed8-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "29324129-6b15-4df3-9242-437f058f1ed8" (UID: "29324129-6b15-4df3-9242-437f058f1ed8"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:58:30.328697 kubelet[1414]: I1213 01:58:30.328667 1414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29324129-6b15-4df3-9242-437f058f1ed8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "29324129-6b15-4df3-9242-437f058f1ed8" (UID: "29324129-6b15-4df3-9242-437f058f1ed8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 01:58:30.329228 kubelet[1414]: I1213 01:58:30.329201 1414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29324129-6b15-4df3-9242-437f058f1ed8-kube-api-access-6pprl" (OuterVolumeSpecName: "kube-api-access-6pprl") pod "29324129-6b15-4df3-9242-437f058f1ed8" (UID: "29324129-6b15-4df3-9242-437f058f1ed8"). InnerVolumeSpecName "kube-api-access-6pprl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 01:58:30.330604 kubelet[1414]: I1213 01:58:30.330562 1414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29324129-6b15-4df3-9242-437f058f1ed8-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "29324129-6b15-4df3-9242-437f058f1ed8" (UID: "29324129-6b15-4df3-9242-437f058f1ed8"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 01:58:30.330793 kubelet[1414]: I1213 01:58:30.330754 1414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29324129-6b15-4df3-9242-437f058f1ed8-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "29324129-6b15-4df3-9242-437f058f1ed8" (UID: "29324129-6b15-4df3-9242-437f058f1ed8"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 01:58:30.330933 kubelet[1414]: I1213 01:58:30.330823 1414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29324129-6b15-4df3-9242-437f058f1ed8-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "29324129-6b15-4df3-9242-437f058f1ed8" (UID: "29324129-6b15-4df3-9242-437f058f1ed8"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 01:58:30.428110 kubelet[1414]: I1213 01:58:30.428079 1414 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-6pprl\" (UniqueName: \"kubernetes.io/projected/29324129-6b15-4df3-9242-437f058f1ed8-kube-api-access-6pprl\") on node \"10.0.0.123\" DevicePath \"\"" Dec 13 01:58:30.428110 kubelet[1414]: I1213 01:58:30.428111 1414 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/29324129-6b15-4df3-9242-437f058f1ed8-host-proc-sys-net\") on node \"10.0.0.123\" DevicePath \"\"" Dec 13 01:58:30.428278 kubelet[1414]: I1213 01:58:30.428125 1414 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/29324129-6b15-4df3-9242-437f058f1ed8-lib-modules\") on node \"10.0.0.123\" DevicePath \"\"" Dec 13 01:58:30.428278 kubelet[1414]: I1213 01:58:30.428135 1414 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/29324129-6b15-4df3-9242-437f058f1ed8-cilium-cgroup\") on node 
\"10.0.0.123\" DevicePath \"\"" Dec 13 01:58:30.428278 kubelet[1414]: I1213 01:58:30.428144 1414 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/29324129-6b15-4df3-9242-437f058f1ed8-hostproc\") on node \"10.0.0.123\" DevicePath \"\"" Dec 13 01:58:30.428278 kubelet[1414]: I1213 01:58:30.428153 1414 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/29324129-6b15-4df3-9242-437f058f1ed8-clustermesh-secrets\") on node \"10.0.0.123\" DevicePath \"\"" Dec 13 01:58:30.428278 kubelet[1414]: I1213 01:58:30.428161 1414 reconciler_common.go:288] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/29324129-6b15-4df3-9242-437f058f1ed8-cilium-ipsec-secrets\") on node \"10.0.0.123\" DevicePath \"\"" Dec 13 01:58:30.428278 kubelet[1414]: I1213 01:58:30.428170 1414 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/29324129-6b15-4df3-9242-437f058f1ed8-xtables-lock\") on node \"10.0.0.123\" DevicePath \"\"" Dec 13 01:58:30.428278 kubelet[1414]: I1213 01:58:30.428179 1414 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/29324129-6b15-4df3-9242-437f058f1ed8-bpf-maps\") on node \"10.0.0.123\" DevicePath \"\"" Dec 13 01:58:30.428278 kubelet[1414]: I1213 01:58:30.428187 1414 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/29324129-6b15-4df3-9242-437f058f1ed8-hubble-tls\") on node \"10.0.0.123\" DevicePath \"\"" Dec 13 01:58:30.428497 kubelet[1414]: I1213 01:58:30.428198 1414 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/29324129-6b15-4df3-9242-437f058f1ed8-cilium-config-path\") on node \"10.0.0.123\" DevicePath \"\"" Dec 13 01:58:30.428497 kubelet[1414]: I1213 01:58:30.428208 1414 
reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/29324129-6b15-4df3-9242-437f058f1ed8-host-proc-sys-kernel\") on node \"10.0.0.123\" DevicePath \"\"" Dec 13 01:58:30.428497 kubelet[1414]: I1213 01:58:30.428217 1414 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/29324129-6b15-4df3-9242-437f058f1ed8-cilium-run\") on node \"10.0.0.123\" DevicePath \"\"" Dec 13 01:58:30.525778 systemd[1]: var-lib-kubelet-pods-29324129\x2d6b15\x2d4df3\x2d9242\x2d437f058f1ed8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6pprl.mount: Deactivated successfully. Dec 13 01:58:30.525871 systemd[1]: var-lib-kubelet-pods-29324129\x2d6b15\x2d4df3\x2d9242\x2d437f058f1ed8-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 01:58:30.525923 systemd[1]: var-lib-kubelet-pods-29324129\x2d6b15\x2d4df3\x2d9242\x2d437f058f1ed8-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Dec 13 01:58:30.525986 systemd[1]: var-lib-kubelet-pods-29324129\x2d6b15\x2d4df3\x2d9242\x2d437f058f1ed8-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 01:58:30.669604 kubelet[1414]: E1213 01:58:30.669550 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:58:31.084389 systemd[1]: Removed slice kubepods-burstable-pod29324129_6b15_4df3_9242_437f058f1ed8.slice. 
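The `var-lib-kubelet-pods-…\x2d…` mount units deactivated above use systemd's unit-name escaping: the leading `/` is stripped, remaining `/` separators become `-`, and other special bytes (including literal `-` inside path components, such as the dashes in the pod UID) become `\xNN` hex escapes. A simplified stand-in for `systemd-escape --path` (this omits some edge cases, e.g. leading dots, so treat it as a sketch):

```python
def systemd_escape_path(path):
    """Simplified systemd path escaping, as seen in the mount unit names above:
    strip the leading '/', map '/' to '-', and hex-escape any byte outside
    [A-Za-z0-9_.] as \\xNN (so '-' inside a pod UID becomes '\\x2d')."""
    out = []
    for ch in path.strip("/"):
        if ch == "/":
            out.append("-")
        elif ch.isalnum() or ch in "_.":
            out.append(ch)
        else:
            out.append("\\x%02x" % ord(ch))
    return "".join(out)

# A dash inside the pod UID is escaped to \x2d, matching the units above:
unit = systemd_escape_path("var/lib/kubelet/pods/29324129-6b15")
```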
Dec 13 01:58:31.249371 kubelet[1414]: W1213 01:58:31.249335 1414 reflector.go:561] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:10.0.0.123" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.0.0.123' and this object Dec 13 01:58:31.249542 kubelet[1414]: E1213 01:58:31.249396 1414 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:10.0.0.123\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '10.0.0.123' and this object" logger="UnhandledError" Dec 13 01:58:31.250005 kubelet[1414]: W1213 01:58:31.249942 1414 reflector.go:561] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:10.0.0.123" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.0.0.123' and this object Dec 13 01:58:31.250005 kubelet[1414]: E1213 01:58:31.249994 1414 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:10.0.0.123\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '10.0.0.123' and this object" logger="UnhandledError" Dec 13 01:58:31.251268 kubelet[1414]: W1213 01:58:31.251251 1414 reflector.go:561] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:10.0.0.123" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 
'10.0.0.123' and this object Dec 13 01:58:31.251388 kubelet[1414]: E1213 01:58:31.251353 1414 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:10.0.0.123\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '10.0.0.123' and this object" logger="UnhandledError" Dec 13 01:58:31.253152 systemd[1]: Created slice kubepods-burstable-pod5c10f53f_4f61_489d_9b16_4ca4fc6a299c.slice. Dec 13 01:58:31.432148 kubelet[1414]: I1213 01:58:31.432098 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5c10f53f-4f61-489d-9b16-4ca4fc6a299c-cilium-ipsec-secrets\") pod \"cilium-s9xpl\" (UID: \"5c10f53f-4f61-489d-9b16-4ca4fc6a299c\") " pod="kube-system/cilium-s9xpl" Dec 13 01:58:31.432148 kubelet[1414]: I1213 01:58:31.432136 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5c10f53f-4f61-489d-9b16-4ca4fc6a299c-host-proc-sys-kernel\") pod \"cilium-s9xpl\" (UID: \"5c10f53f-4f61-489d-9b16-4ca4fc6a299c\") " pod="kube-system/cilium-s9xpl" Dec 13 01:58:31.432148 kubelet[1414]: I1213 01:58:31.432152 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5c10f53f-4f61-489d-9b16-4ca4fc6a299c-etc-cni-netd\") pod \"cilium-s9xpl\" (UID: \"5c10f53f-4f61-489d-9b16-4ca4fc6a299c\") " pod="kube-system/cilium-s9xpl" Dec 13 01:58:31.432148 kubelet[1414]: I1213 01:58:31.432165 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/5c10f53f-4f61-489d-9b16-4ca4fc6a299c-lib-modules\") pod \"cilium-s9xpl\" (UID: \"5c10f53f-4f61-489d-9b16-4ca4fc6a299c\") " pod="kube-system/cilium-s9xpl" Dec 13 01:58:31.432424 kubelet[1414]: I1213 01:58:31.432178 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5c10f53f-4f61-489d-9b16-4ca4fc6a299c-xtables-lock\") pod \"cilium-s9xpl\" (UID: \"5c10f53f-4f61-489d-9b16-4ca4fc6a299c\") " pod="kube-system/cilium-s9xpl" Dec 13 01:58:31.432424 kubelet[1414]: I1213 01:58:31.432191 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5c10f53f-4f61-489d-9b16-4ca4fc6a299c-cilium-config-path\") pod \"cilium-s9xpl\" (UID: \"5c10f53f-4f61-489d-9b16-4ca4fc6a299c\") " pod="kube-system/cilium-s9xpl" Dec 13 01:58:31.432424 kubelet[1414]: I1213 01:58:31.432206 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5c10f53f-4f61-489d-9b16-4ca4fc6a299c-host-proc-sys-net\") pod \"cilium-s9xpl\" (UID: \"5c10f53f-4f61-489d-9b16-4ca4fc6a299c\") " pod="kube-system/cilium-s9xpl" Dec 13 01:58:31.432424 kubelet[1414]: I1213 01:58:31.432224 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5c10f53f-4f61-489d-9b16-4ca4fc6a299c-clustermesh-secrets\") pod \"cilium-s9xpl\" (UID: \"5c10f53f-4f61-489d-9b16-4ca4fc6a299c\") " pod="kube-system/cilium-s9xpl" Dec 13 01:58:31.432424 kubelet[1414]: I1213 01:58:31.432243 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5c10f53f-4f61-489d-9b16-4ca4fc6a299c-hubble-tls\") pod \"cilium-s9xpl\" (UID: 
\"5c10f53f-4f61-489d-9b16-4ca4fc6a299c\") " pod="kube-system/cilium-s9xpl" Dec 13 01:58:31.432424 kubelet[1414]: I1213 01:58:31.432259 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5c10f53f-4f61-489d-9b16-4ca4fc6a299c-bpf-maps\") pod \"cilium-s9xpl\" (UID: \"5c10f53f-4f61-489d-9b16-4ca4fc6a299c\") " pod="kube-system/cilium-s9xpl" Dec 13 01:58:31.432568 kubelet[1414]: I1213 01:58:31.432271 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5c10f53f-4f61-489d-9b16-4ca4fc6a299c-cilium-cgroup\") pod \"cilium-s9xpl\" (UID: \"5c10f53f-4f61-489d-9b16-4ca4fc6a299c\") " pod="kube-system/cilium-s9xpl" Dec 13 01:58:31.432568 kubelet[1414]: I1213 01:58:31.432287 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5c10f53f-4f61-489d-9b16-4ca4fc6a299c-cni-path\") pod \"cilium-s9xpl\" (UID: \"5c10f53f-4f61-489d-9b16-4ca4fc6a299c\") " pod="kube-system/cilium-s9xpl" Dec 13 01:58:31.432568 kubelet[1414]: I1213 01:58:31.432300 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5c10f53f-4f61-489d-9b16-4ca4fc6a299c-cilium-run\") pod \"cilium-s9xpl\" (UID: \"5c10f53f-4f61-489d-9b16-4ca4fc6a299c\") " pod="kube-system/cilium-s9xpl" Dec 13 01:58:31.432568 kubelet[1414]: I1213 01:58:31.432313 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5c10f53f-4f61-489d-9b16-4ca4fc6a299c-hostproc\") pod \"cilium-s9xpl\" (UID: \"5c10f53f-4f61-489d-9b16-4ca4fc6a299c\") " pod="kube-system/cilium-s9xpl" Dec 13 01:58:31.432568 kubelet[1414]: I1213 01:58:31.432383 1414 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grddq\" (UniqueName: \"kubernetes.io/projected/5c10f53f-4f61-489d-9b16-4ca4fc6a299c-kube-api-access-grddq\") pod \"cilium-s9xpl\" (UID: \"5c10f53f-4f61-489d-9b16-4ca4fc6a299c\") " pod="kube-system/cilium-s9xpl" Dec 13 01:58:31.670230 kubelet[1414]: E1213 01:58:31.670173 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:58:32.534407 kubelet[1414]: E1213 01:58:32.534335 1414 secret.go:188] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition Dec 13 01:58:32.534573 kubelet[1414]: E1213 01:58:32.534449 1414 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5c10f53f-4f61-489d-9b16-4ca4fc6a299c-cilium-ipsec-secrets podName:5c10f53f-4f61-489d-9b16-4ca4fc6a299c nodeName:}" failed. No retries permitted until 2024-12-13 01:58:33.034427151 +0000 UTC m=+68.640624679 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/5c10f53f-4f61-489d-9b16-4ca4fc6a299c-cilium-ipsec-secrets") pod "cilium-s9xpl" (UID: "5c10f53f-4f61-489d-9b16-4ca4fc6a299c") : failed to sync secret cache: timed out waiting for the condition Dec 13 01:58:32.670737 kubelet[1414]: E1213 01:58:32.670683 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:58:33.060516 kubelet[1414]: E1213 01:58:33.060476 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:58:33.060958 env[1215]: time="2024-12-13T01:58:33.060911102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-s9xpl,Uid:5c10f53f-4f61-489d-9b16-4ca4fc6a299c,Namespace:kube-system,Attempt:0,}" Dec 13 01:58:33.075891 env[1215]: time="2024-12-13T01:58:33.075816494Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:58:33.075891 env[1215]: time="2024-12-13T01:58:33.075855017Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:58:33.075891 env[1215]: time="2024-12-13T01:58:33.075865146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:58:33.076188 env[1215]: time="2024-12-13T01:58:33.076090308Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7a68ba838d461ddc6ddeb65091261f88ba9b0688c59421d0e6cf1a683ade851c pid=3046 runtime=io.containerd.runc.v2 Dec 13 01:58:33.081772 kubelet[1414]: I1213 01:58:33.081667 1414 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29324129-6b15-4df3-9242-437f058f1ed8" path="/var/lib/kubelet/pods/29324129-6b15-4df3-9242-437f058f1ed8/volumes" Dec 13 01:58:33.090384 systemd[1]: Started cri-containerd-7a68ba838d461ddc6ddeb65091261f88ba9b0688c59421d0e6cf1a683ade851c.scope. Dec 13 01:58:33.107039 env[1215]: time="2024-12-13T01:58:33.106671158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-s9xpl,Uid:5c10f53f-4f61-489d-9b16-4ca4fc6a299c,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a68ba838d461ddc6ddeb65091261f88ba9b0688c59421d0e6cf1a683ade851c\"" Dec 13 01:58:33.107196 kubelet[1414]: E1213 01:58:33.107175 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:58:33.108993 env[1215]: time="2024-12-13T01:58:33.108941754Z" level=info msg="CreateContainer within sandbox \"7a68ba838d461ddc6ddeb65091261f88ba9b0688c59421d0e6cf1a683ade851c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 01:58:33.120380 env[1215]: time="2024-12-13T01:58:33.120328174Z" level=info msg="CreateContainer within sandbox \"7a68ba838d461ddc6ddeb65091261f88ba9b0688c59421d0e6cf1a683ade851c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"89b51c69d048596ce02dc55556545f878b8bd1f861a81dcf0de802c515c5edf2\"" Dec 13 01:58:33.120850 env[1215]: time="2024-12-13T01:58:33.120818164Z" level=info msg="StartContainer for 
\"89b51c69d048596ce02dc55556545f878b8bd1f861a81dcf0de802c515c5edf2\""
Dec 13 01:58:33.133478 systemd[1]: Started cri-containerd-89b51c69d048596ce02dc55556545f878b8bd1f861a81dcf0de802c515c5edf2.scope.
Dec 13 01:58:33.157276 env[1215]: time="2024-12-13T01:58:33.157208478Z" level=info msg="StartContainer for \"89b51c69d048596ce02dc55556545f878b8bd1f861a81dcf0de802c515c5edf2\" returns successfully"
Dec 13 01:58:33.161918 systemd[1]: cri-containerd-89b51c69d048596ce02dc55556545f878b8bd1f861a81dcf0de802c515c5edf2.scope: Deactivated successfully.
Dec 13 01:58:33.190076 env[1215]: time="2024-12-13T01:58:33.190008767Z" level=info msg="shim disconnected" id=89b51c69d048596ce02dc55556545f878b8bd1f861a81dcf0de802c515c5edf2
Dec 13 01:58:33.190076 env[1215]: time="2024-12-13T01:58:33.190055595Z" level=warning msg="cleaning up after shim disconnected" id=89b51c69d048596ce02dc55556545f878b8bd1f861a81dcf0de802c515c5edf2 namespace=k8s.io
Dec 13 01:58:33.190076 env[1215]: time="2024-12-13T01:58:33.190064802Z" level=info msg="cleaning up dead shim"
Dec 13 01:58:33.196113 env[1215]: time="2024-12-13T01:58:33.196089250Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:58:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3132 runtime=io.containerd.runc.v2\n"
Dec 13 01:58:33.220570 kubelet[1414]: E1213 01:58:33.220540 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:58:33.222411 env[1215]: time="2024-12-13T01:58:33.222342540Z" level=info msg="CreateContainer within sandbox \"7a68ba838d461ddc6ddeb65091261f88ba9b0688c59421d0e6cf1a683ade851c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 01:58:33.234492 env[1215]: time="2024-12-13T01:58:33.234421892Z" level=info msg="CreateContainer within sandbox \"7a68ba838d461ddc6ddeb65091261f88ba9b0688c59421d0e6cf1a683ade851c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"35af873100ca6c552c652fe1d7757438950ee6ce63f8e7826cdd91899097ce8c\""
Dec 13 01:58:33.234949 env[1215]: time="2024-12-13T01:58:33.234926370Z" level=info msg="StartContainer for \"35af873100ca6c552c652fe1d7757438950ee6ce63f8e7826cdd91899097ce8c\""
Dec 13 01:58:33.248432 systemd[1]: Started cri-containerd-35af873100ca6c552c652fe1d7757438950ee6ce63f8e7826cdd91899097ce8c.scope.
Dec 13 01:58:33.270106 env[1215]: time="2024-12-13T01:58:33.270055665Z" level=info msg="StartContainer for \"35af873100ca6c552c652fe1d7757438950ee6ce63f8e7826cdd91899097ce8c\" returns successfully"
Dec 13 01:58:33.275415 systemd[1]: cri-containerd-35af873100ca6c552c652fe1d7757438950ee6ce63f8e7826cdd91899097ce8c.scope: Deactivated successfully.
Dec 13 01:58:33.295011 env[1215]: time="2024-12-13T01:58:33.294929733Z" level=info msg="shim disconnected" id=35af873100ca6c552c652fe1d7757438950ee6ce63f8e7826cdd91899097ce8c
Dec 13 01:58:33.295011 env[1215]: time="2024-12-13T01:58:33.295010715Z" level=warning msg="cleaning up after shim disconnected" id=35af873100ca6c552c652fe1d7757438950ee6ce63f8e7826cdd91899097ce8c namespace=k8s.io
Dec 13 01:58:33.295212 env[1215]: time="2024-12-13T01:58:33.295024791Z" level=info msg="cleaning up dead shim"
Dec 13 01:58:33.301108 env[1215]: time="2024-12-13T01:58:33.301044220Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:58:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3194 runtime=io.containerd.runc.v2\n"
Dec 13 01:58:33.670993 kubelet[1414]: E1213 01:58:33.670930 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:58:34.079516 kubelet[1414]: E1213 01:58:34.079406 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:58:34.117225 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2771045392.mount: Deactivated successfully.
Dec 13 01:58:34.224255 kubelet[1414]: E1213 01:58:34.223900 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:58:34.225334 env[1215]: time="2024-12-13T01:58:34.225294458Z" level=info msg="CreateContainer within sandbox \"7a68ba838d461ddc6ddeb65091261f88ba9b0688c59421d0e6cf1a683ade851c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 01:58:34.242506 env[1215]: time="2024-12-13T01:58:34.242448982Z" level=info msg="CreateContainer within sandbox \"7a68ba838d461ddc6ddeb65091261f88ba9b0688c59421d0e6cf1a683ade851c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e1221d18b09520e771a5997dd9aabc06966ec1432c7d692f10ff76b637857a07\""
Dec 13 01:58:34.243024 env[1215]: time="2024-12-13T01:58:34.242961234Z" level=info msg="StartContainer for \"e1221d18b09520e771a5997dd9aabc06966ec1432c7d692f10ff76b637857a07\""
Dec 13 01:58:34.256961 systemd[1]: Started cri-containerd-e1221d18b09520e771a5997dd9aabc06966ec1432c7d692f10ff76b637857a07.scope.
Dec 13 01:58:34.282036 env[1215]: time="2024-12-13T01:58:34.281998493Z" level=info msg="StartContainer for \"e1221d18b09520e771a5997dd9aabc06966ec1432c7d692f10ff76b637857a07\" returns successfully"
Dec 13 01:58:34.282905 systemd[1]: cri-containerd-e1221d18b09520e771a5997dd9aabc06966ec1432c7d692f10ff76b637857a07.scope: Deactivated successfully.
Dec 13 01:58:34.327901 env[1215]: time="2024-12-13T01:58:34.327858027Z" level=info msg="shim disconnected" id=e1221d18b09520e771a5997dd9aabc06966ec1432c7d692f10ff76b637857a07
Dec 13 01:58:34.328149 env[1215]: time="2024-12-13T01:58:34.328107977Z" level=warning msg="cleaning up after shim disconnected" id=e1221d18b09520e771a5997dd9aabc06966ec1432c7d692f10ff76b637857a07 namespace=k8s.io
Dec 13 01:58:34.328149 env[1215]: time="2024-12-13T01:58:34.328129457Z" level=info msg="cleaning up dead shim"
Dec 13 01:58:34.333957 env[1215]: time="2024-12-13T01:58:34.333865672Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:58:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3251 runtime=io.containerd.runc.v2\n"
Dec 13 01:58:34.671353 kubelet[1414]: E1213 01:58:34.671298 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:58:34.801008 env[1215]: time="2024-12-13T01:58:34.800944268Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:58:34.803077 env[1215]: time="2024-12-13T01:58:34.803023363Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:58:34.805908 env[1215]: time="2024-12-13T01:58:34.805877134Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:58:34.806253 env[1215]: time="2024-12-13T01:58:34.806218956Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Dec 13 01:58:34.808144 env[1215]: time="2024-12-13T01:58:34.808114857Z" level=info msg="CreateContainer within sandbox \"6f70ad9a0cbb3f2ba6f56b21eb611c851fd8ac524ccf876c964e227a05fdb5c3\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Dec 13 01:58:34.821040 env[1215]: time="2024-12-13T01:58:34.821005020Z" level=info msg="CreateContainer within sandbox \"6f70ad9a0cbb3f2ba6f56b21eb611c851fd8ac524ccf876c964e227a05fdb5c3\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ecc94a8196f54ef52dfb80077941082ceed634d0a4c176937066844e0c72a1f9\""
Dec 13 01:58:34.821385 env[1215]: time="2024-12-13T01:58:34.821336573Z" level=info msg="StartContainer for \"ecc94a8196f54ef52dfb80077941082ceed634d0a4c176937066844e0c72a1f9\""
Dec 13 01:58:34.832908 systemd[1]: Started cri-containerd-ecc94a8196f54ef52dfb80077941082ceed634d0a4c176937066844e0c72a1f9.scope.
Dec 13 01:58:34.855922 env[1215]: time="2024-12-13T01:58:34.855872075Z" level=info msg="StartContainer for \"ecc94a8196f54ef52dfb80077941082ceed634d0a4c176937066844e0c72a1f9\" returns successfully"
Dec 13 01:58:35.041109 kubelet[1414]: E1213 01:58:35.040989 1414 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 01:58:35.226238 kubelet[1414]: E1213 01:58:35.226208 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:58:35.227790 kubelet[1414]: E1213 01:58:35.227764 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:58:35.229217 env[1215]: time="2024-12-13T01:58:35.229174553Z" level=info msg="CreateContainer within sandbox \"7a68ba838d461ddc6ddeb65091261f88ba9b0688c59421d0e6cf1a683ade851c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 01:58:35.233751 kubelet[1414]: I1213 01:58:35.233700 1414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-r25xm" podStartSLOduration=1.291604448 podStartE2EDuration="6.233688852s" podCreationTimestamp="2024-12-13 01:58:29 +0000 UTC" firstStartedPulling="2024-12-13 01:58:29.864986802 +0000 UTC m=+65.471184320" lastFinishedPulling="2024-12-13 01:58:34.807071196 +0000 UTC m=+70.413268724" observedRunningTime="2024-12-13 01:58:35.233503695 +0000 UTC m=+70.839701223" watchObservedRunningTime="2024-12-13 01:58:35.233688852 +0000 UTC m=+70.839886380"
Dec 13 01:58:35.242839 env[1215]: time="2024-12-13T01:58:35.242797039Z" level=info msg="CreateContainer within sandbox \"7a68ba838d461ddc6ddeb65091261f88ba9b0688c59421d0e6cf1a683ade851c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"fcc081e879c4347cfb3daef43c78a10008345986aa33c277f1d3e82940bbd352\""
Dec 13 01:58:35.243208 env[1215]: time="2024-12-13T01:58:35.243188254Z" level=info msg="StartContainer for \"fcc081e879c4347cfb3daef43c78a10008345986aa33c277f1d3e82940bbd352\""
Dec 13 01:58:35.257419 systemd[1]: Started cri-containerd-fcc081e879c4347cfb3daef43c78a10008345986aa33c277f1d3e82940bbd352.scope.
Dec 13 01:58:35.280147 systemd[1]: cri-containerd-fcc081e879c4347cfb3daef43c78a10008345986aa33c277f1d3e82940bbd352.scope: Deactivated successfully.
Dec 13 01:58:35.395618 env[1215]: time="2024-12-13T01:58:35.395515012Z" level=info msg="StartContainer for \"fcc081e879c4347cfb3daef43c78a10008345986aa33c277f1d3e82940bbd352\" returns successfully"
Dec 13 01:58:35.509648 env[1215]: time="2024-12-13T01:58:35.509601006Z" level=info msg="shim disconnected" id=fcc081e879c4347cfb3daef43c78a10008345986aa33c277f1d3e82940bbd352
Dec 13 01:58:35.509648 env[1215]: time="2024-12-13T01:58:35.509645319Z" level=warning msg="cleaning up after shim disconnected" id=fcc081e879c4347cfb3daef43c78a10008345986aa33c277f1d3e82940bbd352 namespace=k8s.io
Dec 13 01:58:35.509648 env[1215]: time="2024-12-13T01:58:35.509653214Z" level=info msg="cleaning up dead shim"
Dec 13 01:58:35.516380 env[1215]: time="2024-12-13T01:58:35.516332820Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:58:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3346 runtime=io.containerd.runc.v2\n"
Dec 13 01:58:35.672052 kubelet[1414]: E1213 01:58:35.671996 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:58:36.045836 systemd[1]: run-containerd-runc-k8s.io-fcc081e879c4347cfb3daef43c78a10008345986aa33c277f1d3e82940bbd352-runc.ydWNDM.mount: Deactivated successfully.
Dec 13 01:58:36.045918 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fcc081e879c4347cfb3daef43c78a10008345986aa33c277f1d3e82940bbd352-rootfs.mount: Deactivated successfully.
Dec 13 01:58:36.231520 kubelet[1414]: E1213 01:58:36.231490 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:58:36.231725 kubelet[1414]: E1213 01:58:36.231555 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:58:36.232990 env[1215]: time="2024-12-13T01:58:36.232939282Z" level=info msg="CreateContainer within sandbox \"7a68ba838d461ddc6ddeb65091261f88ba9b0688c59421d0e6cf1a683ade851c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 01:58:36.486966 env[1215]: time="2024-12-13T01:58:36.486908853Z" level=info msg="CreateContainer within sandbox \"7a68ba838d461ddc6ddeb65091261f88ba9b0688c59421d0e6cf1a683ade851c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9bbc8c1957cc4cad96956b7a91a66b1fb73e47329b597130c699df243ea8c818\""
Dec 13 01:58:36.487526 env[1215]: time="2024-12-13T01:58:36.487473483Z" level=info msg="StartContainer for \"9bbc8c1957cc4cad96956b7a91a66b1fb73e47329b597130c699df243ea8c818\""
Dec 13 01:58:36.501922 systemd[1]: Started cri-containerd-9bbc8c1957cc4cad96956b7a91a66b1fb73e47329b597130c699df243ea8c818.scope.
Dec 13 01:58:36.532266 env[1215]: time="2024-12-13T01:58:36.532216935Z" level=info msg="StartContainer for \"9bbc8c1957cc4cad96956b7a91a66b1fb73e47329b597130c699df243ea8c818\" returns successfully"
Dec 13 01:58:36.672651 kubelet[1414]: E1213 01:58:36.672567 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:58:36.706185 kubelet[1414]: I1213 01:58:36.705409 1414 setters.go:600] "Node became not ready" node="10.0.0.123" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T01:58:36Z","lastTransitionTime":"2024-12-13T01:58:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 13 01:58:36.802003 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Dec 13 01:58:37.045943 systemd[1]: run-containerd-runc-k8s.io-9bbc8c1957cc4cad96956b7a91a66b1fb73e47329b597130c699df243ea8c818-runc.fNZQ9n.mount: Deactivated successfully.
Dec 13 01:58:37.235844 kubelet[1414]: E1213 01:58:37.235798 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:58:37.249052 kubelet[1414]: I1213 01:58:37.249011 1414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-s9xpl" podStartSLOduration=6.248998501 podStartE2EDuration="6.248998501s" podCreationTimestamp="2024-12-13 01:58:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:58:37.248820385 +0000 UTC m=+72.855017924" watchObservedRunningTime="2024-12-13 01:58:37.248998501 +0000 UTC m=+72.855196029"
Dec 13 01:58:37.672869 kubelet[1414]: E1213 01:58:37.672808 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:58:38.673593 kubelet[1414]: E1213 01:58:38.673535 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:58:39.061617 kubelet[1414]: E1213 01:58:39.061505 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:58:39.318545 systemd-networkd[1035]: lxc_health: Link UP
Dec 13 01:58:39.329098 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 01:58:39.326985 systemd-networkd[1035]: lxc_health: Gained carrier
Dec 13 01:58:39.674283 kubelet[1414]: E1213 01:58:39.674209 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:58:39.723336 systemd[1]: run-containerd-runc-k8s.io-9bbc8c1957cc4cad96956b7a91a66b1fb73e47329b597130c699df243ea8c818-runc.taT2FL.mount: Deactivated successfully.
Dec 13 01:58:40.466176 systemd-networkd[1035]: lxc_health: Gained IPv6LL
Dec 13 01:58:40.674914 kubelet[1414]: E1213 01:58:40.674865 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:58:41.062602 kubelet[1414]: E1213 01:58:41.062552 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:58:41.243431 kubelet[1414]: E1213 01:58:41.243387 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:58:41.675733 kubelet[1414]: E1213 01:58:41.675601 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:58:41.834227 systemd[1]: run-containerd-runc-k8s.io-9bbc8c1957cc4cad96956b7a91a66b1fb73e47329b597130c699df243ea8c818-runc.pDjxcA.mount: Deactivated successfully.
Dec 13 01:58:42.244818 kubelet[1414]: E1213 01:58:42.244791 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:58:42.676438 kubelet[1414]: E1213 01:58:42.676385 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:58:43.677148 kubelet[1414]: E1213 01:58:43.677074 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:58:43.924002 systemd[1]: run-containerd-runc-k8s.io-9bbc8c1957cc4cad96956b7a91a66b1fb73e47329b597130c699df243ea8c818-runc.EgxACc.mount: Deactivated successfully.
Dec 13 01:58:44.627579 kubelet[1414]: E1213 01:58:44.627526 1414 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:58:44.677298 kubelet[1414]: E1213 01:58:44.677255 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:58:45.677454 kubelet[1414]: E1213 01:58:45.677395 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:58:46.005642 systemd[1]: run-containerd-runc-k8s.io-9bbc8c1957cc4cad96956b7a91a66b1fb73e47329b597130c699df243ea8c818-runc.GviHwQ.mount: Deactivated successfully.
Dec 13 01:58:46.678165 kubelet[1414]: E1213 01:58:46.678107 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:58:47.678455 kubelet[1414]: E1213 01:58:47.678392 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"