Dec 13 14:24:40.057946 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Dec 13 12:55:10 -00 2024 Dec 13 14:24:40.057967 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e Dec 13 14:24:40.057978 kernel: BIOS-provided physical RAM map: Dec 13 14:24:40.057984 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Dec 13 14:24:40.057989 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Dec 13 14:24:40.057995 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Dec 13 14:24:40.058001 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Dec 13 14:24:40.058007 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Dec 13 14:24:40.058013 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Dec 13 14:24:40.058020 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Dec 13 14:24:40.058026 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Dec 13 14:24:40.058032 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved Dec 13 14:24:40.058037 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Dec 13 14:24:40.058043 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Dec 13 14:24:40.058050 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Dec 13 14:24:40.058058 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Dec 13 14:24:40.058064 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Dec 13 
14:24:40.058070 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Dec 13 14:24:40.058076 kernel: NX (Execute Disable) protection: active Dec 13 14:24:40.058082 kernel: e820: update [mem 0x9b475018-0x9b47ec57] usable ==> usable Dec 13 14:24:40.058088 kernel: e820: update [mem 0x9b475018-0x9b47ec57] usable ==> usable Dec 13 14:24:40.058096 kernel: e820: update [mem 0x9b438018-0x9b474e57] usable ==> usable Dec 13 14:24:40.058102 kernel: e820: update [mem 0x9b438018-0x9b474e57] usable ==> usable Dec 13 14:24:40.058108 kernel: extended physical RAM map: Dec 13 14:24:40.058114 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Dec 13 14:24:40.058122 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Dec 13 14:24:40.058128 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Dec 13 14:24:40.058134 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Dec 13 14:24:40.058140 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Dec 13 14:24:40.058146 kernel: reserve setup_data: [mem 0x000000000080c000-0x000000000080ffff] usable Dec 13 14:24:40.058152 kernel: reserve setup_data: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Dec 13 14:24:40.058158 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b438017] usable Dec 13 14:24:40.058164 kernel: reserve setup_data: [mem 0x000000009b438018-0x000000009b474e57] usable Dec 13 14:24:40.058170 kernel: reserve setup_data: [mem 0x000000009b474e58-0x000000009b475017] usable Dec 13 14:24:40.058176 kernel: reserve setup_data: [mem 0x000000009b475018-0x000000009b47ec57] usable Dec 13 14:24:40.058182 kernel: reserve setup_data: [mem 0x000000009b47ec58-0x000000009c8eefff] usable Dec 13 14:24:40.058193 kernel: reserve setup_data: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved Dec 13 14:24:40.058199 kernel: reserve setup_data: [mem 
0x000000009cb6f000-0x000000009cb7efff] ACPI data Dec 13 14:24:40.058205 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Dec 13 14:24:40.058211 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Dec 13 14:24:40.058220 kernel: reserve setup_data: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Dec 13 14:24:40.058227 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Dec 13 14:24:40.058233 kernel: reserve setup_data: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Dec 13 14:24:40.058241 kernel: efi: EFI v2.70 by EDK II Dec 13 14:24:40.058247 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b673018 RNG=0x9cb73018 Dec 13 14:24:40.058254 kernel: random: crng init done Dec 13 14:24:40.058260 kernel: SMBIOS 2.8 present. Dec 13 14:24:40.058267 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Dec 13 14:24:40.058273 kernel: Hypervisor detected: KVM Dec 13 14:24:40.058280 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 13 14:24:40.058286 kernel: kvm-clock: cpu 0, msr 1119a001, primary cpu clock Dec 13 14:24:40.058293 kernel: kvm-clock: using sched offset of 5370219423 cycles Dec 13 14:24:40.058304 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 13 14:24:40.058311 kernel: tsc: Detected 2794.748 MHz processor Dec 13 14:24:40.058318 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 13 14:24:40.058324 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 13 14:24:40.058331 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Dec 13 14:24:40.058337 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 13 14:24:40.058344 kernel: Using GB pages for direct mapping Dec 13 14:24:40.058351 kernel: Secure boot disabled Dec 13 14:24:40.058358 kernel: ACPI: Early table checksum verification disabled Dec 13 14:24:40.058366 
kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Dec 13 14:24:40.058372 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Dec 13 14:24:40.058379 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 14:24:40.058386 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 14:24:40.058392 kernel: ACPI: FACS 0x000000009CBDD000 000040 Dec 13 14:24:40.058399 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 14:24:40.058407 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 14:24:40.058418 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 14:24:40.058427 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 14:24:40.058439 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Dec 13 14:24:40.058448 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Dec 13 14:24:40.058456 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7] Dec 13 14:24:40.058464 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Dec 13 14:24:40.058473 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Dec 13 14:24:40.058480 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Dec 13 14:24:40.058488 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Dec 13 14:24:40.058496 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Dec 13 14:24:40.058504 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Dec 13 14:24:40.058514 kernel: No NUMA configuration found Dec 13 14:24:40.058525 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Dec 13 14:24:40.058534 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Dec 13 
14:24:40.058543 kernel: Zone ranges: Dec 13 14:24:40.058551 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 14:24:40.058562 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Dec 13 14:24:40.058570 kernel: Normal empty Dec 13 14:24:40.058579 kernel: Movable zone start for each node Dec 13 14:24:40.058587 kernel: Early memory node ranges Dec 13 14:24:40.058597 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Dec 13 14:24:40.058604 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Dec 13 14:24:40.058610 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Dec 13 14:24:40.058617 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Dec 13 14:24:40.058623 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Dec 13 14:24:40.058630 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Dec 13 14:24:40.058636 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Dec 13 14:24:40.058643 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 14:24:40.058649 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Dec 13 14:24:40.058656 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Dec 13 14:24:40.058664 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 14:24:40.058670 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Dec 13 14:24:40.058677 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Dec 13 14:24:40.058684 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Dec 13 14:24:40.058690 kernel: ACPI: PM-Timer IO Port: 0x608 Dec 13 14:24:40.058697 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 13 14:24:40.058703 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 13 14:24:40.058710 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Dec 13 14:24:40.058717 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 13 14:24:40.058738 kernel: ACPI: 
INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 13 14:24:40.058744 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 13 14:24:40.058751 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 13 14:24:40.058760 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 14:24:40.058767 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Dec 13 14:24:40.058773 kernel: TSC deadline timer available Dec 13 14:24:40.058780 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Dec 13 14:24:40.058788 kernel: kvm-guest: KVM setup pv remote TLB flush Dec 13 14:24:40.058795 kernel: kvm-guest: setup PV sched yield Dec 13 14:24:40.058803 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Dec 13 14:24:40.058810 kernel: Booting paravirtualized kernel on KVM Dec 13 14:24:40.058822 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 14:24:40.058830 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 Dec 13 14:24:40.058837 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288 Dec 13 14:24:40.058844 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152 Dec 13 14:24:40.058851 kernel: pcpu-alloc: [0] 0 1 2 3 Dec 13 14:24:40.058858 kernel: kvm-guest: setup async PF for cpu 0 Dec 13 14:24:40.058865 kernel: kvm-guest: stealtime: cpu 0, msr 9b21c0c0 Dec 13 14:24:40.058872 kernel: kvm-guest: PV spinlocks enabled Dec 13 14:24:40.058878 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 13 14:24:40.058885 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 629759 Dec 13 14:24:40.058894 kernel: Policy zone: DMA32 Dec 13 14:24:40.058902 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e Dec 13 14:24:40.058909 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 14:24:40.058916 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 14:24:40.058932 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 14:24:40.058940 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 14:24:40.058948 kernel: Memory: 2397432K/2567000K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47472K init, 4112K bss, 169308K reserved, 0K cma-reserved) Dec 13 14:24:40.058955 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Dec 13 14:24:40.058962 kernel: ftrace: allocating 34549 entries in 135 pages Dec 13 14:24:40.058969 kernel: ftrace: allocated 135 pages with 4 groups Dec 13 14:24:40.058976 kernel: rcu: Hierarchical RCU implementation. Dec 13 14:24:40.058983 kernel: rcu: RCU event tracing is enabled. Dec 13 14:24:40.058990 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Dec 13 14:24:40.058999 kernel: Rude variant of Tasks RCU enabled. Dec 13 14:24:40.059006 kernel: Tracing variant of Tasks RCU enabled. Dec 13 14:24:40.059013 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Dec 13 14:24:40.059020 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Dec 13 14:24:40.059027 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Dec 13 14:24:40.059034 kernel: Console: colour dummy device 80x25 Dec 13 14:24:40.059041 kernel: printk: console [ttyS0] enabled Dec 13 14:24:40.059048 kernel: ACPI: Core revision 20210730 Dec 13 14:24:40.059055 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Dec 13 14:24:40.059063 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 14:24:40.059070 kernel: x2apic enabled Dec 13 14:24:40.059077 kernel: Switched APIC routing to physical x2apic. Dec 13 14:24:40.059084 kernel: kvm-guest: setup PV IPIs Dec 13 14:24:40.059091 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Dec 13 14:24:40.059098 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Dec 13 14:24:40.059105 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748) Dec 13 14:24:40.059112 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Dec 13 14:24:40.059119 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Dec 13 14:24:40.059127 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Dec 13 14:24:40.059134 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 14:24:40.059143 kernel: Spectre V2 : Mitigation: Retpolines Dec 13 14:24:40.059151 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 14:24:40.059157 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Dec 13 14:24:40.059164 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Dec 13 14:24:40.059171 kernel: RETBleed: Mitigation: untrained return thunk Dec 13 14:24:40.059180 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 13 14:24:40.059188 kernel: Speculative Store Bypass: Mitigation: 
Speculative Store Bypass disabled via prctl and seccomp Dec 13 14:24:40.059197 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 13 14:24:40.059204 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 13 14:24:40.059211 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 13 14:24:40.059217 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 13 14:24:40.059224 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Dec 13 14:24:40.059232 kernel: Freeing SMP alternatives memory: 32K Dec 13 14:24:40.059238 kernel: pid_max: default: 32768 minimum: 301 Dec 13 14:24:40.059245 kernel: LSM: Security Framework initializing Dec 13 14:24:40.059252 kernel: SELinux: Initializing. Dec 13 14:24:40.059261 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 14:24:40.059268 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 14:24:40.059276 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Dec 13 14:24:40.059282 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Dec 13 14:24:40.059289 kernel: ... version: 0 Dec 13 14:24:40.059296 kernel: ... bit width: 48 Dec 13 14:24:40.059303 kernel: ... generic registers: 6 Dec 13 14:24:40.059310 kernel: ... value mask: 0000ffffffffffff Dec 13 14:24:40.059317 kernel: ... max period: 00007fffffffffff Dec 13 14:24:40.059325 kernel: ... fixed-purpose events: 0 Dec 13 14:24:40.059332 kernel: ... event mask: 000000000000003f Dec 13 14:24:40.059339 kernel: signal: max sigframe size: 1776 Dec 13 14:24:40.059346 kernel: rcu: Hierarchical SRCU implementation. Dec 13 14:24:40.059353 kernel: smp: Bringing up secondary CPUs ... Dec 13 14:24:40.059360 kernel: x86: Booting SMP configuration: Dec 13 14:24:40.059366 kernel: .... 
node #0, CPUs: #1 Dec 13 14:24:40.059373 kernel: kvm-clock: cpu 1, msr 1119a041, secondary cpu clock Dec 13 14:24:40.059380 kernel: kvm-guest: setup async PF for cpu 1 Dec 13 14:24:40.059388 kernel: kvm-guest: stealtime: cpu 1, msr 9b29c0c0 Dec 13 14:24:40.059395 kernel: #2 Dec 13 14:24:40.059402 kernel: kvm-clock: cpu 2, msr 1119a081, secondary cpu clock Dec 13 14:24:40.059409 kernel: kvm-guest: setup async PF for cpu 2 Dec 13 14:24:40.059416 kernel: kvm-guest: stealtime: cpu 2, msr 9b31c0c0 Dec 13 14:24:40.059423 kernel: #3 Dec 13 14:24:40.059439 kernel: kvm-clock: cpu 3, msr 1119a0c1, secondary cpu clock Dec 13 14:24:40.059463 kernel: kvm-guest: setup async PF for cpu 3 Dec 13 14:24:40.059471 kernel: kvm-guest: stealtime: cpu 3, msr 9b39c0c0 Dec 13 14:24:40.059489 kernel: smp: Brought up 1 node, 4 CPUs Dec 13 14:24:40.059496 kernel: smpboot: Max logical packages: 1 Dec 13 14:24:40.059503 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Dec 13 14:24:40.059510 kernel: devtmpfs: initialized Dec 13 14:24:40.059520 kernel: x86/mm: Memory block size: 128MB Dec 13 14:24:40.059527 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Dec 13 14:24:40.059534 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Dec 13 14:24:40.059541 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Dec 13 14:24:40.059548 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Dec 13 14:24:40.059558 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Dec 13 14:24:40.059565 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 14:24:40.059572 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Dec 13 14:24:40.059579 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 14:24:40.059586 kernel: NET: Registered 
PF_NETLINK/PF_ROUTE protocol family Dec 13 14:24:40.059593 kernel: audit: initializing netlink subsys (disabled) Dec 13 14:24:40.059600 kernel: audit: type=2000 audit(1734099878.951:1): state=initialized audit_enabled=0 res=1 Dec 13 14:24:40.059607 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 14:24:40.059614 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 14:24:40.059622 kernel: cpuidle: using governor menu Dec 13 14:24:40.059629 kernel: ACPI: bus type PCI registered Dec 13 14:24:40.059636 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 14:24:40.059708 kernel: dca service started, version 1.12.1 Dec 13 14:24:40.059715 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Dec 13 14:24:40.059735 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820 Dec 13 14:24:40.059742 kernel: PCI: Using configuration type 1 for base access Dec 13 14:24:40.059749 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Dec 13 14:24:40.059756 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 14:24:40.059772 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 14:24:40.059779 kernel: ACPI: Added _OSI(Module Device) Dec 13 14:24:40.059786 kernel: ACPI: Added _OSI(Processor Device) Dec 13 14:24:40.059793 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 14:24:40.059800 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 14:24:40.059807 kernel: ACPI: Added _OSI(Linux-Dell-Video) Dec 13 14:24:40.059814 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Dec 13 14:24:40.059821 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Dec 13 14:24:40.059828 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 14:24:40.059837 kernel: ACPI: Interpreter enabled Dec 13 14:24:40.059844 kernel: ACPI: PM: (supports S0 S3 S5) Dec 13 14:24:40.059851 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 14:24:40.059858 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 14:24:40.059865 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Dec 13 14:24:40.059872 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 14:24:40.060048 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 14:24:40.060131 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Dec 13 14:24:40.060209 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Dec 13 14:24:40.060218 kernel: PCI host bridge to bus 0000:00 Dec 13 14:24:40.060337 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 13 14:24:40.060450 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 13 14:24:40.060559 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 13 14:24:40.060654 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Dec 13 
14:24:40.060760 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Dec 13 14:24:40.060845 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Dec 13 14:24:40.060914 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 14:24:40.061036 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Dec 13 14:24:40.061161 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Dec 13 14:24:40.061245 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Dec 13 14:24:40.061351 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Dec 13 14:24:40.061454 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Dec 13 14:24:40.061539 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Dec 13 14:24:40.061620 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 13 14:24:40.061719 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Dec 13 14:24:40.061834 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Dec 13 14:24:40.061936 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Dec 13 14:24:40.062018 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Dec 13 14:24:40.062116 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Dec 13 14:24:40.062194 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Dec 13 14:24:40.062283 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Dec 13 14:24:40.062375 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Dec 13 14:24:40.062503 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Dec 13 14:24:40.062628 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Dec 13 14:24:40.062761 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Dec 13 14:24:40.062845 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Dec 13 14:24:40.062934 kernel: pci 0000:00:04.0: reg 
0x30: [mem 0xfffc0000-0xffffffff pref] Dec 13 14:24:40.063044 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Dec 13 14:24:40.063129 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Dec 13 14:24:40.063220 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Dec 13 14:24:40.063299 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Dec 13 14:24:40.063377 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Dec 13 14:24:40.063515 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Dec 13 14:24:40.063623 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Dec 13 14:24:40.063638 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 13 14:24:40.063647 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 13 14:24:40.063657 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 13 14:24:40.063666 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 13 14:24:40.063675 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Dec 13 14:24:40.063687 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Dec 13 14:24:40.063697 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Dec 13 14:24:40.063704 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Dec 13 14:24:40.063711 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Dec 13 14:24:40.063718 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Dec 13 14:24:40.063738 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Dec 13 14:24:40.063745 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Dec 13 14:24:40.063752 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Dec 13 14:24:40.063759 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Dec 13 14:24:40.063768 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Dec 13 14:24:40.063775 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Dec 13 
14:24:40.063782 kernel: iommu: Default domain type: Translated Dec 13 14:24:40.063789 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 14:24:40.063875 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Dec 13 14:24:40.063963 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 13 14:24:40.064041 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Dec 13 14:24:40.064050 kernel: vgaarb: loaded Dec 13 14:24:40.064058 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 13 14:24:40.064068 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 13 14:24:40.064075 kernel: PTP clock support registered Dec 13 14:24:40.064082 kernel: Registered efivars operations Dec 13 14:24:40.064089 kernel: PCI: Using ACPI for IRQ routing Dec 13 14:24:40.064096 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 13 14:24:40.064103 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Dec 13 14:24:40.064110 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Dec 13 14:24:40.064117 kernel: e820: reserve RAM buffer [mem 0x9b438018-0x9bffffff] Dec 13 14:24:40.064124 kernel: e820: reserve RAM buffer [mem 0x9b475018-0x9bffffff] Dec 13 14:24:40.064132 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Dec 13 14:24:40.064139 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Dec 13 14:24:40.064146 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Dec 13 14:24:40.064153 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Dec 13 14:24:40.064160 kernel: clocksource: Switched to clocksource kvm-clock Dec 13 14:24:40.064167 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 14:24:40.064176 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 14:24:40.064190 kernel: pnp: PnP ACPI init Dec 13 14:24:40.064339 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Dec 13 14:24:40.064360 kernel: pnp: PnP ACPI: found 6 devices 
Dec 13 14:24:40.064370 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 14:24:40.064380 kernel: NET: Registered PF_INET protocol family Dec 13 14:24:40.064390 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 14:24:40.064399 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 13 14:24:40.064408 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 14:24:40.064417 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 14:24:40.064432 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Dec 13 14:24:40.064447 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 13 14:24:40.064456 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 14:24:40.064466 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 14:24:40.064475 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 14:24:40.064489 kernel: NET: Registered PF_XDP protocol family Dec 13 14:24:40.064606 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Dec 13 14:24:40.064713 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Dec 13 14:24:40.064824 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 13 14:24:40.064899 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 13 14:24:40.064975 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 13 14:24:40.065043 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Dec 13 14:24:40.065113 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Dec 13 14:24:40.065191 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Dec 13 14:24:40.065204 kernel: PCI: CLS 0 bytes, default 64 Dec 13 14:24:40.065214 
kernel: Initialise system trusted keyrings Dec 13 14:24:40.065223 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 13 14:24:40.065236 kernel: Key type asymmetric registered Dec 13 14:24:40.065245 kernel: Asymmetric key parser 'x509' registered Dec 13 14:24:40.065254 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 13 14:24:40.065276 kernel: io scheduler mq-deadline registered Dec 13 14:24:40.065287 kernel: io scheduler kyber registered Dec 13 14:24:40.065295 kernel: io scheduler bfq registered Dec 13 14:24:40.065303 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 14:24:40.065311 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Dec 13 14:24:40.065318 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Dec 13 14:24:40.065326 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Dec 13 14:24:40.065335 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 14:24:40.065343 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 14:24:40.065351 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 14:24:40.065358 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 14:24:40.065366 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 14:24:40.065373 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 14:24:40.065473 kernel: rtc_cmos 00:04: RTC can wake from S4 Dec 13 14:24:40.065564 kernel: rtc_cmos 00:04: registered as rtc0 Dec 13 14:24:40.066485 kernel: rtc_cmos 00:04: setting system clock to 2024-12-13T14:24:39 UTC (1734099879) Dec 13 14:24:40.066576 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Dec 13 14:24:40.066586 kernel: efifb: probing for efifb Dec 13 14:24:40.066594 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Dec 13 14:24:40.066602 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Dec 13 14:24:40.066609 kernel: efifb: 
scrolling: redraw Dec 13 14:24:40.066616 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Dec 13 14:24:40.066624 kernel: Console: switching to colour frame buffer device 160x50 Dec 13 14:24:40.066635 kernel: fb0: EFI VGA frame buffer device Dec 13 14:24:40.066642 kernel: pstore: Registered efi as persistent store backend Dec 13 14:24:40.066650 kernel: NET: Registered PF_INET6 protocol family Dec 13 14:24:40.066657 kernel: Segment Routing with IPv6 Dec 13 14:24:40.066666 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 14:24:40.066675 kernel: NET: Registered PF_PACKET protocol family Dec 13 14:24:40.066687 kernel: Key type dns_resolver registered Dec 13 14:24:40.066696 kernel: IPI shorthand broadcast: enabled Dec 13 14:24:40.066706 kernel: sched_clock: Marking stable (648090223, 166990327)->(846834727, -31754177) Dec 13 14:24:40.066715 kernel: registered taskstats version 1 Dec 13 14:24:40.066814 kernel: Loading compiled-in X.509 certificates Dec 13 14:24:40.066828 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: e1d88c9e01f5bb2adeb5b99325e46e5ca8dff115' Dec 13 14:24:40.066837 kernel: Key type .fscrypt registered Dec 13 14:24:40.066846 kernel: Key type fscrypt-provisioning registered Dec 13 14:24:40.066855 kernel: pstore: Using crash dump compression: deflate Dec 13 14:24:40.066870 kernel: ima: No TPM chip found, activating TPM-bypass! 
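Two of the timestamped entries above can be sanity-checked arithmetically: the efifb lines report a 1280x800 display at 32 bits per pixel with linelength=5120 and 4000k of framebuffer memory, and the rtc_cmos line maps 2024-12-13T14:24:39 UTC to epoch 1734099879. A quick Python consistency check (not part of the boot log; all figures taken from the entries above):

```python
from datetime import datetime, timezone

# efifb: "mode is 1280x800x32, linelength=5120, pages=1"
width, height, bpp = 1280, 800, 32
linelength = width * bpp // 8          # bytes per scanline
assert linelength == 5120

fb_bytes = linelength * height         # total framebuffer memory
assert fb_bytes // 1024 == 4000        # "using 4000k, total 4000k"

# "Console: switching to colour frame buffer device 160x50"
# (standard 8x16-pixel console glyphs on a 1280x800 framebuffer)
assert (width // 8, height // 16) == (160, 50)

# rtc_cmos: "setting system clock to 2024-12-13T14:24:39 UTC (1734099879)"
ts = datetime(2024, 12, 13, 14, 24, 39, tzinfo=timezone.utc).timestamp()
assert int(ts) == 1734099879
```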
Dec 13 14:24:40.066880 kernel: ima: Allocated hash algorithm: sha1 Dec 13 14:24:40.066890 kernel: ima: No architecture policies found Dec 13 14:24:40.066900 kernel: clk: Disabling unused clocks Dec 13 14:24:40.066909 kernel: Freeing unused kernel image (initmem) memory: 47472K Dec 13 14:24:40.066930 kernel: Write protecting the kernel read-only data: 28672k Dec 13 14:24:40.066939 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Dec 13 14:24:40.066949 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K Dec 13 14:24:40.066959 kernel: Run /init as init process Dec 13 14:24:40.066971 kernel: with arguments: Dec 13 14:24:40.066980 kernel: /init Dec 13 14:24:40.066990 kernel: with environment: Dec 13 14:24:40.066999 kernel: HOME=/ Dec 13 14:24:40.067008 kernel: TERM=linux Dec 13 14:24:40.067018 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 14:24:40.067031 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 14:24:40.067044 systemd[1]: Detected virtualization kvm. Dec 13 14:24:40.067056 systemd[1]: Detected architecture x86-64. Dec 13 14:24:40.067064 systemd[1]: Running in initrd. Dec 13 14:24:40.067072 systemd[1]: No hostname configured, using default hostname. Dec 13 14:24:40.067080 systemd[1]: Hostname set to . Dec 13 14:24:40.067088 systemd[1]: Initializing machine ID from VM UUID. Dec 13 14:24:40.067096 systemd[1]: Queued start job for default target initrd.target. Dec 13 14:24:40.067104 systemd[1]: Started systemd-ask-password-console.path. Dec 13 14:24:40.067112 systemd[1]: Reached target cryptsetup.target. Dec 13 14:24:40.067121 systemd[1]: Reached target paths.target. Dec 13 14:24:40.067129 systemd[1]: Reached target slices.target. 
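The networking hash-table entries earlier in the log each report a size as "(order: N, BYTES bytes)": the order is log2 of the number of contiguous 4 KiB pages backing the table. A small Python sketch verifying that relationship against the figures the kernel printed:

```python
PAGE = 4096  # x86 page size in bytes

# (entries, order, bytes) exactly as reported in the log above
tables = {
    "IP idents":           (65536, 7, 524288),
    "tcp_listen_portaddr": (2048,  3, 32768),
    "Table-perturb":       (65536, 6, 262144),
    "TCP established":     (32768, 6, 262144),
    "TCP bind":            (32768, 7, 524288),
    "UDP":                 (2048,  4, 65536),
}

for name, (entries, order, nbytes) in tables.items():
    # "order N" means the table occupies 2**N pages
    assert nbytes == (1 << order) * PAGE, name
```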
Dec 13 14:24:40.067137 systemd[1]: Reached target swap.target. Dec 13 14:24:40.067145 systemd[1]: Reached target timers.target. Dec 13 14:24:40.067153 systemd[1]: Listening on iscsid.socket. Dec 13 14:24:40.067163 systemd[1]: Listening on iscsiuio.socket. Dec 13 14:24:40.067172 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 14:24:40.067183 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 14:24:40.067195 systemd[1]: Listening on systemd-journald.socket. Dec 13 14:24:40.067205 systemd[1]: Listening on systemd-networkd.socket. Dec 13 14:24:40.067215 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 14:24:40.067227 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 14:24:40.067237 systemd[1]: Reached target sockets.target. Dec 13 14:24:40.067245 systemd[1]: Starting kmod-static-nodes.service... Dec 13 14:24:40.067253 systemd[1]: Finished network-cleanup.service. Dec 13 14:24:40.067261 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 14:24:40.067269 systemd[1]: Starting systemd-journald.service... Dec 13 14:24:40.067279 systemd[1]: Starting systemd-modules-load.service... Dec 13 14:24:40.067290 systemd[1]: Starting systemd-resolved.service... Dec 13 14:24:40.067300 systemd[1]: Starting systemd-vconsole-setup.service... Dec 13 14:24:40.067311 systemd[1]: Finished kmod-static-nodes.service. Dec 13 14:24:40.067319 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 14:24:40.067327 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 14:24:40.067335 systemd[1]: Finished systemd-vconsole-setup.service. Dec 13 14:24:40.067343 systemd[1]: Starting dracut-cmdline-ask.service... Dec 13 14:24:40.067355 systemd-journald[198]: Journal started Dec 13 14:24:40.067410 systemd-journald[198]: Runtime Journal (/run/log/journal/72e2d8711200433b96f3c7f142bcbbc4) is 6.0M, max 48.4M, 42.4M free. 
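The journald line just above reports the runtime journal as 6.0M used of a 48.4M cap with 42.4M free; the three figures are mutually consistent:

```python
# systemd-journald: "Runtime Journal ... is 6.0M, max 48.4M, 42.4M free"
max_mb, used_mb, free_mb = 48.4, 6.0, 42.4
assert abs((max_mb - used_mb) - free_mb) < 1e-9
```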
Dec 13 14:24:40.056589 systemd-modules-load[199]: Inserted module 'overlay' Dec 13 14:24:40.074217 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 14:24:40.074244 kernel: audit: type=1130 audit(1734099880.068:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:40.074261 systemd[1]: Started systemd-journald.service. Dec 13 14:24:40.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:40.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:40.078763 kernel: audit: type=1130 audit(1734099880.074:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:40.081630 systemd[1]: Finished dracut-cmdline-ask.service. Dec 13 14:24:40.091023 kernel: audit: type=1130 audit(1734099880.084:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:40.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:40.086795 systemd[1]: Starting dracut-cmdline.service... Dec 13 14:24:40.093332 systemd-resolved[200]: Positive Trust Anchors: Dec 13 14:24:40.093351 systemd-resolved[200]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 14:24:40.093391 systemd-resolved[200]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 14:24:40.096363 systemd-resolved[200]: Defaulting to hostname 'linux'. Dec 13 14:24:40.110387 kernel: audit: type=1130 audit(1734099880.103:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:40.110435 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 14:24:40.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:40.110502 dracut-cmdline[216]: dracut-dracut-053 Dec 13 14:24:40.110502 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e Dec 13 14:24:40.100138 systemd[1]: Started systemd-resolved.service. Dec 13 14:24:40.103987 systemd[1]: Reached target nss-lookup.target. 
Dec 13 14:24:40.119404 kernel: Bridge firewalling registered Dec 13 14:24:40.118636 systemd-modules-load[199]: Inserted module 'br_netfilter' Dec 13 14:24:40.140763 kernel: SCSI subsystem initialized Dec 13 14:24:40.152973 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 14:24:40.153026 kernel: device-mapper: uevent: version 1.0.3 Dec 13 14:24:40.153042 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Dec 13 14:24:40.157688 systemd-modules-load[199]: Inserted module 'dm_multipath' Dec 13 14:24:40.159011 systemd[1]: Finished systemd-modules-load.service. Dec 13 14:24:40.163888 kernel: audit: type=1130 audit(1734099880.159:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:40.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:40.161032 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:24:40.170703 systemd[1]: Finished systemd-sysctl.service. Dec 13 14:24:40.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:40.176766 kernel: audit: type=1130 audit(1734099880.171:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:40.189775 kernel: Loading iSCSI transport class v2.0-870. 
Dec 13 14:24:40.213765 kernel: iscsi: registered transport (tcp) Dec 13 14:24:40.238324 kernel: iscsi: registered transport (qla4xxx) Dec 13 14:24:40.238419 kernel: QLogic iSCSI HBA Driver Dec 13 14:24:40.271296 systemd[1]: Finished dracut-cmdline.service. Dec 13 14:24:40.276250 kernel: audit: type=1130 audit(1734099880.270:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:40.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:40.272665 systemd[1]: Starting dracut-pre-udev.service... Dec 13 14:24:40.322768 kernel: raid6: avx2x4 gen() 24547 MB/s Dec 13 14:24:40.339769 kernel: raid6: avx2x4 xor() 6298 MB/s Dec 13 14:24:40.356763 kernel: raid6: avx2x2 gen() 25917 MB/s Dec 13 14:24:40.373764 kernel: raid6: avx2x2 xor() 16633 MB/s Dec 13 14:24:40.390781 kernel: raid6: avx2x1 gen() 21287 MB/s Dec 13 14:24:40.407768 kernel: raid6: avx2x1 xor() 13310 MB/s Dec 13 14:24:40.424771 kernel: raid6: sse2x4 gen() 12360 MB/s Dec 13 14:24:40.441771 kernel: raid6: sse2x4 xor() 4994 MB/s Dec 13 14:24:40.458761 kernel: raid6: sse2x2 gen() 13565 MB/s Dec 13 14:24:40.475764 kernel: raid6: sse2x2 xor() 8548 MB/s Dec 13 14:24:40.492760 kernel: raid6: sse2x1 gen() 10678 MB/s Dec 13 14:24:40.510355 kernel: raid6: sse2x1 xor() 6759 MB/s Dec 13 14:24:40.510386 kernel: raid6: using algorithm avx2x2 gen() 25917 MB/s Dec 13 14:24:40.510397 kernel: raid6: .... xor() 16633 MB/s, rmw enabled Dec 13 14:24:40.511154 kernel: raid6: using avx2x2 recovery algorithm Dec 13 14:24:40.525780 kernel: xor: automatically using best checksumming function avx Dec 13 14:24:40.633850 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Dec 13 14:24:40.643300 systemd[1]: Finished dracut-pre-udev.service. 
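The raid6 lines above are the kernel benchmarking each gen()/xor() implementation and then selecting the variant with the fastest gen() throughput (avx2x2 here, at 25917 MB/s). The selection can be sketched over the measured numbers:

```python
# gen() and xor() throughputs in MB/s, as benchmarked in the log above
gen = {"avx2x4": 24547, "avx2x2": 25917, "avx2x1": 21287,
       "sse2x4": 12360, "sse2x2": 13565, "sse2x1": 10678}
xor = {"avx2x4": 6298,  "avx2x2": 16633, "avx2x1": 13310,
       "sse2x4": 4994,  "sse2x2": 8548,  "sse2x1": 6759}

# kernel: "raid6: using algorithm avx2x2 gen() 25917 MB/s"
best = max(gen, key=gen.get)
assert best == "avx2x2"
assert xor[best] == 16633   # "raid6: .... xor() 16633 MB/s, rmw enabled"
```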
Dec 13 14:24:40.649814 kernel: audit: type=1130 audit(1734099880.643:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:40.649855 kernel: audit: type=1334 audit(1734099880.647:10): prog-id=7 op=LOAD Dec 13 14:24:40.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:40.647000 audit: BPF prog-id=7 op=LOAD Dec 13 14:24:40.648000 audit: BPF prog-id=8 op=LOAD Dec 13 14:24:40.650322 systemd[1]: Starting systemd-udevd.service... Dec 13 14:24:40.666283 systemd-udevd[401]: Using default interface naming scheme 'v252'. Dec 13 14:24:40.671810 systemd[1]: Started systemd-udevd.service. Dec 13 14:24:40.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:40.674613 systemd[1]: Starting dracut-pre-trigger.service... Dec 13 14:24:40.687090 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation Dec 13 14:24:40.717350 systemd[1]: Finished dracut-pre-trigger.service. Dec 13 14:24:40.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:40.720048 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 14:24:40.760406 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 14:24:40.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:24:40.791750 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 14:24:40.795207 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Dec 13 14:24:40.802540 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 14:24:40.802563 kernel: GPT:9289727 != 19775487 Dec 13 14:24:40.802573 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 14:24:40.802584 kernel: GPT:9289727 != 19775487 Dec 13 14:24:40.802594 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 14:24:40.802604 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 14:24:40.810750 kernel: AVX2 version of gcm_enc/dec engaged. Dec 13 14:24:40.811744 kernel: libata version 3.00 loaded. Dec 13 14:24:40.815748 kernel: AES CTR mode by8 optimization enabled Dec 13 14:24:40.821165 kernel: ahci 0000:00:1f.2: version 3.0 Dec 13 14:24:40.839045 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 13 14:24:40.839065 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Dec 13 14:24:40.839179 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 13 14:24:40.839274 kernel: scsi host0: ahci Dec 13 14:24:40.839388 kernel: scsi host1: ahci Dec 13 14:24:40.839506 kernel: scsi host2: ahci Dec 13 14:24:40.839605 kernel: scsi host3: ahci Dec 13 14:24:40.839741 kernel: scsi host4: ahci Dec 13 14:24:40.839845 kernel: scsi host5: ahci Dec 13 14:24:40.839955 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Dec 13 14:24:40.839966 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Dec 13 14:24:40.839977 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Dec 13 14:24:40.839987 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Dec 13 14:24:40.839998 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Dec 13 14:24:40.840011 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 
port 0xc1040380 irq 34 Dec 13 14:24:40.851143 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Dec 13 14:24:40.854269 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (455) Dec 13 14:24:40.856633 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Dec 13 14:24:40.859198 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Dec 13 14:24:40.864628 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Dec 13 14:24:40.868660 systemd[1]: Starting disk-uuid.service... Dec 13 14:24:40.873260 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 14:24:40.878205 disk-uuid[528]: Primary Header is updated. Dec 13 14:24:40.878205 disk-uuid[528]: Secondary Entries is updated. Dec 13 14:24:40.878205 disk-uuid[528]: Secondary Header is updated. Dec 13 14:24:40.882096 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 14:24:40.884742 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 14:24:41.149913 kernel: ata1: SATA link down (SStatus 0 SControl 300) Dec 13 14:24:41.150018 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Dec 13 14:24:41.150039 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 13 14:24:41.150049 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 13 14:24:41.150058 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 13 14:24:41.151755 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 13 14:24:41.152805 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Dec 13 14:24:41.152907 kernel: ata3.00: applying bridge limits Dec 13 14:24:41.154225 kernel: ata3.00: configured for UDMA/100 Dec 13 14:24:41.154765 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Dec 13 14:24:41.193122 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Dec 13 14:24:41.210770 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 13 14:24:41.210795 kernel: sr 2:0:0:0: 
Attached scsi CD-ROM sr0 Dec 13 14:24:41.891674 disk-uuid[534]: The operation has completed successfully. Dec 13 14:24:41.893067 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 14:24:41.922382 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 14:24:41.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:41.922000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:41.922504 systemd[1]: Finished disk-uuid.service. Dec 13 14:24:41.932603 systemd[1]: Starting verity-setup.service... Dec 13 14:24:41.950905 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Dec 13 14:24:41.979374 systemd[1]: Found device dev-mapper-usr.device. Dec 13 14:24:41.983076 systemd[1]: Mounting sysusr-usr.mount... Dec 13 14:24:41.986028 systemd[1]: Finished verity-setup.service. Dec 13 14:24:41.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:42.074747 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 14:24:42.075045 systemd[1]: Mounted sysusr-usr.mount. Dec 13 14:24:42.077365 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 14:24:42.080643 systemd[1]: Starting ignition-setup.service... Dec 13 14:24:42.083077 systemd[1]: Starting parse-ip-for-networkd.service... 
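The virtio-blk and GPT warnings a few entries back are consistent with a roughly 10 GB virtual disk whose backup GPT header was written for a smaller image and never moved after the disk was enlarged: 19775488 512-byte sectors is 10.1 GB (decimal) / 9.43 GiB, the backup header belongs at the last LBA (19775487), but it was found at LBA 9289727. The arithmetic, checked in Python:

```python
# virtio_blk: "[vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)"
sectors, sector_size = 19775488, 512
size_bytes = sectors * sector_size
assert round(size_bytes / 1e9, 1) == 10.1     # decimal gigabytes
assert round(size_bytes / 2**30, 2) == 9.43   # binary gibibytes

# GPT: backup header should sit at the last LBA of the disk
last_lba = sectors - 1
found_at = 9289727
assert found_at != last_lba                   # "GPT:9289727 != 19775487"
```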
Dec 13 14:24:42.098566 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 14:24:42.098641 kernel: BTRFS info (device vda6): using free space tree Dec 13 14:24:42.098656 kernel: BTRFS info (device vda6): has skinny extents Dec 13 14:24:42.107465 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 14:24:42.159665 systemd[1]: Finished ignition-setup.service. Dec 13 14:24:42.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:42.160745 systemd[1]: Starting ignition-fetch-offline.service... Dec 13 14:24:42.167383 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 14:24:42.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:42.168000 audit: BPF prog-id=9 op=LOAD Dec 13 14:24:42.170101 systemd[1]: Starting systemd-networkd.service... Dec 13 14:24:42.196817 systemd-networkd[721]: lo: Link UP Dec 13 14:24:42.196825 systemd-networkd[721]: lo: Gained carrier Dec 13 14:24:42.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:42.197275 systemd-networkd[721]: Enumeration completed Dec 13 14:24:42.197482 systemd-networkd[721]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 14:24:42.197855 systemd[1]: Started systemd-networkd.service. Dec 13 14:24:42.199349 systemd[1]: Reached target network.target. Dec 13 14:24:42.199705 systemd-networkd[721]: eth0: Link UP Dec 13 14:24:42.199708 systemd-networkd[721]: eth0: Gained carrier Dec 13 14:24:42.202048 systemd[1]: Starting iscsiuio.service... 
Dec 13 14:24:42.234490 systemd[1]: Started iscsiuio.service. Dec 13 14:24:42.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:42.236602 systemd[1]: Starting iscsid.service... Dec 13 14:24:42.242001 iscsid[731]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 14:24:42.242001 iscsid[731]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Dec 13 14:24:42.242001 iscsid[731]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Dec 13 14:24:42.242001 iscsid[731]: If using hardware iscsi like qla4xxx this message can be ignored. Dec 13 14:24:42.242001 iscsid[731]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 14:24:42.242001 iscsid[731]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 14:24:42.285239 systemd[1]: Started iscsid.service. Dec 13 14:24:42.286000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:42.287004 systemd-networkd[721]: eth0: DHCPv4 address 10.0.0.88/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 14:24:42.288831 systemd[1]: Starting dracut-initqueue.service...
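The iscsid warnings above are benign on this VM (no iSCSI targets are involved), but if software iSCSI were in use, a minimal /etc/iscsi/initiatorname.iscsi matching the format the daemon asks for might look like the following (the IQN shown is a hypothetical example, not a value from this system):

```
InitiatorName=iqn.2001-04.com.example:node1
```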
Dec 13 14:24:42.291994 ignition[717]: Ignition 2.14.0 Dec 13 14:24:42.292003 ignition[717]: Stage: fetch-offline Dec 13 14:24:42.292073 ignition[717]: no configs at "/usr/lib/ignition/base.d" Dec 13 14:24:42.292095 ignition[717]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 14:24:42.292253 ignition[717]: parsed url from cmdline: "" Dec 13 14:24:42.292259 ignition[717]: no config URL provided Dec 13 14:24:42.292266 ignition[717]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 14:24:42.292276 ignition[717]: no config at "/usr/lib/ignition/user.ign" Dec 13 14:24:42.292302 ignition[717]: op(1): [started] loading QEMU firmware config module Dec 13 14:24:42.292312 ignition[717]: op(1): executing: "modprobe" "qemu_fw_cfg" Dec 13 14:24:42.301790 ignition[717]: op(1): [finished] loading QEMU firmware config module Dec 13 14:24:42.304144 ignition[717]: parsing config with SHA512: a82866d5f3019ce7705df9b6474bdca930f403a4fff5d8fbc9b7bdb63d79e87c6fe6ae466fc1e8b44f918c14fb107821faf3c397a428680388302135147c9c59 Dec 13 14:24:42.303663 systemd[1]: Finished dracut-initqueue.service. Dec 13 14:24:42.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:42.304827 systemd[1]: Reached target remote-fs-pre.target. Dec 13 14:24:42.306559 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 14:24:42.307536 systemd[1]: Reached target remote-fs.target. Dec 13 14:24:42.309434 systemd[1]: Starting dracut-pre-mount.service... 
Dec 13 14:24:42.314586 unknown[717]: fetched base config from "system" Dec 13 14:24:42.314602 unknown[717]: fetched user config from "qemu" Dec 13 14:24:42.315256 ignition[717]: fetch-offline: fetch-offline passed Dec 13 14:24:42.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:42.316582 systemd[1]: Finished ignition-fetch-offline.service. Dec 13 14:24:42.315338 ignition[717]: Ignition finished successfully Dec 13 14:24:42.317678 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 13 14:24:42.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:42.318510 systemd[1]: Starting ignition-kargs.service... Dec 13 14:24:42.323427 systemd[1]: Finished dracut-pre-mount.service. Dec 13 14:24:42.338149 ignition[745]: Ignition 2.14.0 Dec 13 14:24:42.338155 ignition[745]: Stage: kargs Dec 13 14:24:42.338259 ignition[745]: no configs at "/usr/lib/ignition/base.d" Dec 13 14:24:42.340391 systemd[1]: Finished ignition-kargs.service. Dec 13 14:24:42.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:42.338271 ignition[745]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 14:24:42.338943 ignition[745]: kargs: kargs passed Dec 13 14:24:42.343025 systemd[1]: Starting ignition-disks.service... 
Dec 13 14:24:42.338986 ignition[745]: Ignition finished successfully Dec 13 14:24:42.350832 ignition[754]: Ignition 2.14.0 Dec 13 14:24:42.350843 ignition[754]: Stage: disks Dec 13 14:24:42.350983 ignition[754]: no configs at "/usr/lib/ignition/base.d" Dec 13 14:24:42.350993 ignition[754]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 14:24:42.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:42.352603 systemd[1]: Finished ignition-disks.service. Dec 13 14:24:42.351797 ignition[754]: disks: disks passed Dec 13 14:24:42.354274 systemd[1]: Reached target initrd-root-device.target. Dec 13 14:24:42.351839 ignition[754]: Ignition finished successfully Dec 13 14:24:42.361304 systemd[1]: Reached target local-fs-pre.target. Dec 13 14:24:42.362171 systemd[1]: Reached target local-fs.target. Dec 13 14:24:42.362238 systemd[1]: Reached target sysinit.target. Dec 13 14:24:42.362577 systemd[1]: Reached target basic.target. Dec 13 14:24:42.363882 systemd[1]: Starting systemd-fsck-root.service... Dec 13 14:24:42.376484 systemd-fsck[762]: ROOT: clean, 621/553520 files, 56021/553472 blocks Dec 13 14:24:42.424593 systemd[1]: Finished systemd-fsck-root.service. Dec 13 14:24:42.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:42.429110 systemd[1]: Mounting sysroot.mount... Dec 13 14:24:42.442796 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 14:24:42.443818 systemd[1]: Mounted sysroot.mount. Dec 13 14:24:42.444224 systemd[1]: Reached target initrd-root-fs.target. Dec 13 14:24:42.447857 systemd[1]: Mounting sysroot-usr.mount... 
Dec 13 14:24:42.448382 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Dec 13 14:24:42.448466 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 14:24:42.448499 systemd[1]: Reached target ignition-diskful.target. Dec 13 14:24:42.451648 systemd[1]: Mounted sysroot-usr.mount. Dec 13 14:24:42.453824 systemd[1]: Starting initrd-setup-root.service... Dec 13 14:24:42.461160 initrd-setup-root[772]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 14:24:42.464478 initrd-setup-root[780]: cut: /sysroot/etc/group: No such file or directory Dec 13 14:24:42.467082 initrd-setup-root[788]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 14:24:42.469768 initrd-setup-root[796]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 14:24:42.500923 systemd[1]: Finished initrd-setup-root.service. Dec 13 14:24:42.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:42.502656 systemd[1]: Starting ignition-mount.service... Dec 13 14:24:42.504032 systemd[1]: Starting sysroot-boot.service... Dec 13 14:24:42.512058 bash[813]: umount: /sysroot/usr/share/oem: not mounted. Dec 13 14:24:42.522179 systemd[1]: Finished sysroot-boot.service. Dec 13 14:24:42.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:24:42.524138 ignition[815]: INFO : Ignition 2.14.0 Dec 13 14:24:42.524138 ignition[815]: INFO : Stage: mount Dec 13 14:24:42.526021 ignition[815]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 14:24:42.526021 ignition[815]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 14:24:42.526021 ignition[815]: INFO : mount: mount passed Dec 13 14:24:42.526021 ignition[815]: INFO : Ignition finished successfully Dec 13 14:24:42.531312 systemd[1]: Finished ignition-mount.service. Dec 13 14:24:42.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:42.997776 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 14:24:43.008789 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (823) Dec 13 14:24:43.008850 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 14:24:43.010646 kernel: BTRFS info (device vda6): using free space tree Dec 13 14:24:43.010749 kernel: BTRFS info (device vda6): has skinny extents Dec 13 14:24:43.016235 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 14:24:43.018028 systemd[1]: Starting ignition-files.service... 
Dec 13 14:24:43.038309 ignition[843]: INFO : Ignition 2.14.0 Dec 13 14:24:43.038309 ignition[843]: INFO : Stage: files Dec 13 14:24:43.040074 ignition[843]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 14:24:43.040074 ignition[843]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 14:24:43.043215 ignition[843]: DEBUG : files: compiled without relabeling support, skipping Dec 13 14:24:43.045329 ignition[843]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 14:24:43.045329 ignition[843]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 14:24:43.049866 ignition[843]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 14:24:43.051323 ignition[843]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 14:24:43.053284 unknown[843]: wrote ssh authorized keys file for user: core Dec 13 14:24:43.054424 ignition[843]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 14:24:43.056001 ignition[843]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Dec 13 14:24:43.057828 ignition[843]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 14:24:43.059660 ignition[843]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 14:24:43.061437 ignition[843]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 14:24:43.063176 ignition[843]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 14:24:43.065631 ignition[843]: INFO : files: createFilesystemsFiles: createFiles: 
op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 14:24:43.065631 ignition[843]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 14:24:43.071000 ignition[843]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Dec 13 14:24:43.423785 ignition[843]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Dec 13 14:24:43.612451 systemd-networkd[721]: eth0: Gained IPv6LL Dec 13 14:24:43.879432 ignition[843]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 14:24:43.879432 ignition[843]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" Dec 13 14:24:43.883390 ignition[843]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 14:24:43.885509 ignition[843]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 14:24:43.885509 ignition[843]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" Dec 13 14:24:43.888913 ignition[843]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service" Dec 13 14:24:43.888913 ignition[843]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 14:24:43.928332 ignition[843]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 14:24:43.930313 ignition[843]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" Dec 13 
14:24:43.932255 ignition[843]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:24:43.934677 ignition[843]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:24:43.934677 ignition[843]: INFO : files: files passed Dec 13 14:24:43.934677 ignition[843]: INFO : Ignition finished successfully Dec 13 14:24:43.939036 systemd[1]: Finished ignition-files.service. Dec 13 14:24:43.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:43.942235 systemd[1]: Starting initrd-setup-root-after-ignition.service... Dec 13 14:24:43.944537 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Dec 13 14:24:43.947586 systemd[1]: Starting ignition-quench.service... Dec 13 14:24:43.949458 initrd-setup-root-after-ignition[869]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Dec 13 14:24:43.949608 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 14:24:43.951210 systemd[1]: Finished ignition-quench.service. Dec 13 14:24:43.951468 initrd-setup-root-after-ignition[871]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 14:24:43.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:43.954000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:43.955669 systemd[1]: Finished initrd-setup-root-after-ignition.service. 
Dec 13 14:24:43.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:43.957971 systemd[1]: Reached target ignition-complete.target. Dec 13 14:24:43.961023 systemd[1]: Starting initrd-parse-etc.service... Dec 13 14:24:43.976576 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 14:24:43.976676 systemd[1]: Finished initrd-parse-etc.service. Dec 13 14:24:43.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:43.979000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:43.979571 systemd[1]: Reached target initrd-fs.target. Dec 13 14:24:43.981317 systemd[1]: Reached target initrd.target. Dec 13 14:24:43.983144 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 14:24:43.985605 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 14:24:43.995456 systemd[1]: Finished dracut-pre-pivot.service. Dec 13 14:24:43.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:43.998501 systemd[1]: Starting initrd-cleanup.service... Dec 13 14:24:44.007572 systemd[1]: Stopped target nss-lookup.target. Dec 13 14:24:44.009513 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 14:24:44.011556 systemd[1]: Stopped target timers.target. Dec 13 14:24:44.013298 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Dec 13 14:24:44.014413 systemd[1]: Stopped dracut-pre-pivot.service. Dec 13 14:24:44.016000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:44.016354 systemd[1]: Stopped target initrd.target. Dec 13 14:24:44.018123 systemd[1]: Stopped target basic.target. Dec 13 14:24:44.019824 systemd[1]: Stopped target ignition-complete.target. Dec 13 14:24:44.021816 systemd[1]: Stopped target ignition-diskful.target. Dec 13 14:24:44.023774 systemd[1]: Stopped target initrd-root-device.target. Dec 13 14:24:44.025783 systemd[1]: Stopped target remote-fs.target. Dec 13 14:24:44.027617 systemd[1]: Stopped target remote-fs-pre.target. Dec 13 14:24:44.029538 systemd[1]: Stopped target sysinit.target. Dec 13 14:24:44.031309 systemd[1]: Stopped target local-fs.target. Dec 13 14:24:44.033137 systemd[1]: Stopped target local-fs-pre.target. Dec 13 14:24:44.035016 systemd[1]: Stopped target swap.target. Dec 13 14:24:44.036645 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 14:24:44.037788 systemd[1]: Stopped dracut-pre-mount.service. Dec 13 14:24:44.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:44.039801 systemd[1]: Stopped target cryptsetup.target. Dec 13 14:24:44.041607 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 14:24:44.042744 systemd[1]: Stopped dracut-initqueue.service. Dec 13 14:24:44.044000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:44.044643 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Dec 13 14:24:44.045822 systemd[1]: Stopped ignition-fetch-offline.service. Dec 13 14:24:44.047000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:44.047890 systemd[1]: Stopped target paths.target. Dec 13 14:24:44.049547 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 14:24:44.054871 systemd[1]: Stopped systemd-ask-password-console.path. Dec 13 14:24:44.056825 systemd[1]: Stopped target slices.target. Dec 13 14:24:44.058437 systemd[1]: Stopped target sockets.target. Dec 13 14:24:44.060137 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 14:24:44.061605 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Dec 13 14:24:44.063000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:44.064026 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 14:24:44.064183 systemd[1]: Stopped ignition-files.service. Dec 13 14:24:44.066000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:44.067859 systemd[1]: Stopping ignition-mount.service... Dec 13 14:24:44.069600 systemd[1]: Stopping iscsid.service... Dec 13 14:24:44.071170 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 14:24:44.072367 iscsid[731]: iscsid shutting down. Dec 13 14:24:44.072493 systemd[1]: Stopped kmod-static-nodes.service. Dec 13 14:24:44.075000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 13 14:24:44.076406 systemd[1]: Stopping sysroot-boot.service... Dec 13 14:24:44.079000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:44.078211 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 14:24:44.082051 ignition[884]: INFO : Ignition 2.14.0 Dec 13 14:24:44.082051 ignition[884]: INFO : Stage: umount Dec 13 14:24:44.082051 ignition[884]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 14:24:44.082051 ignition[884]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 14:24:44.082051 ignition[884]: INFO : umount: umount passed Dec 13 14:24:44.082051 ignition[884]: INFO : Ignition finished successfully Dec 13 14:24:44.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:44.078432 systemd[1]: Stopped systemd-udev-trigger.service. Dec 13 14:24:44.079828 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 14:24:44.079979 systemd[1]: Stopped dracut-pre-trigger.service. Dec 13 14:24:44.093951 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 14:24:44.095760 systemd[1]: iscsid.service: Deactivated successfully. Dec 13 14:24:44.096823 systemd[1]: Stopped iscsid.service. Dec 13 14:24:44.098000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:44.099142 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 14:24:44.099230 systemd[1]: Stopped ignition-mount.service. 
Dec 13 14:24:44.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:44.102365 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 14:24:44.102454 systemd[1]: Closed iscsid.socket. Dec 13 14:24:44.104000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:44.102556 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 14:24:44.106000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:44.102592 systemd[1]: Stopped ignition-disks.service. Dec 13 14:24:44.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:44.104942 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 14:24:44.104977 systemd[1]: Stopped ignition-kargs.service. Dec 13 14:24:44.106506 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 14:24:44.106547 systemd[1]: Stopped ignition-setup.service. Dec 13 14:24:44.118237 kernel: kauditd_printk_skb: 43 callbacks suppressed Dec 13 14:24:44.118262 kernel: audit: type=1131 audit(1734099884.112:54): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:44.112000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:24:44.108537 systemd[1]: Stopping iscsiuio.service... Dec 13 14:24:44.146008 kernel: audit: type=1130 audit(1734099884.119:55): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:44.146092 kernel: audit: type=1131 audit(1734099884.119:56): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:44.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:44.119000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:44.112246 systemd[1]: iscsiuio.service: Deactivated successfully. Dec 13 14:24:44.112329 systemd[1]: Stopped iscsiuio.service. Dec 13 14:24:44.113798 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 14:24:44.113918 systemd[1]: Finished initrd-cleanup.service. Dec 13 14:24:44.119333 systemd[1]: Stopped target network.target. Dec 13 14:24:44.147011 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 14:24:44.147119 systemd[1]: Closed iscsiuio.socket. Dec 13 14:24:44.158000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:44.149039 systemd[1]: Stopping systemd-networkd.service... Dec 13 14:24:44.163893 kernel: audit: type=1131 audit(1734099884.158:57): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 13 14:24:44.163956 kernel: audit: type=1131 audit(1734099884.163:58): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:44.163000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:44.150675 systemd[1]: Stopping systemd-resolved.service... Dec 13 14:24:44.170938 kernel: audit: type=1334 audit(1734099884.168:59): prog-id=6 op=UNLOAD Dec 13 14:24:44.168000 audit: BPF prog-id=6 op=UNLOAD Dec 13 14:24:44.155791 systemd-networkd[721]: eth0: DHCPv6 lease lost Dec 13 14:24:44.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:44.156791 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 14:24:44.183932 kernel: audit: type=1131 audit(1734099884.173:60): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:44.183963 kernel: audit: type=1334 audit(1734099884.176:61): prog-id=9 op=UNLOAD Dec 13 14:24:44.183973 kernel: audit: type=1131 audit(1734099884.179:62): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:44.176000 audit: BPF prog-id=9 op=UNLOAD Dec 13 14:24:44.179000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:24:44.157002 systemd[1]: Stopped systemd-resolved.service. Dec 13 14:24:44.190219 kernel: audit: type=1131 audit(1734099884.184:63): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:44.184000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:44.160160 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 14:24:44.160342 systemd[1]: Stopped systemd-networkd.service. Dec 13 14:24:44.164917 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 14:24:44.164955 systemd[1]: Closed systemd-networkd.socket. Dec 13 14:24:44.168858 systemd[1]: Stopping network-cleanup.service... Dec 13 14:24:44.171880 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 14:24:44.171937 systemd[1]: Stopped parse-ip-for-networkd.service. Dec 13 14:24:44.173707 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 14:24:44.173762 systemd[1]: Stopped systemd-sysctl.service. Dec 13 14:24:44.183708 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 14:24:44.183812 systemd[1]: Stopped systemd-modules-load.service. Dec 13 14:24:44.184926 systemd[1]: Stopping systemd-udevd.service... Dec 13 14:24:44.192131 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 14:24:44.199562 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 14:24:44.201746 systemd[1]: Stopped sysroot-boot.service. Dec 13 14:24:44.206000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:24:44.207284 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 14:24:44.209000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:44.207409 systemd[1]: Stopped systemd-udevd.service. Dec 13 14:24:44.210130 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 14:24:44.210000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:44.210269 systemd[1]: Stopped network-cleanup.service. Dec 13 14:24:44.210589 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 14:24:44.210630 systemd[1]: Closed systemd-udevd-control.socket. Dec 13 14:24:44.216000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:44.213604 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 14:24:44.218000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:44.213638 systemd[1]: Closed systemd-udevd-kernel.socket. Dec 13 14:24:44.214849 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 14:24:44.223000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:44.214891 systemd[1]: Stopped dracut-pre-udev.service. 
Dec 13 14:24:44.225000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:44.216843 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 14:24:44.216880 systemd[1]: Stopped dracut-cmdline.service. Dec 13 14:24:44.218581 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 14:24:44.218620 systemd[1]: Stopped dracut-cmdline-ask.service. Dec 13 14:24:44.223151 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 14:24:44.231000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:44.223194 systemd[1]: Stopped initrd-setup-root.service. Dec 13 14:24:44.227808 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Dec 13 14:24:44.230322 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 14:24:44.230430 systemd[1]: Stopped systemd-vconsole-setup.service. Dec 13 14:24:44.241101 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 14:24:44.241212 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Dec 13 14:24:44.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:44.247000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:44.248016 systemd[1]: Reached target initrd-switch-root.target. Dec 13 14:24:44.250911 systemd[1]: Starting initrd-switch-root.service... Dec 13 14:24:44.266973 systemd[1]: Switching root. 
Dec 13 14:24:44.288239 systemd-journald[198]: Journal stopped Dec 13 14:24:47.653425 systemd-journald[198]: Received SIGTERM from PID 1 (systemd). Dec 13 14:24:47.653497 kernel: SELinux: Class mctp_socket not defined in policy. Dec 13 14:24:47.653510 kernel: SELinux: Class anon_inode not defined in policy. Dec 13 14:24:47.653525 kernel: SELinux: the above unknown classes and permissions will be allowed Dec 13 14:24:47.653535 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 14:24:47.653544 kernel: SELinux: policy capability open_perms=1 Dec 13 14:24:47.653554 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 14:24:47.653564 kernel: SELinux: policy capability always_check_network=0 Dec 13 14:24:47.653573 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 14:24:47.653582 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 14:24:47.653593 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 14:24:47.653606 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 14:24:47.653617 systemd[1]: Successfully loaded SELinux policy in 44.251ms. Dec 13 14:24:47.653636 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.439ms. Dec 13 14:24:47.653648 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 14:24:47.653659 systemd[1]: Detected virtualization kvm. Dec 13 14:24:47.653669 systemd[1]: Detected architecture x86-64. Dec 13 14:24:47.653680 systemd[1]: Detected first boot. Dec 13 14:24:47.653690 systemd[1]: Initializing machine ID from VM UUID. Dec 13 14:24:47.653701 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). 
Dec 13 14:24:47.653711 systemd[1]: Populated /etc with preset unit settings. Dec 13 14:24:47.653736 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:24:47.653764 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:24:47.653776 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:24:47.653788 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 14:24:47.653798 systemd[1]: Stopped initrd-switch-root.service. Dec 13 14:24:47.653810 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 14:24:47.653821 systemd[1]: Created slice system-addon\x2dconfig.slice. Dec 13 14:24:47.653831 systemd[1]: Created slice system-addon\x2drun.slice. Dec 13 14:24:47.653841 systemd[1]: Created slice system-getty.slice. Dec 13 14:24:47.653852 systemd[1]: Created slice system-modprobe.slice. Dec 13 14:24:47.653862 systemd[1]: Created slice system-serial\x2dgetty.slice. Dec 13 14:24:47.653872 systemd[1]: Created slice system-system\x2dcloudinit.slice. Dec 13 14:24:47.653882 systemd[1]: Created slice system-systemd\x2dfsck.slice. Dec 13 14:24:47.653894 systemd[1]: Created slice user.slice. Dec 13 14:24:47.653904 systemd[1]: Started systemd-ask-password-console.path. Dec 13 14:24:47.653915 systemd[1]: Started systemd-ask-password-wall.path. Dec 13 14:24:47.653927 systemd[1]: Set up automount boot.automount. Dec 13 14:24:47.653937 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Dec 13 14:24:47.653947 systemd[1]: Stopped target initrd-switch-root.target. Dec 13 14:24:47.653959 systemd[1]: Stopped target initrd-fs.target. 
Dec 13 14:24:47.653969 systemd[1]: Stopped target initrd-root-fs.target. Dec 13 14:24:47.653979 systemd[1]: Reached target integritysetup.target. Dec 13 14:24:47.653997 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 14:24:47.654007 systemd[1]: Reached target remote-fs.target. Dec 13 14:24:47.654018 systemd[1]: Reached target slices.target. Dec 13 14:24:47.654028 systemd[1]: Reached target swap.target. Dec 13 14:24:47.654039 systemd[1]: Reached target torcx.target. Dec 13 14:24:47.654050 systemd[1]: Reached target veritysetup.target. Dec 13 14:24:47.654060 systemd[1]: Listening on systemd-coredump.socket. Dec 13 14:24:47.654071 systemd[1]: Listening on systemd-initctl.socket. Dec 13 14:24:47.654081 systemd[1]: Listening on systemd-networkd.socket. Dec 13 14:24:47.654091 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 14:24:47.654101 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 14:24:47.654112 systemd[1]: Listening on systemd-userdbd.socket. Dec 13 14:24:47.654122 systemd[1]: Mounting dev-hugepages.mount... Dec 13 14:24:47.654132 systemd[1]: Mounting dev-mqueue.mount... Dec 13 14:24:47.654142 systemd[1]: Mounting media.mount... Dec 13 14:24:47.654153 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:24:47.654163 systemd[1]: Mounting sys-kernel-debug.mount... Dec 13 14:24:47.654174 systemd[1]: Mounting sys-kernel-tracing.mount... Dec 13 14:24:47.654184 systemd[1]: Mounting tmp.mount... Dec 13 14:24:47.654195 systemd[1]: Starting flatcar-tmpfiles.service... Dec 13 14:24:47.654205 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:24:47.654222 systemd[1]: Starting kmod-static-nodes.service... Dec 13 14:24:47.654232 systemd[1]: Starting modprobe@configfs.service... Dec 13 14:24:47.654242 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:24:47.654254 systemd[1]: Starting modprobe@drm.service... 
Dec 13 14:24:47.654264 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:24:47.654274 systemd[1]: Starting modprobe@fuse.service... Dec 13 14:24:47.654284 systemd[1]: Starting modprobe@loop.service... Dec 13 14:24:47.654294 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 14:24:47.654304 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 14:24:47.654314 systemd[1]: Stopped systemd-fsck-root.service. Dec 13 14:24:47.654324 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 14:24:47.654336 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 14:24:47.654346 kernel: loop: module loaded Dec 13 14:24:47.654356 systemd[1]: Stopped systemd-journald.service. Dec 13 14:24:47.654365 kernel: fuse: init (API version 7.34) Dec 13 14:24:47.654376 systemd[1]: Starting systemd-journald.service... Dec 13 14:24:47.654386 systemd[1]: Starting systemd-modules-load.service... Dec 13 14:24:47.654397 systemd[1]: Starting systemd-network-generator.service... Dec 13 14:24:47.654407 systemd[1]: Starting systemd-remount-fs.service... Dec 13 14:24:47.654417 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 14:24:47.654427 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 14:24:47.654445 systemd[1]: Stopped verity-setup.service. Dec 13 14:24:47.654460 systemd-journald[1004]: Journal started Dec 13 14:24:47.654497 systemd-journald[1004]: Runtime Journal (/run/log/journal/72e2d8711200433b96f3c7f142bcbbc4) is 6.0M, max 48.4M, 42.4M free. 
Dec 13 14:24:44.359000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 14:24:44.541000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:24:44.541000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:24:44.541000 audit: BPF prog-id=10 op=LOAD Dec 13 14:24:44.541000 audit: BPF prog-id=10 op=UNLOAD Dec 13 14:24:44.541000 audit: BPF prog-id=11 op=LOAD Dec 13 14:24:44.541000 audit: BPF prog-id=11 op=UNLOAD Dec 13 14:24:44.579000 audit[918]: AVC avc: denied { associate } for pid=918 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Dec 13 14:24:44.579000 audit[918]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001825d2 a1=c000186708 a2=c000190a00 a3=32 items=0 ppid=901 pid=918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:24:44.579000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 14:24:44.581000 audit[918]: AVC avc: denied { associate } for pid=918 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Dec 13 14:24:44.581000 audit[918]: SYSCALL 
arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001826a9 a2=1ed a3=0 items=2 ppid=901 pid=918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:24:44.581000 audit: CWD cwd="/" Dec 13 14:24:44.581000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:44.581000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:44.581000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 14:24:47.506000 audit: BPF prog-id=12 op=LOAD Dec 13 14:24:47.506000 audit: BPF prog-id=3 op=UNLOAD Dec 13 14:24:47.506000 audit: BPF prog-id=13 op=LOAD Dec 13 14:24:47.506000 audit: BPF prog-id=14 op=LOAD Dec 13 14:24:47.506000 audit: BPF prog-id=4 op=UNLOAD Dec 13 14:24:47.506000 audit: BPF prog-id=5 op=UNLOAD Dec 13 14:24:47.507000 audit: BPF prog-id=15 op=LOAD Dec 13 14:24:47.507000 audit: BPF prog-id=12 op=UNLOAD Dec 13 14:24:47.507000 audit: BPF prog-id=16 op=LOAD Dec 13 14:24:47.507000 audit: BPF prog-id=17 op=LOAD Dec 13 14:24:47.507000 audit: BPF prog-id=13 op=UNLOAD Dec 13 14:24:47.507000 audit: BPF prog-id=14 op=UNLOAD Dec 13 14:24:47.508000 audit: BPF prog-id=18 op=LOAD Dec 13 14:24:47.508000 audit: BPF prog-id=15 op=UNLOAD Dec 13 14:24:47.509000 audit: BPF prog-id=19 op=LOAD Dec 13 14:24:47.509000 audit: BPF prog-id=20 op=LOAD Dec 13 14:24:47.509000 
audit: BPF prog-id=16 op=UNLOAD Dec 13 14:24:47.509000 audit: BPF prog-id=17 op=UNLOAD Dec 13 14:24:47.509000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:47.513000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:47.513000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:47.520000 audit: BPF prog-id=18 op=UNLOAD Dec 13 14:24:47.626000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:47.630000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:47.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:47.633000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:24:47.634000 audit: BPF prog-id=21 op=LOAD Dec 13 14:24:47.634000 audit: BPF prog-id=22 op=LOAD Dec 13 14:24:47.634000 audit: BPF prog-id=23 op=LOAD Dec 13 14:24:47.634000 audit: BPF prog-id=19 op=UNLOAD Dec 13 14:24:47.634000 audit: BPF prog-id=20 op=UNLOAD Dec 13 14:24:47.650000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 14:24:47.650000 audit[1004]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffc6142f350 a2=4000 a3=7ffc6142f3ec items=0 ppid=1 pid=1004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:24:47.650000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 14:24:47.654000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:47.505437 systemd[1]: Queued start job for default target multi-user.target. Dec 13 14:24:44.578581 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2024-12-13T14:24:44Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:24:47.505451 systemd[1]: Unnecessary job was removed for dev-vda6.device. Dec 13 14:24:44.578912 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2024-12-13T14:24:44Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 14:24:47.510211 systemd[1]: systemd-journald.service: Deactivated successfully. 
Dec 13 14:24:44.578935 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2024-12-13T14:24:44Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 14:24:44.578973 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2024-12-13T14:24:44Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Dec 13 14:24:44.578985 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2024-12-13T14:24:44Z" level=debug msg="skipped missing lower profile" missing profile=oem Dec 13 14:24:44.579022 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2024-12-13T14:24:44Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Dec 13 14:24:44.579037 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2024-12-13T14:24:44Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Dec 13 14:24:44.579338 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2024-12-13T14:24:44Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Dec 13 14:24:44.579385 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2024-12-13T14:24:44Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 14:24:44.579403 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2024-12-13T14:24:44Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 14:24:44.579809 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2024-12-13T14:24:44Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Dec 13 14:24:44.579861 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2024-12-13T14:24:44Z" level=debug msg="new archive/reference added to cache" format=tgz 
name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Dec 13 14:24:47.657736 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:24:44.579883 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2024-12-13T14:24:44Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6 Dec 13 14:24:44.579903 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2024-12-13T14:24:44Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Dec 13 14:24:44.579922 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2024-12-13T14:24:44Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6 Dec 13 14:24:44.579938 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2024-12-13T14:24:44Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Dec 13 14:24:47.184513 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2024-12-13T14:24:47Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:24:47.184848 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2024-12-13T14:24:47Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:24:47.184985 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2024-12-13T14:24:47Z" level=debug msg="networkd units propagated" 
assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:24:47.185521 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2024-12-13T14:24:47Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:24:47.185588 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2024-12-13T14:24:47Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Dec 13 14:24:47.185684 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2024-12-13T14:24:47Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Dec 13 14:24:47.661937 systemd[1]: Started systemd-journald.service. Dec 13 14:24:47.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:47.662674 systemd[1]: Mounted dev-hugepages.mount. Dec 13 14:24:47.663575 systemd[1]: Mounted dev-mqueue.mount. Dec 13 14:24:47.664428 systemd[1]: Mounted media.mount. Dec 13 14:24:47.665222 systemd[1]: Mounted sys-kernel-debug.mount. Dec 13 14:24:47.666174 systemd[1]: Mounted sys-kernel-tracing.mount. Dec 13 14:24:47.667138 systemd[1]: Mounted tmp.mount. Dec 13 14:24:47.668152 systemd[1]: Finished flatcar-tmpfiles.service. 
Dec 13 14:24:47.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:47.669355 systemd[1]: Finished kmod-static-nodes.service. Dec 13 14:24:47.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:47.670527 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 14:24:47.670763 systemd[1]: Finished modprobe@configfs.service. Dec 13 14:24:47.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:47.670000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:47.671954 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:24:47.672126 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:24:47.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:47.672000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:47.673244 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 14:24:47.673442 systemd[1]: Finished modprobe@drm.service. 
Dec 13 14:24:47.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:47.673000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:47.674535 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:24:47.674741 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:24:47.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:47.674000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:47.675957 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 14:24:47.676126 systemd[1]: Finished modprobe@fuse.service. Dec 13 14:24:47.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:47.676000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:47.677251 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:24:47.677423 systemd[1]: Finished modprobe@loop.service. 
Dec 13 14:24:47.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:47.677000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:47.678574 systemd[1]: Finished systemd-modules-load.service. Dec 13 14:24:47.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:47.679912 systemd[1]: Finished systemd-network-generator.service. Dec 13 14:24:47.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:47.681205 systemd[1]: Finished systemd-remount-fs.service. Dec 13 14:24:47.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:47.682681 systemd[1]: Reached target network-pre.target. Dec 13 14:24:47.686142 systemd[1]: Mounting sys-fs-fuse-connections.mount... Dec 13 14:24:47.688696 systemd[1]: Mounting sys-kernel-config.mount... Dec 13 14:24:47.689683 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 14:24:47.692816 systemd[1]: Starting systemd-hwdb-update.service... Dec 13 14:24:47.695193 systemd[1]: Starting systemd-journal-flush.service... 
Dec 13 14:24:47.696353 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:24:47.698159 systemd[1]: Starting systemd-random-seed.service... Dec 13 14:24:47.699613 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:24:47.701993 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:24:47.704437 systemd-journald[1004]: Time spent on flushing to /var/log/journal/72e2d8711200433b96f3c7f142bcbbc4 is 17.894ms for 1149 entries. Dec 13 14:24:47.704437 systemd-journald[1004]: System Journal (/var/log/journal/72e2d8711200433b96f3c7f142bcbbc4) is 8.0M, max 195.6M, 187.6M free. Dec 13 14:24:47.743952 systemd-journald[1004]: Received client request to flush runtime journal. Dec 13 14:24:47.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:47.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:47.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:47.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:47.705166 systemd[1]: Starting systemd-sysusers.service... Dec 13 14:24:47.712082 systemd[1]: Mounted sys-fs-fuse-connections.mount. 
Dec 13 14:24:47.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:47.713605 systemd[1]: Mounted sys-kernel-config.mount. Dec 13 14:24:47.715137 systemd[1]: Finished systemd-random-seed.service. Dec 13 14:24:47.716641 systemd[1]: Reached target first-boot-complete.target. Dec 13 14:24:47.728697 systemd[1]: Finished systemd-sysctl.service. Dec 13 14:24:47.731847 systemd[1]: Finished systemd-sysusers.service. Dec 13 14:24:47.737627 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 14:24:47.740042 systemd[1]: Starting systemd-udev-settle.service... Dec 13 14:24:47.745223 systemd[1]: Finished systemd-journal-flush.service. Dec 13 14:24:47.748252 udevadm[1026]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 14:24:48.166669 systemd[1]: Finished systemd-hwdb-update.service. Dec 13 14:24:48.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:48.167000 audit: BPF prog-id=24 op=LOAD Dec 13 14:24:48.167000 audit: BPF prog-id=25 op=LOAD Dec 13 14:24:48.167000 audit: BPF prog-id=7 op=UNLOAD Dec 13 14:24:48.167000 audit: BPF prog-id=8 op=UNLOAD Dec 13 14:24:48.169455 systemd[1]: Starting systemd-udevd.service... Dec 13 14:24:48.185828 systemd-udevd[1027]: Using default interface naming scheme 'v252'. Dec 13 14:24:48.199260 systemd[1]: Started systemd-udevd.service. Dec 13 14:24:48.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:24:48.201000 audit: BPF prog-id=26 op=LOAD Dec 13 14:24:48.202843 systemd[1]: Starting systemd-networkd.service... Dec 13 14:24:48.206000 audit: BPF prog-id=27 op=LOAD Dec 13 14:24:48.206000 audit: BPF prog-id=28 op=LOAD Dec 13 14:24:48.206000 audit: BPF prog-id=29 op=LOAD Dec 13 14:24:48.208456 systemd[1]: Starting systemd-userdbd.service... Dec 13 14:24:48.233485 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Dec 13 14:24:48.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:48.253321 systemd[1]: Started systemd-userdbd.service. Dec 13 14:24:48.262123 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 14:24:48.273749 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Dec 13 14:24:48.278752 kernel: ACPI: button: Power Button [PWRF] Dec 13 14:24:48.288000 audit[1051]: AVC avc: denied { confidentiality } for pid=1051 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 14:24:48.288000 audit[1051]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=555f8599f1e0 a1=337fc a2=7f6e75f8bbc5 a3=5 items=110 ppid=1027 pid=1051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:24:48.288000 audit: CWD cwd="/" Dec 13 14:24:48.288000 audit: PATH item=0 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=1 name=(null) inode=15262 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=2 name=(null) inode=15262 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=3 name=(null) inode=15263 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=4 name=(null) inode=15262 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=5 name=(null) inode=15264 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=6 name=(null) inode=15262 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=7 name=(null) inode=15265 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=8 name=(null) inode=15265 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=9 name=(null) inode=15266 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=10 name=(null) inode=15265 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=11 name=(null) inode=15267 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=12 name=(null) inode=15265 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=13 name=(null) inode=15268 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=14 name=(null) inode=15265 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=15 name=(null) inode=15269 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=16 name=(null) inode=15265 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=17 name=(null) inode=15270 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=18 name=(null) inode=15262 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=19 name=(null) inode=15271 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=20 name=(null) inode=15271 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=21 name=(null) inode=15272 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=22 name=(null) inode=15271 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=23 name=(null) inode=15273 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=24 name=(null) inode=15271 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=25 name=(null) inode=15274 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=26 name=(null) inode=15271 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=27 name=(null) inode=15275 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=28 name=(null) inode=15271 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 
audit: PATH item=29 name=(null) inode=15276 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=30 name=(null) inode=15262 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=31 name=(null) inode=15277 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=32 name=(null) inode=15277 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=33 name=(null) inode=15278 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=34 name=(null) inode=15277 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=35 name=(null) inode=15279 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=36 name=(null) inode=15277 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=37 name=(null) inode=15280 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=38 name=(null) 
inode=15277 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=39 name=(null) inode=15281 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=40 name=(null) inode=15277 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=41 name=(null) inode=15282 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=42 name=(null) inode=15262 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=43 name=(null) inode=15283 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=44 name=(null) inode=15283 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=45 name=(null) inode=15284 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=46 name=(null) inode=15283 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=47 name=(null) inode=15285 dev=00:0b mode=0100440 ouid=0 
ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=48 name=(null) inode=15283 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=49 name=(null) inode=15286 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=50 name=(null) inode=15283 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=51 name=(null) inode=15287 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=52 name=(null) inode=15283 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=53 name=(null) inode=15288 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=54 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=55 name=(null) inode=15289 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=56 name=(null) inode=15289 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=57 name=(null) inode=15290 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=58 name=(null) inode=15289 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=59 name=(null) inode=15291 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=60 name=(null) inode=15289 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=61 name=(null) inode=15292 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=62 name=(null) inode=15292 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=63 name=(null) inode=15293 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=64 name=(null) inode=15292 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=65 name=(null) inode=15294 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=66 name=(null) inode=15292 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=67 name=(null) inode=15295 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=68 name=(null) inode=15292 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=69 name=(null) inode=15296 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=70 name=(null) inode=15292 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=71 name=(null) inode=15297 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=72 name=(null) inode=15289 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=73 name=(null) inode=15298 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=74 name=(null) inode=15298 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=75 name=(null) inode=15299 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=76 name=(null) inode=15298 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=77 name=(null) inode=15300 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=78 name=(null) inode=15298 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=79 name=(null) inode=15301 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=80 name=(null) inode=15298 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=81 name=(null) inode=15302 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=82 name=(null) inode=15298 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=83 name=(null) inode=15303 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 
14:24:48.288000 audit: PATH item=84 name=(null) inode=15289 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=85 name=(null) inode=15304 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=86 name=(null) inode=15304 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=87 name=(null) inode=15305 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=88 name=(null) inode=15304 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=89 name=(null) inode=15306 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=90 name=(null) inode=15304 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=91 name=(null) inode=15307 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=92 name=(null) inode=15304 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=93 
name=(null) inode=15308 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=94 name=(null) inode=15304 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=95 name=(null) inode=15309 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=96 name=(null) inode=15289 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=97 name=(null) inode=15310 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=98 name=(null) inode=15310 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=99 name=(null) inode=15311 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=100 name=(null) inode=15310 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=101 name=(null) inode=15312 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=102 name=(null) inode=15310 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=103 name=(null) inode=15313 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=104 name=(null) inode=15310 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=105 name=(null) inode=15314 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=106 name=(null) inode=15310 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=107 name=(null) inode=15315 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PATH item=109 name=(null) inode=15316 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:48.288000 audit: PROCTITLE proctitle="(udev-worker)" Dec 13 14:24:48.316818 systemd-networkd[1039]: lo: Link UP Dec 13 14:24:48.316834 systemd-networkd[1039]: lo: Gained carrier Dec 13 14:24:48.317365 systemd-networkd[1039]: Enumeration completed Dec 13 14:24:48.317481 systemd[1]: Started systemd-networkd.service. 
Dec 13 14:24:48.317492 systemd-networkd[1039]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 14:24:48.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:48.321396 systemd-networkd[1039]: eth0: Link UP Dec 13 14:24:48.321407 systemd-networkd[1039]: eth0: Gained carrier Dec 13 14:24:48.324767 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Dec 13 14:24:48.330751 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 14:24:48.334775 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Dec 13 14:24:48.340898 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 13 14:24:48.341025 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Dec 13 14:24:48.341165 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 13 14:24:48.337877 systemd-networkd[1039]: eth0: DHCPv4 address 10.0.0.88/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 14:24:48.388927 kernel: kvm: Nested Virtualization enabled Dec 13 14:24:48.389023 kernel: SVM: kvm: Nested Paging enabled Dec 13 14:24:48.389828 kernel: SVM: Virtual VMLOAD VMSAVE supported Dec 13 14:24:48.389943 kernel: SVM: Virtual GIF supported Dec 13 14:24:48.408765 kernel: EDAC MC: Ver: 3.0.0 Dec 13 14:24:48.433105 systemd[1]: Finished systemd-udev-settle.service. Dec 13 14:24:48.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:48.435128 systemd[1]: Starting lvm2-activation-early.service... Dec 13 14:24:48.443035 lvm[1063]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Dec 13 14:24:48.468787 systemd[1]: Finished lvm2-activation-early.service. Dec 13 14:24:48.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:48.470559 systemd[1]: Reached target cryptsetup.target. Dec 13 14:24:48.472805 systemd[1]: Starting lvm2-activation.service... Dec 13 14:24:48.476053 lvm[1064]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 14:24:48.500503 systemd[1]: Finished lvm2-activation.service. Dec 13 14:24:48.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:48.501706 systemd[1]: Reached target local-fs-pre.target. Dec 13 14:24:48.502770 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 14:24:48.502793 systemd[1]: Reached target local-fs.target. Dec 13 14:24:48.503830 systemd[1]: Reached target machines.target. Dec 13 14:24:48.505940 systemd[1]: Starting ldconfig.service... Dec 13 14:24:48.507099 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:24:48.507162 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:24:48.508167 systemd[1]: Starting systemd-boot-update.service... Dec 13 14:24:48.510679 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 14:24:48.513431 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 14:24:48.515994 systemd[1]: Starting systemd-sysext.service... 
Dec 13 14:24:48.517389 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1066 (bootctl) Dec 13 14:24:48.518777 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 14:24:48.526221 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 14:24:48.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:48.528884 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 14:24:48.532988 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 14:24:48.533159 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 14:24:48.716805 kernel: loop0: detected capacity change from 0 to 205544 Dec 13 14:24:48.723902 systemd-fsck[1074]: fsck.fat 4.2 (2021-01-31) Dec 13 14:24:48.723902 systemd-fsck[1074]: /dev/vda1: 790 files, 119311/258078 clusters Dec 13 14:24:48.725823 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 14:24:48.726853 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 14:24:48.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:48.730313 systemd[1]: Mounting boot.mount... Dec 13 14:24:48.739159 systemd[1]: Mounted boot.mount. Dec 13 14:24:48.742772 kernel: loop1: detected capacity change from 0 to 205544 Dec 13 14:24:48.748048 (sd-sysext)[1079]: Using extensions 'kubernetes'. Dec 13 14:24:48.748938 (sd-sysext)[1079]: Merged extensions into '/usr'. Dec 13 14:24:48.755484 systemd[1]: Finished systemd-boot-update.service. 
Dec 13 14:24:48.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:48.777239 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:24:48.779031 systemd[1]: Mounting usr-share-oem.mount... Dec 13 14:24:48.789792 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:24:48.791234 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:24:48.793497 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:24:48.795414 systemd[1]: Starting modprobe@loop.service... Dec 13 14:24:48.796399 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:24:48.796515 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:24:48.796613 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:24:48.799297 systemd[1]: Mounted usr-share-oem.mount. Dec 13 14:24:48.800633 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:24:48.800787 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:24:48.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:48.801000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:24:48.802307 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:24:48.802412 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:24:48.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:48.803000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:48.804227 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:24:48.804326 systemd[1]: Finished modprobe@loop.service. Dec 13 14:24:48.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:48.804000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:48.805793 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:24:48.805891 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:24:48.806846 systemd[1]: Finished systemd-sysext.service. Dec 13 14:24:48.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:48.808957 systemd[1]: Starting ensure-sysext.service... 
Dec 13 14:24:48.810841 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 14:24:48.815329 systemd[1]: Reloading. Dec 13 14:24:48.822648 systemd-tmpfiles[1086]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 14:24:48.824902 systemd-tmpfiles[1086]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 14:24:48.828257 systemd-tmpfiles[1086]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 14:24:48.847799 ldconfig[1065]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 14:24:48.876275 /usr/lib/systemd/system-generators/torcx-generator[1105]: time="2024-12-13T14:24:48Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:24:48.876685 /usr/lib/systemd/system-generators/torcx-generator[1105]: time="2024-12-13T14:24:48Z" level=info msg="torcx already run" Dec 13 14:24:48.954612 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:24:48.954635 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:24:48.975544 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:24:49.030542 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Dec 13 14:24:49.033000 audit: BPF prog-id=30 op=LOAD Dec 13 14:24:49.033000 audit: BPF prog-id=27 op=UNLOAD Dec 13 14:24:49.033000 audit: BPF prog-id=31 op=LOAD Dec 13 14:24:49.033000 audit: BPF prog-id=32 op=LOAD Dec 13 14:24:49.033000 audit: BPF prog-id=28 op=UNLOAD Dec 13 14:24:49.033000 audit: BPF prog-id=29 op=UNLOAD Dec 13 14:24:49.034000 audit: BPF prog-id=33 op=LOAD Dec 13 14:24:49.034000 audit: BPF prog-id=34 op=LOAD Dec 13 14:24:49.034000 audit: BPF prog-id=24 op=UNLOAD Dec 13 14:24:49.034000 audit: BPF prog-id=25 op=UNLOAD Dec 13 14:24:49.035000 audit: BPF prog-id=35 op=LOAD Dec 13 14:24:49.035000 audit: BPF prog-id=21 op=UNLOAD Dec 13 14:24:49.035000 audit: BPF prog-id=36 op=LOAD Dec 13 14:24:49.035000 audit: BPF prog-id=37 op=LOAD Dec 13 14:24:49.035000 audit: BPF prog-id=22 op=UNLOAD Dec 13 14:24:49.035000 audit: BPF prog-id=23 op=UNLOAD Dec 13 14:24:49.036000 audit: BPF prog-id=38 op=LOAD Dec 13 14:24:49.036000 audit: BPF prog-id=26 op=UNLOAD Dec 13 14:24:49.040456 systemd[1]: Finished ldconfig.service. Dec 13 14:24:49.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:49.041639 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 14:24:49.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:49.043754 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 14:24:49.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:49.047402 systemd[1]: Starting audit-rules.service... 
Dec 13 14:24:49.049285 systemd[1]: Starting clean-ca-certificates.service... Dec 13 14:24:49.051674 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 14:24:49.052000 audit: BPF prog-id=39 op=LOAD Dec 13 14:24:49.055000 audit: BPF prog-id=40 op=LOAD Dec 13 14:24:49.054432 systemd[1]: Starting systemd-resolved.service... Dec 13 14:24:49.057129 systemd[1]: Starting systemd-timesyncd.service... Dec 13 14:24:49.059464 systemd[1]: Starting systemd-update-utmp.service... Dec 13 14:24:49.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:49.061120 systemd[1]: Finished clean-ca-certificates.service. Dec 13 14:24:49.064684 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:24:49.069000 audit[1159]: SYSTEM_BOOT pid=1159 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 14:24:49.067348 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:24:49.067672 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:24:49.069174 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:24:49.071325 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:24:49.073443 systemd[1]: Starting modprobe@loop.service... Dec 13 14:24:49.076061 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Dec 13 14:24:49.076237 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:24:49.076370 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:24:49.076475 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:24:49.077919 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 14:24:49.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:49.079688 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:24:49.079881 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:24:49.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:49.080000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:49.081469 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:24:49.081643 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:24:49.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:24:49.082000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:49.083322 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:24:49.083505 systemd[1]: Finished modprobe@loop.service. Dec 13 14:24:49.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:49.084000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:49.089908 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:24:49.090231 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:24:49.091939 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:24:49.094196 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:24:49.096204 systemd[1]: Starting modprobe@loop.service... Dec 13 14:24:49.097154 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:24:49.097377 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:24:49.098713 systemd[1]: Starting systemd-update-done.service... Dec 13 14:24:49.099791 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Dec 13 14:24:49.100001 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:24:49.102110 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:24:49.102272 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:24:49.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:49.106000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:49.106000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 14:24:49.106000 audit[1171]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc99bc1650 a2=420 a3=0 items=0 ppid=1148 pid=1171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:24:49.106000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 14:24:49.107341 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:24:49.112464 augenrules[1171]: No rules Dec 13 14:24:49.107493 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:24:49.109359 systemd[1]: Finished audit-rules.service. Dec 13 14:24:49.111010 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:24:49.111146 systemd[1]: Finished modprobe@loop.service. Dec 13 14:24:49.113326 systemd[1]: Finished systemd-update-done.service. Dec 13 14:24:49.119289 systemd[1]: Finished systemd-update-utmp.service. 
Dec 13 14:24:49.121632 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:24:49.122005 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:24:49.123879 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:24:49.125985 systemd[1]: Starting modprobe@drm.service... Dec 13 14:24:49.128321 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:24:49.130832 systemd[1]: Starting modprobe@loop.service... Dec 13 14:24:49.132032 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:24:49.132185 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:24:49.133167 systemd-timesyncd[1156]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 13 14:24:49.133220 systemd-timesyncd[1156]: Initial clock synchronization to Fri 2024-12-13 14:24:49.171301 UTC. Dec 13 14:24:49.133512 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 14:24:49.135015 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:24:49.135161 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:24:49.136422 systemd[1]: Started systemd-timesyncd.service. Dec 13 14:24:49.137161 systemd-resolved[1152]: Positive Trust Anchors: Dec 13 14:24:49.137174 systemd-resolved[1152]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 14:24:49.137209 systemd-resolved[1152]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 14:24:49.138663 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:24:49.138855 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:24:49.140335 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 14:24:49.140447 systemd[1]: Finished modprobe@drm.service. Dec 13 14:24:49.142017 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:24:49.142126 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:24:49.143527 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:24:49.143631 systemd[1]: Finished modprobe@loop.service. Dec 13 14:24:49.144824 systemd-resolved[1152]: Defaulting to hostname 'linux'. Dec 13 14:24:49.145187 systemd[1]: Reached target time-set.target. Dec 13 14:24:49.146122 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:24:49.146154 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:24:49.146412 systemd[1]: Finished ensure-sysext.service. Dec 13 14:24:49.147566 systemd[1]: Started systemd-resolved.service. Dec 13 14:24:49.149140 systemd[1]: Reached target network.target. Dec 13 14:24:49.150088 systemd[1]: Reached target nss-lookup.target. Dec 13 14:24:49.150986 systemd[1]: Reached target sysinit.target. 
Dec 13 14:24:49.151890 systemd[1]: Started motdgen.path. Dec 13 14:24:49.152650 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 14:24:49.153939 systemd[1]: Started logrotate.timer. Dec 13 14:24:49.154817 systemd[1]: Started mdadm.timer. Dec 13 14:24:49.156887 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 14:24:49.157823 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 14:24:49.157848 systemd[1]: Reached target paths.target. Dec 13 14:24:49.158851 systemd[1]: Reached target timers.target. Dec 13 14:24:49.159987 systemd[1]: Listening on dbus.socket. Dec 13 14:24:49.161686 systemd[1]: Starting docker.socket... Dec 13 14:24:49.164404 systemd[1]: Listening on sshd.socket. Dec 13 14:24:49.165397 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:24:49.165823 systemd[1]: Listening on docker.socket. Dec 13 14:24:49.166783 systemd[1]: Reached target sockets.target. Dec 13 14:24:49.168909 systemd[1]: Reached target basic.target. Dec 13 14:24:49.169810 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 14:24:49.169840 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 14:24:49.170707 systemd[1]: Starting containerd.service... Dec 13 14:24:49.172406 systemd[1]: Starting dbus.service... Dec 13 14:24:49.174003 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 14:24:49.175841 systemd[1]: Starting extend-filesystems.service... Dec 13 14:24:49.177038 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). 
Dec 13 14:24:49.178389 jq[1190]: false Dec 13 14:24:49.177948 systemd[1]: Starting motdgen.service... Dec 13 14:24:49.179673 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 14:24:49.181876 systemd[1]: Starting sshd-keygen.service... Dec 13 14:24:49.185813 systemd[1]: Starting systemd-logind.service... Dec 13 14:24:49.187031 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:24:49.187112 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 14:24:49.187630 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 14:24:49.189266 systemd[1]: Starting update-engine.service... Dec 13 14:24:49.192570 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 14:24:49.197133 jq[1207]: true Dec 13 14:24:49.198519 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 14:24:49.198762 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Dec 13 14:24:49.199205 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 14:24:49.199384 systemd[1]: Finished ssh-key-proc-cmdline.service. Dec 13 14:24:49.210746 systemd[1]: Started dbus.service. 
Dec 13 14:24:49.211244 jq[1210]: true Dec 13 14:24:49.210506 dbus-daemon[1189]: [system] SELinux support is enabled Dec 13 14:24:49.211611 extend-filesystems[1191]: Found loop1 Dec 13 14:24:49.211611 extend-filesystems[1191]: Found sr0 Dec 13 14:24:49.211611 extend-filesystems[1191]: Found vda Dec 13 14:24:49.211611 extend-filesystems[1191]: Found vda1 Dec 13 14:24:49.211611 extend-filesystems[1191]: Found vda2 Dec 13 14:24:49.211611 extend-filesystems[1191]: Found vda3 Dec 13 14:24:49.211611 extend-filesystems[1191]: Found usr Dec 13 14:24:49.211611 extend-filesystems[1191]: Found vda4 Dec 13 14:24:49.211611 extend-filesystems[1191]: Found vda6 Dec 13 14:24:49.211611 extend-filesystems[1191]: Found vda7 Dec 13 14:24:49.211611 extend-filesystems[1191]: Found vda9 Dec 13 14:24:49.211611 extend-filesystems[1191]: Checking size of /dev/vda9 Dec 13 14:24:49.225512 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 14:24:49.225553 systemd[1]: Reached target system-config.target. Dec 13 14:24:49.226947 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 14:24:49.226979 systemd[1]: Reached target user-config.target. Dec 13 14:24:49.228763 extend-filesystems[1191]: Resized partition /dev/vda9 Dec 13 14:24:49.230069 extend-filesystems[1235]: resize2fs 1.46.5 (30-Dec-2021) Dec 13 14:24:49.231475 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 14:24:49.231864 systemd[1]: Finished motdgen.service. 
Dec 13 14:24:49.232579 update_engine[1203]: I1213 14:24:49.232337 1203 main.cc:92] Flatcar Update Engine starting Dec 13 14:24:49.253488 env[1211]: time="2024-12-13T14:24:49.253427137Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 14:24:49.269080 env[1211]: time="2024-12-13T14:24:49.269034797Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 14:24:49.269355 env[1211]: time="2024-12-13T14:24:49.269332886Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:24:49.270856 env[1211]: time="2024-12-13T14:24:49.270828651Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:24:49.270955 env[1211]: time="2024-12-13T14:24:49.270935001Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:24:49.271237 env[1211]: time="2024-12-13T14:24:49.271213613Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:24:49.271316 env[1211]: time="2024-12-13T14:24:49.271296960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Dec 13 14:24:49.271403 env[1211]: time="2024-12-13T14:24:49.271381488Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 14:24:49.271491 env[1211]: time="2024-12-13T14:24:49.271470675Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 14:24:49.271635 env[1211]: time="2024-12-13T14:24:49.271615878Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:24:49.271979 env[1211]: time="2024-12-13T14:24:49.271943552Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:24:49.272212 env[1211]: time="2024-12-13T14:24:49.272188321Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:24:49.272296 env[1211]: time="2024-12-13T14:24:49.272274513Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 14:24:49.272428 env[1211]: time="2024-12-13T14:24:49.272401932Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 14:24:49.272514 env[1211]: time="2024-12-13T14:24:49.272494736Z" level=info msg="metadata content store policy set" policy=shared Dec 13 14:24:49.284116 systemd[1]: Started update-engine.service. Dec 13 14:24:49.294467 update_engine[1203]: I1213 14:24:49.284233 1203 update_check_scheduler.cc:74] Next update check in 11m56s Dec 13 14:24:49.287079 systemd[1]: Started locksmithd.service. 
Dec 13 14:24:49.295276 systemd-logind[1201]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 14:24:49.295312 systemd-logind[1201]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 14:24:49.296426 systemd-logind[1201]: New seat seat0. Dec 13 14:24:49.299745 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Dec 13 14:24:49.300285 systemd[1]: Started systemd-logind.service. Dec 13 14:24:49.569280 sshd_keygen[1204]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 14:24:49.586325 locksmithd[1242]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 14:24:49.589409 systemd[1]: Finished sshd-keygen.service. Dec 13 14:24:49.591815 systemd[1]: Starting issuegen.service... Dec 13 14:24:49.597373 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 14:24:49.597498 systemd[1]: Finished issuegen.service. Dec 13 14:24:49.599512 systemd[1]: Starting systemd-user-sessions.service... Dec 13 14:24:49.688765 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Dec 13 14:24:49.691568 systemd[1]: Finished systemd-user-sessions.service. Dec 13 14:24:49.693988 systemd[1]: Started getty@tty1.service. Dec 13 14:24:49.695839 systemd[1]: Started serial-getty@ttyS0.service. Dec 13 14:24:49.696903 systemd[1]: Reached target getty.target. Dec 13 14:24:49.877521 extend-filesystems[1235]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 14:24:49.877521 extend-filesystems[1235]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 14:24:49.877521 extend-filesystems[1235]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Dec 13 14:24:49.882671 extend-filesystems[1191]: Resized filesystem in /dev/vda9 Dec 13 14:24:49.878833 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 14:24:49.879064 systemd[1]: Finished extend-filesystems.service. 
Dec 13 14:24:49.884323 env[1211]: time="2024-12-13T14:24:49.884282717Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 14:24:49.884356 env[1211]: time="2024-12-13T14:24:49.884329505Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 14:24:49.884356 env[1211]: time="2024-12-13T14:24:49.884341798Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 14:24:49.884394 env[1211]: time="2024-12-13T14:24:49.884376783Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 14:24:49.884394 env[1211]: time="2024-12-13T14:24:49.884389898Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 14:24:49.884436 env[1211]: time="2024-12-13T14:24:49.884401720Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 14:24:49.884436 env[1211]: time="2024-12-13T14:24:49.884413462Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 14:24:49.884436 env[1211]: time="2024-12-13T14:24:49.884425605Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 14:24:49.884494 env[1211]: time="2024-12-13T14:24:49.884440352Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Dec 13 14:24:49.884494 env[1211]: time="2024-12-13T14:24:49.884455270Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 14:24:49.884494 env[1211]: time="2024-12-13T14:24:49.884466632Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Dec 13 14:24:49.884494 env[1211]: time="2024-12-13T14:24:49.884477733Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 14:24:49.884612 env[1211]: time="2024-12-13T14:24:49.884599371Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 14:24:49.884715 env[1211]: time="2024-12-13T14:24:49.884683659Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 14:24:49.885048 env[1211]: time="2024-12-13T14:24:49.885019969Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 14:24:49.885077 env[1211]: time="2024-12-13T14:24:49.885062098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 14:24:49.885098 env[1211]: time="2024-12-13T14:24:49.885080663Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 14:24:49.885166 env[1211]: time="2024-12-13T14:24:49.885144513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 14:24:49.885190 env[1211]: time="2024-12-13T14:24:49.885169199Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 14:24:49.885210 env[1211]: time="2024-12-13T14:24:49.885185620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 14:24:49.885210 env[1211]: time="2024-12-13T14:24:49.885202351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 14:24:49.885251 env[1211]: time="2024-12-13T14:24:49.885218963Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Dec 13 14:24:49.885272 env[1211]: time="2024-12-13T14:24:49.885255321Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 14:24:49.885292 env[1211]: time="2024-12-13T14:24:49.885273365Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 14:24:49.885311 env[1211]: time="2024-12-13T14:24:49.885287762Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 14:24:49.885337 env[1211]: time="2024-12-13T14:24:49.885306206Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 14:24:49.885491 env[1211]: time="2024-12-13T14:24:49.885464273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 14:24:49.885531 env[1211]: time="2024-12-13T14:24:49.885492926Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 14:24:49.885531 env[1211]: time="2024-12-13T14:24:49.885508105Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 14:24:49.885531 env[1211]: time="2024-12-13T14:24:49.885521420Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 14:24:49.885598 env[1211]: time="2024-12-13T14:24:49.885541017Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 14:24:49.885598 env[1211]: time="2024-12-13T14:24:49.885554742Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Dec 13 14:24:49.885598 env[1211]: time="2024-12-13T14:24:49.885579148Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 14:24:49.885660 env[1211]: time="2024-12-13T14:24:49.885627068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 14:24:49.886502 env[1211]: time="2024-12-13T14:24:49.885885803Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false 
MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 14:24:49.886502 env[1211]: time="2024-12-13T14:24:49.886161721Z" level=info msg="Connect containerd service" Dec 13 14:24:49.886502 env[1211]: time="2024-12-13T14:24:49.886251409Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 14:24:49.887577 env[1211]: time="2024-12-13T14:24:49.887505140Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 14:24:49.887757 env[1211]: time="2024-12-13T14:24:49.887694084Z" level=info msg="Start subscribing containerd event" Dec 13 14:24:49.887793 env[1211]: time="2024-12-13T14:24:49.887776218Z" level=info msg="Start recovering state" Dec 13 14:24:49.887863 env[1211]: time="2024-12-13T14:24:49.887848744Z" level=info msg="Start event monitor" Dec 13 14:24:49.887891 env[1211]: time="2024-12-13T14:24:49.887870756Z" level=info msg="Start snapshots syncer" Dec 13 14:24:49.887891 env[1211]: time="2024-12-13T14:24:49.887881927Z" level=info msg="Start cni network conf syncer for default" Dec 13 14:24:49.887891 env[1211]: time="2024-12-13T14:24:49.887890964Z" level=info msg="Start streaming server" Dec 13 14:24:49.887974 env[1211]: time="2024-12-13T14:24:49.887906122Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Dec 13 14:24:49.887974 env[1211]: time="2024-12-13T14:24:49.887955164Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 14:24:49.888150 systemd[1]: Started containerd.service. Dec 13 14:24:49.888355 env[1211]: time="2024-12-13T14:24:49.888278270Z" level=info msg="containerd successfully booted in 0.635425s" Dec 13 14:24:49.889516 bash[1236]: Updated "/home/core/.ssh/authorized_keys" Dec 13 14:24:49.890408 systemd[1]: Finished update-ssh-keys-after-ignition.service. Dec 13 14:24:50.268065 systemd-networkd[1039]: eth0: Gained IPv6LL Dec 13 14:24:50.269924 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 14:24:50.271442 systemd[1]: Reached target network-online.target. Dec 13 14:24:50.273910 systemd[1]: Starting kubelet.service... Dec 13 14:24:50.903360 systemd[1]: Started kubelet.service. Dec 13 14:24:50.904995 systemd[1]: Reached target multi-user.target. Dec 13 14:24:50.907089 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 14:24:50.913885 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 14:24:50.914068 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 14:24:50.915352 systemd[1]: Startup finished in 1.008s (kernel) + 4.433s (initrd) + 6.602s (userspace) = 12.044s. Dec 13 14:24:51.327608 kubelet[1266]: E1213 14:24:51.327459 1266 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:24:51.329115 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:24:51.329260 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:24:53.255742 systemd[1]: Created slice system-sshd.slice. 
Dec 13 14:24:53.257188 systemd[1]: Started sshd@0-10.0.0.88:22-10.0.0.1:55940.service. Dec 13 14:24:53.295392 sshd[1275]: Accepted publickey for core from 10.0.0.1 port 55940 ssh2: RSA SHA256:EAWjiJIG7yD8wY8MRJ/aywn+PPpkYAApPiVa2OUhImg Dec 13 14:24:53.297633 sshd[1275]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:24:53.309609 systemd-logind[1201]: New session 1 of user core. Dec 13 14:24:53.310707 systemd[1]: Created slice user-500.slice. Dec 13 14:24:53.311895 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 14:24:53.323008 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 14:24:53.325008 systemd[1]: Starting user@500.service... Dec 13 14:24:53.328486 (systemd)[1278]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:24:53.431871 systemd[1278]: Queued start job for default target default.target. Dec 13 14:24:53.432629 systemd[1278]: Reached target paths.target. Dec 13 14:24:53.432662 systemd[1278]: Reached target sockets.target. Dec 13 14:24:53.432681 systemd[1278]: Reached target timers.target. Dec 13 14:24:53.432696 systemd[1278]: Reached target basic.target. Dec 13 14:24:53.432785 systemd[1278]: Reached target default.target. Dec 13 14:24:53.432819 systemd[1278]: Startup finished in 96ms. Dec 13 14:24:53.432905 systemd[1]: Started user@500.service. Dec 13 14:24:53.434315 systemd[1]: Started session-1.scope. Dec 13 14:24:53.487387 systemd[1]: Started sshd@1-10.0.0.88:22-10.0.0.1:55952.service. Dec 13 14:24:53.521279 sshd[1287]: Accepted publickey for core from 10.0.0.1 port 55952 ssh2: RSA SHA256:EAWjiJIG7yD8wY8MRJ/aywn+PPpkYAApPiVa2OUhImg Dec 13 14:24:53.522580 sshd[1287]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:24:53.526952 systemd-logind[1201]: New session 2 of user core. Dec 13 14:24:53.528167 systemd[1]: Started session-2.scope. 
Dec 13 14:24:53.586464 sshd[1287]: pam_unix(sshd:session): session closed for user core Dec 13 14:24:53.590328 systemd[1]: sshd@1-10.0.0.88:22-10.0.0.1:55952.service: Deactivated successfully. Dec 13 14:24:53.591067 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 14:24:53.591688 systemd-logind[1201]: Session 2 logged out. Waiting for processes to exit. Dec 13 14:24:53.593108 systemd[1]: Started sshd@2-10.0.0.88:22-10.0.0.1:55956.service. Dec 13 14:24:53.594012 systemd-logind[1201]: Removed session 2. Dec 13 14:24:53.627321 sshd[1293]: Accepted publickey for core from 10.0.0.1 port 55956 ssh2: RSA SHA256:EAWjiJIG7yD8wY8MRJ/aywn+PPpkYAApPiVa2OUhImg Dec 13 14:24:53.629005 sshd[1293]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:24:53.633660 systemd-logind[1201]: New session 3 of user core. Dec 13 14:24:53.634594 systemd[1]: Started session-3.scope. Dec 13 14:24:53.686565 sshd[1293]: pam_unix(sshd:session): session closed for user core Dec 13 14:24:53.689467 systemd[1]: sshd@2-10.0.0.88:22-10.0.0.1:55956.service: Deactivated successfully. Dec 13 14:24:53.690021 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 14:24:53.690519 systemd-logind[1201]: Session 3 logged out. Waiting for processes to exit. Dec 13 14:24:53.691596 systemd[1]: Started sshd@3-10.0.0.88:22-10.0.0.1:55960.service. Dec 13 14:24:53.692282 systemd-logind[1201]: Removed session 3. Dec 13 14:24:53.725035 sshd[1299]: Accepted publickey for core from 10.0.0.1 port 55960 ssh2: RSA SHA256:EAWjiJIG7yD8wY8MRJ/aywn+PPpkYAApPiVa2OUhImg Dec 13 14:24:53.726212 sshd[1299]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:24:53.730352 systemd-logind[1201]: New session 4 of user core. Dec 13 14:24:53.731453 systemd[1]: Started session-4.scope. 
Dec 13 14:24:53.787854 sshd[1299]: pam_unix(sshd:session): session closed for user core Dec 13 14:24:53.790916 systemd[1]: sshd@3-10.0.0.88:22-10.0.0.1:55960.service: Deactivated successfully. Dec 13 14:24:53.791473 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 14:24:53.792052 systemd-logind[1201]: Session 4 logged out. Waiting for processes to exit. Dec 13 14:24:53.793482 systemd[1]: Started sshd@4-10.0.0.88:22-10.0.0.1:55962.service. Dec 13 14:24:53.794341 systemd-logind[1201]: Removed session 4. Dec 13 14:24:53.828068 sshd[1306]: Accepted publickey for core from 10.0.0.1 port 55962 ssh2: RSA SHA256:EAWjiJIG7yD8wY8MRJ/aywn+PPpkYAApPiVa2OUhImg Dec 13 14:24:53.829248 sshd[1306]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:24:53.833035 systemd-logind[1201]: New session 5 of user core. Dec 13 14:24:53.833833 systemd[1]: Started session-5.scope. Dec 13 14:24:53.891601 sudo[1309]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 14:24:53.891817 sudo[1309]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 14:24:53.904258 systemd[1]: Starting coreos-metadata.service... Dec 13 14:24:53.912790 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 13 14:24:53.912967 systemd[1]: Finished coreos-metadata.service. Dec 13 14:24:54.354668 systemd[1]: Stopped kubelet.service. Dec 13 14:24:54.357021 systemd[1]: Starting kubelet.service... Dec 13 14:24:54.380124 systemd[1]: Reloading. 
Dec 13 14:24:54.444820 /usr/lib/systemd/system-generators/torcx-generator[1366]: time="2024-12-13T14:24:54Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:24:54.444863 /usr/lib/systemd/system-generators/torcx-generator[1366]: time="2024-12-13T14:24:54Z" level=info msg="torcx already run" Dec 13 14:24:54.691057 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:24:54.691081 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:24:54.715797 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:24:54.809941 systemd[1]: Started kubelet.service. Dec 13 14:24:54.813897 systemd[1]: Stopping kubelet.service... Dec 13 14:24:54.814216 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 14:24:54.814436 systemd[1]: Stopped kubelet.service. Dec 13 14:24:54.816180 systemd[1]: Starting kubelet.service... Dec 13 14:24:54.913280 systemd[1]: Started kubelet.service. Dec 13 14:24:55.123346 kubelet[1419]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:24:55.123346 kubelet[1419]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Dec 13 14:24:55.123346 kubelet[1419]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:24:55.123346 kubelet[1419]: I1213 14:24:55.123313 1419 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 14:24:55.298748 kubelet[1419]: I1213 14:24:55.298649 1419 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Dec 13 14:24:55.298748 kubelet[1419]: I1213 14:24:55.298686 1419 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 14:24:55.299002 kubelet[1419]: I1213 14:24:55.298958 1419 server.go:929] "Client rotation is on, will bootstrap in background" Dec 13 14:24:55.319349 kubelet[1419]: I1213 14:24:55.319297 1419 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:24:55.328867 kubelet[1419]: E1213 14:24:55.328795 1419 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Dec 13 14:24:55.328867 kubelet[1419]: I1213 14:24:55.328856 1419 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Dec 13 14:24:55.334118 kubelet[1419]: I1213 14:24:55.334082 1419 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 14:24:55.335045 kubelet[1419]: I1213 14:24:55.335004 1419 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 13 14:24:55.335196 kubelet[1419]: I1213 14:24:55.335156 1419 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 14:24:55.335376 kubelet[1419]: I1213 14:24:55.335187 1419 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.88","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Dec 13 14:24:55.335376 kubelet[1419]: I1213 14:24:55.335374 1419 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 14:24:55.335502 kubelet[1419]: I1213 14:24:55.335383 1419 container_manager_linux.go:300] "Creating device plugin manager" Dec 13 14:24:55.335502 kubelet[1419]: I1213 14:24:55.335492 1419 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:24:55.337881 kubelet[1419]: I1213 14:24:55.337846 1419 kubelet.go:408] "Attempting to sync node with API server" Dec 13 14:24:55.337881 kubelet[1419]: I1213 14:24:55.337878 1419 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 14:24:55.337951 kubelet[1419]: I1213 14:24:55.337919 1419 kubelet.go:314] "Adding apiserver pod source" Dec 13 14:24:55.337951 kubelet[1419]: I1213 14:24:55.337940 1419 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 14:24:55.338017 kubelet[1419]: E1213 14:24:55.337988 1419 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:24:55.338066 kubelet[1419]: E1213 14:24:55.338051 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:24:55.352098 kubelet[1419]: W1213 14:24:55.352043 1419 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Dec 13 14:24:55.352190 kubelet[1419]: E1213 14:24:55.352106 1419 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Dec 13 14:24:55.354847 kubelet[1419]: I1213 14:24:55.354822 1419 
kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 14:24:55.355539 kubelet[1419]: W1213 14:24:55.355506 1419 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.88" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Dec 13 14:24:55.355539 kubelet[1419]: E1213 14:24:55.355539 1419 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.0.0.88\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Dec 13 14:24:55.356621 kubelet[1419]: I1213 14:24:55.356596 1419 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 14:24:55.357110 kubelet[1419]: W1213 14:24:55.357088 1419 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 14:24:55.357634 kubelet[1419]: I1213 14:24:55.357610 1419 server.go:1269] "Started kubelet" Dec 13 14:24:55.358092 kubelet[1419]: I1213 14:24:55.358050 1419 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 14:24:55.365237 kubelet[1419]: I1213 14:24:55.365205 1419 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 14:24:55.365318 kubelet[1419]: I1213 14:24:55.365284 1419 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 14:24:55.366856 kubelet[1419]: I1213 14:24:55.366826 1419 server.go:460] "Adding debug handlers to kubelet server" Dec 13 14:24:55.367590 kubelet[1419]: E1213 14:24:55.367155 1419 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 14:24:55.368449 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Dec 13 14:24:55.368557 kubelet[1419]: I1213 14:24:55.368529 1419 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 14:24:55.368916 kubelet[1419]: I1213 14:24:55.368861 1419 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 14:24:55.369065 kubelet[1419]: I1213 14:24:55.369028 1419 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 13 14:24:55.369200 kubelet[1419]: I1213 14:24:55.369179 1419 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 13 14:24:55.369315 kubelet[1419]: I1213 14:24:55.369289 1419 reconciler.go:26] "Reconciler: start to sync state" Dec 13 14:24:55.369488 kubelet[1419]: I1213 14:24:55.369460 1419 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 14:24:55.369847 kubelet[1419]: E1213 14:24:55.369818 1419 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.88\" not found" Dec 13 14:24:55.370835 kubelet[1419]: I1213 14:24:55.370814 1419 factory.go:221] Registration of the containerd container factory successfully Dec 13 14:24:55.370835 kubelet[1419]: I1213 14:24:55.370827 1419 factory.go:221] Registration of the systemd container factory successfully Dec 13 14:24:55.380097 kubelet[1419]: E1213 14:24:55.379985 1419 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.88\" not found" node="10.0.0.88" Dec 13 14:24:55.386413 kubelet[1419]: I1213 14:24:55.386390 1419 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 14:24:55.386558 
kubelet[1419]: I1213 14:24:55.386538 1419 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 14:24:55.386689 kubelet[1419]: I1213 14:24:55.386656 1419 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:24:55.389841 kubelet[1419]: I1213 14:24:55.389805 1419 policy_none.go:49] "None policy: Start" Dec 13 14:24:55.390536 kubelet[1419]: I1213 14:24:55.390521 1419 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 14:24:55.390632 kubelet[1419]: I1213 14:24:55.390609 1419 state_mem.go:35] "Initializing new in-memory state store" Dec 13 14:24:55.397070 systemd[1]: Created slice kubepods.slice. Dec 13 14:24:55.401968 systemd[1]: Created slice kubepods-burstable.slice. Dec 13 14:24:55.405089 systemd[1]: Created slice kubepods-besteffort.slice. Dec 13 14:24:55.412481 kubelet[1419]: I1213 14:24:55.412455 1419 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 14:24:55.412627 kubelet[1419]: I1213 14:24:55.412606 1419 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 14:24:55.412677 kubelet[1419]: I1213 14:24:55.412625 1419 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 14:24:55.413142 kubelet[1419]: I1213 14:24:55.412888 1419 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 14:24:55.414041 kubelet[1419]: E1213 14:24:55.414027 1419 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.88\" not found" Dec 13 14:24:55.465848 kubelet[1419]: I1213 14:24:55.465785 1419 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 14:24:55.466836 kubelet[1419]: I1213 14:24:55.466786 1419 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 14:24:55.467001 kubelet[1419]: I1213 14:24:55.466853 1419 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 14:24:55.467001 kubelet[1419]: I1213 14:24:55.466897 1419 kubelet.go:2321] "Starting kubelet main sync loop" Dec 13 14:24:55.467001 kubelet[1419]: E1213 14:24:55.466955 1419 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Dec 13 14:24:55.514099 kubelet[1419]: I1213 14:24:55.514042 1419 kubelet_node_status.go:72] "Attempting to register node" node="10.0.0.88" Dec 13 14:24:55.520368 kubelet[1419]: I1213 14:24:55.520266 1419 kubelet_node_status.go:75] "Successfully registered node" node="10.0.0.88" Dec 13 14:24:55.520466 kubelet[1419]: E1213 14:24:55.520381 1419 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"10.0.0.88\": node \"10.0.0.88\" not found" Dec 13 14:24:55.528042 kubelet[1419]: E1213 14:24:55.528004 1419 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.88\" not found" Dec 13 14:24:55.629236 kubelet[1419]: E1213 14:24:55.629118 1419 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.88\" not found" Dec 13 14:24:55.729642 kubelet[1419]: E1213 14:24:55.729368 1419 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.88\" not found" Dec 13 14:24:55.830474 kubelet[1419]: E1213 14:24:55.830384 1419 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.88\" not found" Dec 13 14:24:55.931489 kubelet[1419]: E1213 14:24:55.931418 1419 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.88\" not found" Dec 13 14:24:55.937049 sudo[1309]: pam_unix(sudo:session): session closed for user root Dec 13 14:24:55.939205 sshd[1306]: pam_unix(sshd:session): session closed for user core Dec 13 14:24:55.941791 systemd[1]: 
sshd@4-10.0.0.88:22-10.0.0.1:55962.service: Deactivated successfully. Dec 13 14:24:55.942585 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 14:24:55.943197 systemd-logind[1201]: Session 5 logged out. Waiting for processes to exit. Dec 13 14:24:55.944053 systemd-logind[1201]: Removed session 5. Dec 13 14:24:56.032412 kubelet[1419]: E1213 14:24:56.032218 1419 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.88\" not found" Dec 13 14:24:56.133168 kubelet[1419]: E1213 14:24:56.133075 1419 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.88\" not found" Dec 13 14:24:56.234087 kubelet[1419]: E1213 14:24:56.234004 1419 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.88\" not found" Dec 13 14:24:56.301115 kubelet[1419]: I1213 14:24:56.300930 1419 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 13 14:24:56.301319 kubelet[1419]: W1213 14:24:56.301164 1419 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 13 14:24:56.301319 kubelet[1419]: W1213 14:24:56.301164 1419 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 13 14:24:56.301319 kubelet[1419]: W1213 14:24:56.301202 1419 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 13 14:24:56.334594 kubelet[1419]: E1213 14:24:56.334513 1419 
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.88\" not found" Dec 13 14:24:56.338894 kubelet[1419]: E1213 14:24:56.338835 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:24:56.435350 kubelet[1419]: E1213 14:24:56.435257 1419 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.88\" not found" Dec 13 14:24:56.536476 kubelet[1419]: E1213 14:24:56.536398 1419 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.88\" not found" Dec 13 14:24:56.637629 kubelet[1419]: E1213 14:24:56.637474 1419 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.88\" not found" Dec 13 14:24:56.738584 kubelet[1419]: E1213 14:24:56.738494 1419 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.88\" not found" Dec 13 14:24:56.839817 kubelet[1419]: E1213 14:24:56.839682 1419 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.88\" not found" Dec 13 14:24:56.940536 kubelet[1419]: E1213 14:24:56.940324 1419 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.88\" not found" Dec 13 14:24:57.041480 kubelet[1419]: I1213 14:24:57.041418 1419 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Dec 13 14:24:57.042034 env[1211]: time="2024-12-13T14:24:57.041956583Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Dec 13 14:24:57.042368 kubelet[1419]: I1213 14:24:57.042170 1419 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Dec 13 14:24:57.339090 kubelet[1419]: I1213 14:24:57.338872 1419 apiserver.go:52] "Watching apiserver" Dec 13 14:24:57.339600 kubelet[1419]: E1213 14:24:57.339195 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:24:57.352048 systemd[1]: Created slice kubepods-besteffort-pod35130931_71d2_488a_84f8_2826aa8541bd.slice. Dec 13 14:24:57.362402 systemd[1]: Created slice kubepods-burstable-pod56ca637a_6110_49ff_90b0_6cddf7e7fb82.slice. Dec 13 14:24:57.369681 kubelet[1419]: I1213 14:24:57.369630 1419 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 13 14:24:57.379772 kubelet[1419]: I1213 14:24:57.379716 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/56ca637a-6110-49ff-90b0-6cddf7e7fb82-cilium-run\") pod \"cilium-dfqp5\" (UID: \"56ca637a-6110-49ff-90b0-6cddf7e7fb82\") " pod="kube-system/cilium-dfqp5" Dec 13 14:24:57.379772 kubelet[1419]: I1213 14:24:57.379770 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/56ca637a-6110-49ff-90b0-6cddf7e7fb82-hostproc\") pod \"cilium-dfqp5\" (UID: \"56ca637a-6110-49ff-90b0-6cddf7e7fb82\") " pod="kube-system/cilium-dfqp5" Dec 13 14:24:57.379981 kubelet[1419]: I1213 14:24:57.379792 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/56ca637a-6110-49ff-90b0-6cddf7e7fb82-etc-cni-netd\") pod \"cilium-dfqp5\" (UID: \"56ca637a-6110-49ff-90b0-6cddf7e7fb82\") " pod="kube-system/cilium-dfqp5" Dec 13 14:24:57.379981 kubelet[1419]: I1213 14:24:57.379820 1419 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/56ca637a-6110-49ff-90b0-6cddf7e7fb82-hubble-tls\") pod \"cilium-dfqp5\" (UID: \"56ca637a-6110-49ff-90b0-6cddf7e7fb82\") " pod="kube-system/cilium-dfqp5" Dec 13 14:24:57.379981 kubelet[1419]: I1213 14:24:57.379842 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/35130931-71d2-488a-84f8-2826aa8541bd-kube-proxy\") pod \"kube-proxy-wdspv\" (UID: \"35130931-71d2-488a-84f8-2826aa8541bd\") " pod="kube-system/kube-proxy-wdspv" Dec 13 14:24:57.379981 kubelet[1419]: I1213 14:24:57.379886 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/35130931-71d2-488a-84f8-2826aa8541bd-lib-modules\") pod \"kube-proxy-wdspv\" (UID: \"35130931-71d2-488a-84f8-2826aa8541bd\") " pod="kube-system/kube-proxy-wdspv" Dec 13 14:24:57.379981 kubelet[1419]: I1213 14:24:57.379903 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltp9w\" (UniqueName: \"kubernetes.io/projected/35130931-71d2-488a-84f8-2826aa8541bd-kube-api-access-ltp9w\") pod \"kube-proxy-wdspv\" (UID: \"35130931-71d2-488a-84f8-2826aa8541bd\") " pod="kube-system/kube-proxy-wdspv" Dec 13 14:24:57.379981 kubelet[1419]: I1213 14:24:57.379920 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/56ca637a-6110-49ff-90b0-6cddf7e7fb82-cilium-cgroup\") pod \"cilium-dfqp5\" (UID: \"56ca637a-6110-49ff-90b0-6cddf7e7fb82\") " pod="kube-system/cilium-dfqp5" Dec 13 14:24:57.380130 kubelet[1419]: I1213 14:24:57.379940 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/56ca637a-6110-49ff-90b0-6cddf7e7fb82-cilium-config-path\") pod \"cilium-dfqp5\" (UID: \"56ca637a-6110-49ff-90b0-6cddf7e7fb82\") " pod="kube-system/cilium-dfqp5" Dec 13 14:24:57.380130 kubelet[1419]: I1213 14:24:57.380018 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/56ca637a-6110-49ff-90b0-6cddf7e7fb82-host-proc-sys-kernel\") pod \"cilium-dfqp5\" (UID: \"56ca637a-6110-49ff-90b0-6cddf7e7fb82\") " pod="kube-system/cilium-dfqp5" Dec 13 14:24:57.380130 kubelet[1419]: I1213 14:24:57.380046 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/56ca637a-6110-49ff-90b0-6cddf7e7fb82-lib-modules\") pod \"cilium-dfqp5\" (UID: \"56ca637a-6110-49ff-90b0-6cddf7e7fb82\") " pod="kube-system/cilium-dfqp5" Dec 13 14:24:57.380130 kubelet[1419]: I1213 14:24:57.380115 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrxjf\" (UniqueName: \"kubernetes.io/projected/56ca637a-6110-49ff-90b0-6cddf7e7fb82-kube-api-access-jrxjf\") pod \"cilium-dfqp5\" (UID: \"56ca637a-6110-49ff-90b0-6cddf7e7fb82\") " pod="kube-system/cilium-dfqp5" Dec 13 14:24:57.380246 kubelet[1419]: I1213 14:24:57.380140 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/56ca637a-6110-49ff-90b0-6cddf7e7fb82-bpf-maps\") pod \"cilium-dfqp5\" (UID: \"56ca637a-6110-49ff-90b0-6cddf7e7fb82\") " pod="kube-system/cilium-dfqp5" Dec 13 14:24:57.380246 kubelet[1419]: I1213 14:24:57.380159 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/56ca637a-6110-49ff-90b0-6cddf7e7fb82-cni-path\") pod 
\"cilium-dfqp5\" (UID: \"56ca637a-6110-49ff-90b0-6cddf7e7fb82\") " pod="kube-system/cilium-dfqp5" Dec 13 14:24:57.380246 kubelet[1419]: I1213 14:24:57.380183 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/56ca637a-6110-49ff-90b0-6cddf7e7fb82-xtables-lock\") pod \"cilium-dfqp5\" (UID: \"56ca637a-6110-49ff-90b0-6cddf7e7fb82\") " pod="kube-system/cilium-dfqp5" Dec 13 14:24:57.380246 kubelet[1419]: I1213 14:24:57.380201 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/56ca637a-6110-49ff-90b0-6cddf7e7fb82-clustermesh-secrets\") pod \"cilium-dfqp5\" (UID: \"56ca637a-6110-49ff-90b0-6cddf7e7fb82\") " pod="kube-system/cilium-dfqp5" Dec 13 14:24:57.380246 kubelet[1419]: I1213 14:24:57.380217 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/56ca637a-6110-49ff-90b0-6cddf7e7fb82-host-proc-sys-net\") pod \"cilium-dfqp5\" (UID: \"56ca637a-6110-49ff-90b0-6cddf7e7fb82\") " pod="kube-system/cilium-dfqp5" Dec 13 14:24:57.380364 kubelet[1419]: I1213 14:24:57.380273 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/35130931-71d2-488a-84f8-2826aa8541bd-xtables-lock\") pod \"kube-proxy-wdspv\" (UID: \"35130931-71d2-488a-84f8-2826aa8541bd\") " pod="kube-system/kube-proxy-wdspv" Dec 13 14:24:57.481388 kubelet[1419]: I1213 14:24:57.481288 1419 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 13 14:24:57.661543 kubelet[1419]: E1213 14:24:57.661360 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:24:57.663050 env[1211]: time="2024-12-13T14:24:57.662988454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wdspv,Uid:35130931-71d2-488a-84f8-2826aa8541bd,Namespace:kube-system,Attempt:0,}" Dec 13 14:24:57.680978 kubelet[1419]: E1213 14:24:57.680865 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:24:57.681569 env[1211]: time="2024-12-13T14:24:57.681522103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dfqp5,Uid:56ca637a-6110-49ff-90b0-6cddf7e7fb82,Namespace:kube-system,Attempt:0,}" Dec 13 14:24:58.339688 kubelet[1419]: E1213 14:24:58.339612 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:24:59.184281 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1074450405.mount: Deactivated successfully. 
Dec 13 14:24:59.194028 env[1211]: time="2024-12-13T14:24:59.193935253Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:24:59.198472 env[1211]: time="2024-12-13T14:24:59.198390102Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:24:59.202799 env[1211]: time="2024-12-13T14:24:59.202706005Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:24:59.204793 env[1211]: time="2024-12-13T14:24:59.204691693Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:24:59.206322 env[1211]: time="2024-12-13T14:24:59.206240420Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:24:59.207678 env[1211]: time="2024-12-13T14:24:59.207630509Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:24:59.208774 env[1211]: time="2024-12-13T14:24:59.208596126Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:24:59.214308 env[1211]: time="2024-12-13T14:24:59.214257714Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:24:59.245872 env[1211]: time="2024-12-13T14:24:59.245245001Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:24:59.245872 env[1211]: time="2024-12-13T14:24:59.245327325Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:24:59.245872 env[1211]: time="2024-12-13T14:24:59.245351494Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:24:59.247415 env[1211]: time="2024-12-13T14:24:59.247356544Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/da5c4e3916f55a16024f5008ebc9bcea4692ad3b69387ddf95ece32ea81846c5 pid=1475 runtime=io.containerd.runc.v2 Dec 13 14:24:59.252742 env[1211]: time="2024-12-13T14:24:59.252651025Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:24:59.252742 env[1211]: time="2024-12-13T14:24:59.252685656Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:24:59.252742 env[1211]: time="2024-12-13T14:24:59.252699992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:24:59.253149 env[1211]: time="2024-12-13T14:24:59.253088187Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/84daa0fe66b55f7d418f76fb714f80c25b96ba07e212f157e6d09bfc2c4513a8 pid=1492 runtime=io.containerd.runc.v2 Dec 13 14:24:59.268545 systemd[1]: Started cri-containerd-84daa0fe66b55f7d418f76fb714f80c25b96ba07e212f157e6d09bfc2c4513a8.scope. Dec 13 14:24:59.276090 systemd[1]: Started cri-containerd-da5c4e3916f55a16024f5008ebc9bcea4692ad3b69387ddf95ece32ea81846c5.scope. Dec 13 14:24:59.339930 kubelet[1419]: E1213 14:24:59.339870 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:24:59.405372 env[1211]: time="2024-12-13T14:24:59.405288687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dfqp5,Uid:56ca637a-6110-49ff-90b0-6cddf7e7fb82,Namespace:kube-system,Attempt:0,} returns sandbox id \"da5c4e3916f55a16024f5008ebc9bcea4692ad3b69387ddf95ece32ea81846c5\"" Dec 13 14:24:59.407006 kubelet[1419]: E1213 14:24:59.406946 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:24:59.409097 env[1211]: time="2024-12-13T14:24:59.409043239Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 14:24:59.414248 env[1211]: time="2024-12-13T14:24:59.414184288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wdspv,Uid:35130931-71d2-488a-84f8-2826aa8541bd,Namespace:kube-system,Attempt:0,} returns sandbox id \"84daa0fe66b55f7d418f76fb714f80c25b96ba07e212f157e6d09bfc2c4513a8\"" Dec 13 14:24:59.414983 kubelet[1419]: E1213 14:24:59.414954 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:25:00.340550 kubelet[1419]: E1213 14:25:00.340490 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:01.341356 kubelet[1419]: E1213 14:25:01.341275 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:02.342398 kubelet[1419]: E1213 14:25:02.342314 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:03.343476 kubelet[1419]: E1213 14:25:03.343389 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:04.344559 kubelet[1419]: E1213 14:25:04.344481 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:05.345578 kubelet[1419]: E1213 14:25:05.345498 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:06.346504 kubelet[1419]: E1213 14:25:06.346398 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:07.347393 kubelet[1419]: E1213 14:25:07.347348 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:08.348067 kubelet[1419]: E1213 14:25:08.348000 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:09.349111 kubelet[1419]: E1213 14:25:09.349019 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:09.377508 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3498470521.mount: 
Deactivated successfully. Dec 13 14:25:10.349603 kubelet[1419]: E1213 14:25:10.349542 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:11.349847 kubelet[1419]: E1213 14:25:11.349777 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:12.350910 kubelet[1419]: E1213 14:25:12.350854 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:13.351472 kubelet[1419]: E1213 14:25:13.351406 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:13.463538 env[1211]: time="2024-12-13T14:25:13.463467585Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:25:13.510345 env[1211]: time="2024-12-13T14:25:13.510280477Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:25:13.550996 env[1211]: time="2024-12-13T14:25:13.550926153Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:25:13.551674 env[1211]: time="2024-12-13T14:25:13.551640058Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 14:25:13.552928 env[1211]: 
time="2024-12-13T14:25:13.552899950Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\"" Dec 13 14:25:13.554077 env[1211]: time="2024-12-13T14:25:13.554038942Z" level=info msg="CreateContainer within sandbox \"da5c4e3916f55a16024f5008ebc9bcea4692ad3b69387ddf95ece32ea81846c5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:25:13.917942 env[1211]: time="2024-12-13T14:25:13.917851191Z" level=info msg="CreateContainer within sandbox \"da5c4e3916f55a16024f5008ebc9bcea4692ad3b69387ddf95ece32ea81846c5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7b344b7dfe2f3820e759d82369ce0ff7e05c98d0a3d082414e09970e1b1505fb\"" Dec 13 14:25:13.918619 env[1211]: time="2024-12-13T14:25:13.918587629Z" level=info msg="StartContainer for \"7b344b7dfe2f3820e759d82369ce0ff7e05c98d0a3d082414e09970e1b1505fb\"" Dec 13 14:25:13.944522 systemd[1]: Started cri-containerd-7b344b7dfe2f3820e759d82369ce0ff7e05c98d0a3d082414e09970e1b1505fb.scope. Dec 13 14:25:13.985678 systemd[1]: cri-containerd-7b344b7dfe2f3820e759d82369ce0ff7e05c98d0a3d082414e09970e1b1505fb.scope: Deactivated successfully. 
Dec 13 14:25:14.247774 env[1211]: time="2024-12-13T14:25:14.247555788Z" level=info msg="StartContainer for \"7b344b7dfe2f3820e759d82369ce0ff7e05c98d0a3d082414e09970e1b1505fb\" returns successfully" Dec 13 14:25:14.351699 kubelet[1419]: E1213 14:25:14.351608 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:14.498153 kubelet[1419]: E1213 14:25:14.498039 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:25:14.790941 env[1211]: time="2024-12-13T14:25:14.790786871Z" level=info msg="shim disconnected" id=7b344b7dfe2f3820e759d82369ce0ff7e05c98d0a3d082414e09970e1b1505fb Dec 13 14:25:14.791437 env[1211]: time="2024-12-13T14:25:14.791385658Z" level=warning msg="cleaning up after shim disconnected" id=7b344b7dfe2f3820e759d82369ce0ff7e05c98d0a3d082414e09970e1b1505fb namespace=k8s.io Dec 13 14:25:14.791437 env[1211]: time="2024-12-13T14:25:14.791409705Z" level=info msg="cleaning up dead shim" Dec 13 14:25:14.800839 env[1211]: time="2024-12-13T14:25:14.800770660Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:25:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1596 runtime=io.containerd.runc.v2\n" Dec 13 14:25:14.809415 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7b344b7dfe2f3820e759d82369ce0ff7e05c98d0a3d082414e09970e1b1505fb-rootfs.mount: Deactivated successfully. 
Dec 13 14:25:15.338801 kubelet[1419]: E1213 14:25:15.338700 1419 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:15.352081 kubelet[1419]: E1213 14:25:15.352034 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:15.501342 kubelet[1419]: E1213 14:25:15.501304 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:25:15.503017 env[1211]: time="2024-12-13T14:25:15.502947070Z" level=info msg="CreateContainer within sandbox \"da5c4e3916f55a16024f5008ebc9bcea4692ad3b69387ddf95ece32ea81846c5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 14:25:15.546105 env[1211]: time="2024-12-13T14:25:15.546013719Z" level=info msg="CreateContainer within sandbox \"da5c4e3916f55a16024f5008ebc9bcea4692ad3b69387ddf95ece32ea81846c5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3d1f263590c9e05ecf0e40fc6af4baae3befd5874125ea0f55eafce4d60b9e00\"" Dec 13 14:25:15.546891 env[1211]: time="2024-12-13T14:25:15.546821197Z" level=info msg="StartContainer for \"3d1f263590c9e05ecf0e40fc6af4baae3befd5874125ea0f55eafce4d60b9e00\"" Dec 13 14:25:15.572286 systemd[1]: Started cri-containerd-3d1f263590c9e05ecf0e40fc6af4baae3befd5874125ea0f55eafce4d60b9e00.scope. Dec 13 14:25:15.721426 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 14:25:15.721649 systemd[1]: Stopped systemd-sysctl.service. Dec 13 14:25:15.721867 systemd[1]: Stopping systemd-sysctl.service... Dec 13 14:25:15.723650 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:25:15.724087 systemd[1]: cri-containerd-3d1f263590c9e05ecf0e40fc6af4baae3befd5874125ea0f55eafce4d60b9e00.scope: Deactivated successfully. 
Dec 13 14:25:15.733311 systemd[1]: Finished systemd-sysctl.service. Dec 13 14:25:15.786362 env[1211]: time="2024-12-13T14:25:15.786271076Z" level=info msg="StartContainer for \"3d1f263590c9e05ecf0e40fc6af4baae3befd5874125ea0f55eafce4d60b9e00\" returns successfully" Dec 13 14:25:15.863337 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3d1f263590c9e05ecf0e40fc6af4baae3befd5874125ea0f55eafce4d60b9e00-rootfs.mount: Deactivated successfully. Dec 13 14:25:16.082783 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3608304262.mount: Deactivated successfully. Dec 13 14:25:16.323348 env[1211]: time="2024-12-13T14:25:16.323289181Z" level=info msg="shim disconnected" id=3d1f263590c9e05ecf0e40fc6af4baae3befd5874125ea0f55eafce4d60b9e00 Dec 13 14:25:16.323348 env[1211]: time="2024-12-13T14:25:16.323338214Z" level=warning msg="cleaning up after shim disconnected" id=3d1f263590c9e05ecf0e40fc6af4baae3befd5874125ea0f55eafce4d60b9e00 namespace=k8s.io Dec 13 14:25:16.323348 env[1211]: time="2024-12-13T14:25:16.323349139Z" level=info msg="cleaning up dead shim" Dec 13 14:25:16.330150 env[1211]: time="2024-12-13T14:25:16.330126766Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:25:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1661 runtime=io.containerd.runc.v2\n" Dec 13 14:25:16.352858 kubelet[1419]: E1213 14:25:16.352661 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:16.504793 kubelet[1419]: E1213 14:25:16.504710 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:25:16.506633 env[1211]: time="2024-12-13T14:25:16.506572119Z" level=info msg="CreateContainer within sandbox \"da5c4e3916f55a16024f5008ebc9bcea4692ad3b69387ddf95ece32ea81846c5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 
14:25:16.767908 env[1211]: time="2024-12-13T14:25:16.767507001Z" level=info msg="CreateContainer within sandbox \"da5c4e3916f55a16024f5008ebc9bcea4692ad3b69387ddf95ece32ea81846c5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"dd381fb3658e729eeffc24e50983764cdd614e659898a7011661becc1a371e8b\"" Dec 13 14:25:16.768195 env[1211]: time="2024-12-13T14:25:16.768143739Z" level=info msg="StartContainer for \"dd381fb3658e729eeffc24e50983764cdd614e659898a7011661becc1a371e8b\"" Dec 13 14:25:16.791926 systemd[1]: Started cri-containerd-dd381fb3658e729eeffc24e50983764cdd614e659898a7011661becc1a371e8b.scope. Dec 13 14:25:16.854778 systemd[1]: cri-containerd-dd381fb3658e729eeffc24e50983764cdd614e659898a7011661becc1a371e8b.scope: Deactivated successfully. Dec 13 14:25:16.855847 env[1211]: time="2024-12-13T14:25:16.855791065Z" level=info msg="StartContainer for \"dd381fb3658e729eeffc24e50983764cdd614e659898a7011661becc1a371e8b\" returns successfully" Dec 13 14:25:16.878319 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd381fb3658e729eeffc24e50983764cdd614e659898a7011661becc1a371e8b-rootfs.mount: Deactivated successfully. 
Dec 13 14:25:17.353292 kubelet[1419]: E1213 14:25:17.353221 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:17.471806 env[1211]: time="2024-12-13T14:25:17.471716701Z" level=info msg="shim disconnected" id=dd381fb3658e729eeffc24e50983764cdd614e659898a7011661becc1a371e8b Dec 13 14:25:17.471806 env[1211]: time="2024-12-13T14:25:17.471795292Z" level=warning msg="cleaning up after shim disconnected" id=dd381fb3658e729eeffc24e50983764cdd614e659898a7011661becc1a371e8b namespace=k8s.io Dec 13 14:25:17.471806 env[1211]: time="2024-12-13T14:25:17.471805816Z" level=info msg="cleaning up dead shim" Dec 13 14:25:17.485053 env[1211]: time="2024-12-13T14:25:17.484989711Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:25:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1719 runtime=io.containerd.runc.v2\n" Dec 13 14:25:17.611777 kubelet[1419]: E1213 14:25:17.611268 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:25:17.612924 env[1211]: time="2024-12-13T14:25:17.612878879Z" level=info msg="CreateContainer within sandbox \"da5c4e3916f55a16024f5008ebc9bcea4692ad3b69387ddf95ece32ea81846c5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 14:25:17.955180 env[1211]: time="2024-12-13T14:25:17.954987194Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:25:18.076433 env[1211]: time="2024-12-13T14:25:18.076349288Z" level=info msg="CreateContainer within sandbox \"da5c4e3916f55a16024f5008ebc9bcea4692ad3b69387ddf95ece32ea81846c5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"df9882ab366e55bb8470085f8b8f62e9b33e94039b0d518a2eea2be210bc826b\"" Dec 13 
14:25:18.077048 env[1211]: time="2024-12-13T14:25:18.077001777Z" level=info msg="StartContainer for \"df9882ab366e55bb8470085f8b8f62e9b33e94039b0d518a2eea2be210bc826b\"" Dec 13 14:25:18.086228 env[1211]: time="2024-12-13T14:25:18.086181913Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:25:18.094802 env[1211]: time="2024-12-13T14:25:18.094743900Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:25:18.101671 systemd[1]: Started cri-containerd-df9882ab366e55bb8470085f8b8f62e9b33e94039b0d518a2eea2be210bc826b.scope. Dec 13 14:25:18.133114 systemd[1]: cri-containerd-df9882ab366e55bb8470085f8b8f62e9b33e94039b0d518a2eea2be210bc826b.scope: Deactivated successfully. Dec 13 14:25:18.135069 env[1211]: time="2024-12-13T14:25:18.135019515Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:25:18.135649 env[1211]: time="2024-12-13T14:25:18.135622832Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\"" Dec 13 14:25:18.137643 env[1211]: time="2024-12-13T14:25:18.137608885Z" level=info msg="CreateContainer within sandbox \"84daa0fe66b55f7d418f76fb714f80c25b96ba07e212f157e6d09bfc2c4513a8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 14:25:18.154091 env[1211]: time="2024-12-13T14:25:18.154043885Z" level=info msg="StartContainer for \"df9882ab366e55bb8470085f8b8f62e9b33e94039b0d518a2eea2be210bc826b\" returns successfully" Dec 13 14:25:18.354207 kubelet[1419]: 
E1213 14:25:18.354011 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:18.491763 env[1211]: time="2024-12-13T14:25:18.491675304Z" level=info msg="shim disconnected" id=df9882ab366e55bb8470085f8b8f62e9b33e94039b0d518a2eea2be210bc826b Dec 13 14:25:18.492249 env[1211]: time="2024-12-13T14:25:18.491865856Z" level=warning msg="cleaning up after shim disconnected" id=df9882ab366e55bb8470085f8b8f62e9b33e94039b0d518a2eea2be210bc826b namespace=k8s.io Dec 13 14:25:18.492249 env[1211]: time="2024-12-13T14:25:18.491881983Z" level=info msg="cleaning up dead shim" Dec 13 14:25:18.509570 env[1211]: time="2024-12-13T14:25:18.509480019Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:25:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1774 runtime=io.containerd.runc.v2\n" Dec 13 14:25:18.516458 env[1211]: time="2024-12-13T14:25:18.516373835Z" level=info msg="CreateContainer within sandbox \"84daa0fe66b55f7d418f76fb714f80c25b96ba07e212f157e6d09bfc2c4513a8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3b225e9e73e7d3d895773abb95444d84c447b4770795b52327e04d1a0a2f857d\"" Dec 13 14:25:18.517267 env[1211]: time="2024-12-13T14:25:18.517083293Z" level=info msg="StartContainer for \"3b225e9e73e7d3d895773abb95444d84c447b4770795b52327e04d1a0a2f857d\"" Dec 13 14:25:18.534699 systemd[1]: Started cri-containerd-3b225e9e73e7d3d895773abb95444d84c447b4770795b52327e04d1a0a2f857d.scope. 
Dec 13 14:25:18.722279 env[1211]: time="2024-12-13T14:25:18.722061822Z" level=info msg="StartContainer for \"3b225e9e73e7d3d895773abb95444d84c447b4770795b52327e04d1a0a2f857d\" returns successfully" Dec 13 14:25:18.726790 kubelet[1419]: E1213 14:25:18.726764 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:25:18.728308 env[1211]: time="2024-12-13T14:25:18.728262536Z" level=info msg="CreateContainer within sandbox \"da5c4e3916f55a16024f5008ebc9bcea4692ad3b69387ddf95ece32ea81846c5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 14:25:18.832672 systemd[1]: run-containerd-runc-k8s.io-df9882ab366e55bb8470085f8b8f62e9b33e94039b0d518a2eea2be210bc826b-runc.9R86Jr.mount: Deactivated successfully. Dec 13 14:25:18.832810 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-df9882ab366e55bb8470085f8b8f62e9b33e94039b0d518a2eea2be210bc826b-rootfs.mount: Deactivated successfully. Dec 13 14:25:19.196305 env[1211]: time="2024-12-13T14:25:19.196106381Z" level=info msg="CreateContainer within sandbox \"da5c4e3916f55a16024f5008ebc9bcea4692ad3b69387ddf95ece32ea81846c5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"226ff5b657c3418066fd35a5555d6a849b725bc3e38910d104548ca6a5df8ad9\"" Dec 13 14:25:19.196836 env[1211]: time="2024-12-13T14:25:19.196804877Z" level=info msg="StartContainer for \"226ff5b657c3418066fd35a5555d6a849b725bc3e38910d104548ca6a5df8ad9\"" Dec 13 14:25:19.218357 systemd[1]: Started cri-containerd-226ff5b657c3418066fd35a5555d6a849b725bc3e38910d104548ca6a5df8ad9.scope. 
Dec 13 14:25:19.334708 env[1211]: time="2024-12-13T14:25:19.334599094Z" level=info msg="StartContainer for \"226ff5b657c3418066fd35a5555d6a849b725bc3e38910d104548ca6a5df8ad9\" returns successfully" Dec 13 14:25:19.354341 kubelet[1419]: E1213 14:25:19.354288 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:19.404888 kubelet[1419]: I1213 14:25:19.403871 1419 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Dec 13 14:25:19.666775 kernel: Initializing XFRM netlink socket Dec 13 14:25:19.731294 kubelet[1419]: E1213 14:25:19.731242 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:25:19.731294 kubelet[1419]: E1213 14:25:19.731245 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:25:19.831861 systemd[1]: run-containerd-runc-k8s.io-226ff5b657c3418066fd35a5555d6a849b725bc3e38910d104548ca6a5df8ad9-runc.GOKa41.mount: Deactivated successfully. 
Dec 13 14:25:19.898780 kubelet[1419]: I1213 14:25:19.898682 1419 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wdspv" podStartSLOduration=6.177628553 podStartE2EDuration="24.898664347s" podCreationTimestamp="2024-12-13 14:24:55 +0000 UTC" firstStartedPulling="2024-12-13 14:24:59.415510722 +0000 UTC m=+4.498004807" lastFinishedPulling="2024-12-13 14:25:18.136546516 +0000 UTC m=+23.219040601" observedRunningTime="2024-12-13 14:25:19.857138844 +0000 UTC m=+24.939632939" watchObservedRunningTime="2024-12-13 14:25:19.898664347 +0000 UTC m=+24.981158432" Dec 13 14:25:20.354522 kubelet[1419]: E1213 14:25:20.354409 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:20.733377 kubelet[1419]: E1213 14:25:20.733241 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:25:20.733866 kubelet[1419]: E1213 14:25:20.733825 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:25:21.352429 systemd-networkd[1039]: cilium_host: Link UP Dec 13 14:25:21.352563 systemd-networkd[1039]: cilium_net: Link UP Dec 13 14:25:21.355961 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Dec 13 14:25:21.356014 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Dec 13 14:25:21.356040 kubelet[1419]: E1213 14:25:21.355843 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:21.356411 systemd-networkd[1039]: cilium_net: Gained carrier Dec 13 14:25:21.356608 systemd-networkd[1039]: cilium_host: Gained carrier Dec 13 14:25:21.369219 systemd-networkd[1039]: cilium_net: Gained IPv6LL Dec 13 
14:25:21.450998 systemd-networkd[1039]: cilium_vxlan: Link UP Dec 13 14:25:21.451009 systemd-networkd[1039]: cilium_vxlan: Gained carrier Dec 13 14:25:21.697764 kernel: NET: Registered PF_ALG protocol family Dec 13 14:25:21.734335 kubelet[1419]: E1213 14:25:21.734303 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:25:22.274337 kubelet[1419]: I1213 14:25:22.274253 1419 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-dfqp5" podStartSLOduration=13.129977482 podStartE2EDuration="27.27422827s" podCreationTimestamp="2024-12-13 14:24:55 +0000 UTC" firstStartedPulling="2024-12-13 14:24:59.408483403 +0000 UTC m=+4.490977488" lastFinishedPulling="2024-12-13 14:25:13.552734171 +0000 UTC m=+18.635228276" observedRunningTime="2024-12-13 14:25:19.90111026 +0000 UTC m=+24.983604365" watchObservedRunningTime="2024-12-13 14:25:22.27422827 +0000 UTC m=+27.356722355" Dec 13 14:25:22.280470 systemd[1]: Created slice kubepods-besteffort-podfa59a637_f1d2_4d6c_95af_6aeb7ddaa480.slice. 
Dec 13 14:25:22.332888 systemd-networkd[1039]: cilium_host: Gained IPv6LL Dec 13 14:25:22.356924 kubelet[1419]: E1213 14:25:22.356891 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:22.372098 kubelet[1419]: I1213 14:25:22.372059 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pbjw\" (UniqueName: \"kubernetes.io/projected/fa59a637-f1d2-4d6c-95af-6aeb7ddaa480-kube-api-access-7pbjw\") pod \"nginx-deployment-8587fbcb89-lx5jn\" (UID: \"fa59a637-f1d2-4d6c-95af-6aeb7ddaa480\") " pod="default/nginx-deployment-8587fbcb89-lx5jn" Dec 13 14:25:22.402462 systemd-networkd[1039]: lxc_health: Link UP Dec 13 14:25:22.410751 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 14:25:22.410483 systemd-networkd[1039]: lxc_health: Gained carrier Dec 13 14:25:22.584189 env[1211]: time="2024-12-13T14:25:22.584062450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-lx5jn,Uid:fa59a637-f1d2-4d6c-95af-6aeb7ddaa480,Namespace:default,Attempt:0,}" Dec 13 14:25:22.621436 systemd-networkd[1039]: lxcfc72aa8cdf82: Link UP Dec 13 14:25:22.633872 kernel: eth0: renamed from tmp858f3 Dec 13 14:25:22.640116 systemd-networkd[1039]: lxcfc72aa8cdf82: Gained carrier Dec 13 14:25:22.640821 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcfc72aa8cdf82: link becomes ready Dec 13 14:25:22.908359 systemd-networkd[1039]: cilium_vxlan: Gained IPv6LL Dec 13 14:25:23.357259 kubelet[1419]: E1213 14:25:23.357128 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:23.682819 kubelet[1419]: E1213 14:25:23.682665 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:25:23.742457 
systemd-networkd[1039]: lxc_health: Gained IPv6LL Dec 13 14:25:23.932205 systemd-networkd[1039]: lxcfc72aa8cdf82: Gained IPv6LL Dec 13 14:25:24.357363 kubelet[1419]: E1213 14:25:24.357260 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:24.624238 kubelet[1419]: I1213 14:25:24.624071 1419 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 14:25:24.624646 kubelet[1419]: E1213 14:25:24.624535 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:25:24.739034 kubelet[1419]: E1213 14:25:24.738986 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:25:25.358305 kubelet[1419]: E1213 14:25:25.358248 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:26.184503 env[1211]: time="2024-12-13T14:25:26.184415391Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:25:26.184503 env[1211]: time="2024-12-13T14:25:26.184471028Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:25:26.184503 env[1211]: time="2024-12-13T14:25:26.184484517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:25:26.184917 env[1211]: time="2024-12-13T14:25:26.184671441Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/858f3a277271e93975f1a8de00ba47371f7d1cae3c85a52cd154575a790c5271 pid=2489 runtime=io.containerd.runc.v2 Dec 13 14:25:26.199305 systemd[1]: Started cri-containerd-858f3a277271e93975f1a8de00ba47371f7d1cae3c85a52cd154575a790c5271.scope. Dec 13 14:25:26.210424 systemd-resolved[1152]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 14:25:26.232900 env[1211]: time="2024-12-13T14:25:26.232847004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-lx5jn,Uid:fa59a637-f1d2-4d6c-95af-6aeb7ddaa480,Namespace:default,Attempt:0,} returns sandbox id \"858f3a277271e93975f1a8de00ba47371f7d1cae3c85a52cd154575a790c5271\"" Dec 13 14:25:26.234121 env[1211]: time="2024-12-13T14:25:26.234087180Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 14:25:26.359218 kubelet[1419]: E1213 14:25:26.359161 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:27.360311 kubelet[1419]: E1213 14:25:27.360237 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:28.360982 kubelet[1419]: E1213 14:25:28.360877 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:29.361117 kubelet[1419]: E1213 14:25:29.361048 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:29.498322 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2619835269.mount: Deactivated successfully. 
Dec 13 14:25:30.361256 kubelet[1419]: E1213 14:25:30.361190 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:31.362012 kubelet[1419]: E1213 14:25:31.361923 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:32.362377 kubelet[1419]: E1213 14:25:32.362299 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:32.599192 env[1211]: time="2024-12-13T14:25:32.599106276Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:25:32.601786 env[1211]: time="2024-12-13T14:25:32.601739912Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:25:32.603653 env[1211]: time="2024-12-13T14:25:32.603605927Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:25:32.606010 env[1211]: time="2024-12-13T14:25:32.605963162Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:25:32.606623 env[1211]: time="2024-12-13T14:25:32.606589776Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 14:25:32.608578 env[1211]: time="2024-12-13T14:25:32.608540113Z" level=info msg="CreateContainer within sandbox 
\"858f3a277271e93975f1a8de00ba47371f7d1cae3c85a52cd154575a790c5271\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Dec 13 14:25:32.815361 env[1211]: time="2024-12-13T14:25:32.815245283Z" level=info msg="CreateContainer within sandbox \"858f3a277271e93975f1a8de00ba47371f7d1cae3c85a52cd154575a790c5271\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"c685feac490481b08b49bf12f8fb2e765ffb8c621433d159e9a435e20e1f3d1d\"" Dec 13 14:25:32.815977 env[1211]: time="2024-12-13T14:25:32.815931178Z" level=info msg="StartContainer for \"c685feac490481b08b49bf12f8fb2e765ffb8c621433d159e9a435e20e1f3d1d\"" Dec 13 14:25:32.834232 systemd[1]: Started cri-containerd-c685feac490481b08b49bf12f8fb2e765ffb8c621433d159e9a435e20e1f3d1d.scope. Dec 13 14:25:32.935927 env[1211]: time="2024-12-13T14:25:32.935840850Z" level=info msg="StartContainer for \"c685feac490481b08b49bf12f8fb2e765ffb8c621433d159e9a435e20e1f3d1d\" returns successfully" Dec 13 14:25:33.363410 kubelet[1419]: E1213 14:25:33.363299 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:34.364272 kubelet[1419]: E1213 14:25:34.364196 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:34.873040 update_engine[1203]: I1213 14:25:34.872941 1203 update_attempter.cc:509] Updating boot flags... 
Dec 13 14:25:35.338890 kubelet[1419]: E1213 14:25:35.338845 1419 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:35.364780 kubelet[1419]: E1213 14:25:35.364688 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:36.365175 kubelet[1419]: E1213 14:25:36.365099 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:37.365767 kubelet[1419]: E1213 14:25:37.365662 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:38.365970 kubelet[1419]: E1213 14:25:38.365919 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:39.367080 kubelet[1419]: E1213 14:25:39.366996 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:40.190083 kubelet[1419]: I1213 14:25:40.189978 1419 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-lx5jn" podStartSLOduration=11.8161409 podStartE2EDuration="18.189959398s" podCreationTimestamp="2024-12-13 14:25:22 +0000 UTC" firstStartedPulling="2024-12-13 14:25:26.23375559 +0000 UTC m=+31.316249675" lastFinishedPulling="2024-12-13 14:25:32.607574087 +0000 UTC m=+37.690068173" observedRunningTime="2024-12-13 14:25:33.770534514 +0000 UTC m=+38.853028599" watchObservedRunningTime="2024-12-13 14:25:40.189959398 +0000 UTC m=+45.272453484" Dec 13 14:25:40.195807 systemd[1]: Created slice kubepods-besteffort-pod935615cd_ede6_4e0f_beea_8a8925ddfccb.slice. 
Dec 13 14:25:40.362813 kubelet[1419]: I1213 14:25:40.362699 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/935615cd-ede6-4e0f-beea-8a8925ddfccb-data\") pod \"nfs-server-provisioner-0\" (UID: \"935615cd-ede6-4e0f-beea-8a8925ddfccb\") " pod="default/nfs-server-provisioner-0" Dec 13 14:25:40.362813 kubelet[1419]: I1213 14:25:40.362790 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdf24\" (UniqueName: \"kubernetes.io/projected/935615cd-ede6-4e0f-beea-8a8925ddfccb-kube-api-access-vdf24\") pod \"nfs-server-provisioner-0\" (UID: \"935615cd-ede6-4e0f-beea-8a8925ddfccb\") " pod="default/nfs-server-provisioner-0" Dec 13 14:25:40.368005 kubelet[1419]: E1213 14:25:40.367915 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:40.498505 env[1211]: time="2024-12-13T14:25:40.498361015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:935615cd-ede6-4e0f-beea-8a8925ddfccb,Namespace:default,Attempt:0,}" Dec 13 14:25:40.532754 systemd-networkd[1039]: lxca7dec206856a: Link UP Dec 13 14:25:40.537762 kernel: eth0: renamed from tmpec009 Dec 13 14:25:40.547387 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:25:40.547454 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxca7dec206856a: link becomes ready Dec 13 14:25:40.547672 systemd-networkd[1039]: lxca7dec206856a: Gained carrier Dec 13 14:25:40.763226 env[1211]: time="2024-12-13T14:25:40.763055119Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:25:40.763226 env[1211]: time="2024-12-13T14:25:40.763096931Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:25:40.763226 env[1211]: time="2024-12-13T14:25:40.763108384Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:25:40.763460 env[1211]: time="2024-12-13T14:25:40.763387112Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec009b077f258b0931a9b43e1f3ca69a387fd1a7d14a269ec1e0f3703235a008 pid=2635 runtime=io.containerd.runc.v2 Dec 13 14:25:40.778106 systemd[1]: Started cri-containerd-ec009b077f258b0931a9b43e1f3ca69a387fd1a7d14a269ec1e0f3703235a008.scope. Dec 13 14:25:40.798402 systemd-resolved[1152]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 14:25:40.828567 env[1211]: time="2024-12-13T14:25:40.828494851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:935615cd-ede6-4e0f-beea-8a8925ddfccb,Namespace:default,Attempt:0,} returns sandbox id \"ec009b077f258b0931a9b43e1f3ca69a387fd1a7d14a269ec1e0f3703235a008\"" Dec 13 14:25:40.830253 env[1211]: time="2024-12-13T14:25:40.830194480Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Dec 13 14:25:41.368446 kubelet[1419]: E1213 14:25:41.368372 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:42.174615 systemd-networkd[1039]: lxca7dec206856a: Gained IPv6LL Dec 13 14:25:42.369079 kubelet[1419]: E1213 14:25:42.369024 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:43.369321 kubelet[1419]: E1213 14:25:43.369223 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:43.691782 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2371336054.mount: Deactivated successfully. Dec 13 14:25:44.370214 kubelet[1419]: E1213 14:25:44.370130 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:45.371163 kubelet[1419]: E1213 14:25:45.371080 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:45.965086 env[1211]: time="2024-12-13T14:25:45.964986420Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:25:45.967336 env[1211]: time="2024-12-13T14:25:45.967293355Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:25:45.969285 env[1211]: time="2024-12-13T14:25:45.969234748Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:25:45.971305 env[1211]: time="2024-12-13T14:25:45.971262480Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:25:45.972350 env[1211]: time="2024-12-13T14:25:45.972280048Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Dec 13 14:25:45.974706 env[1211]: time="2024-12-13T14:25:45.974657028Z" level=info msg="CreateContainer within sandbox 
\"ec009b077f258b0931a9b43e1f3ca69a387fd1a7d14a269ec1e0f3703235a008\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Dec 13 14:25:45.989114 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount651350920.mount: Deactivated successfully. Dec 13 14:25:45.990086 env[1211]: time="2024-12-13T14:25:45.990034419Z" level=info msg="CreateContainer within sandbox \"ec009b077f258b0931a9b43e1f3ca69a387fd1a7d14a269ec1e0f3703235a008\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"24915694856669aae1ad676a128440f9be758bf9120b6ccf6d9fe1d3580c7aaa\"" Dec 13 14:25:45.990590 env[1211]: time="2024-12-13T14:25:45.990545333Z" level=info msg="StartContainer for \"24915694856669aae1ad676a128440f9be758bf9120b6ccf6d9fe1d3580c7aaa\"" Dec 13 14:25:46.010834 systemd[1]: run-containerd-runc-k8s.io-24915694856669aae1ad676a128440f9be758bf9120b6ccf6d9fe1d3580c7aaa-runc.noC3w7.mount: Deactivated successfully. Dec 13 14:25:46.012365 systemd[1]: Started cri-containerd-24915694856669aae1ad676a128440f9be758bf9120b6ccf6d9fe1d3580c7aaa.scope. 
Dec 13 14:25:46.216169 env[1211]: time="2024-12-13T14:25:46.216015725Z" level=info msg="StartContainer for \"24915694856669aae1ad676a128440f9be758bf9120b6ccf6d9fe1d3580c7aaa\" returns successfully" Dec 13 14:25:46.371855 kubelet[1419]: E1213 14:25:46.371758 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:46.797090 kubelet[1419]: I1213 14:25:46.797014 1419 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.6536794750000001 podStartE2EDuration="6.796992366s" podCreationTimestamp="2024-12-13 14:25:40 +0000 UTC" firstStartedPulling="2024-12-13 14:25:40.829927564 +0000 UTC m=+45.912421649" lastFinishedPulling="2024-12-13 14:25:45.973240455 +0000 UTC m=+51.055734540" observedRunningTime="2024-12-13 14:25:46.796268412 +0000 UTC m=+51.878762497" watchObservedRunningTime="2024-12-13 14:25:46.796992366 +0000 UTC m=+51.879486451" Dec 13 14:25:47.372522 kubelet[1419]: E1213 14:25:47.372393 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:48.373007 kubelet[1419]: E1213 14:25:48.372918 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:49.374148 kubelet[1419]: E1213 14:25:49.374073 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:50.374270 kubelet[1419]: E1213 14:25:50.374221 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:51.374857 kubelet[1419]: E1213 14:25:51.374804 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:52.375346 kubelet[1419]: E1213 14:25:52.375307 1419 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:53.375855 kubelet[1419]: E1213 14:25:53.375763 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:54.376679 kubelet[1419]: E1213 14:25:54.376591 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:55.338661 kubelet[1419]: E1213 14:25:55.338603 1419 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:55.377668 kubelet[1419]: E1213 14:25:55.377604 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:55.709213 systemd[1]: Created slice kubepods-besteffort-poddd732c33_addf_46fc_a92a_226d7aaa119f.slice. Dec 13 14:25:55.837135 kubelet[1419]: I1213 14:25:55.837056 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spgd6\" (UniqueName: \"kubernetes.io/projected/dd732c33-addf-46fc-a92a-226d7aaa119f-kube-api-access-spgd6\") pod \"test-pod-1\" (UID: \"dd732c33-addf-46fc-a92a-226d7aaa119f\") " pod="default/test-pod-1" Dec 13 14:25:55.837135 kubelet[1419]: I1213 14:25:55.837117 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9c5562df-61ed-4b9f-9c84-18db3c7e45f3\" (UniqueName: \"kubernetes.io/nfs/dd732c33-addf-46fc-a92a-226d7aaa119f-pvc-9c5562df-61ed-4b9f-9c84-18db3c7e45f3\") pod \"test-pod-1\" (UID: \"dd732c33-addf-46fc-a92a-226d7aaa119f\") " pod="default/test-pod-1" Dec 13 14:25:55.962762 kernel: FS-Cache: Loaded Dec 13 14:25:56.013194 kernel: RPC: Registered named UNIX socket transport module. Dec 13 14:25:56.013370 kernel: RPC: Registered udp transport module. Dec 13 14:25:56.013397 kernel: RPC: Registered tcp transport module. 
Dec 13 14:25:56.014084 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Dec 13 14:25:56.078761 kernel: FS-Cache: Netfs 'nfs' registered for caching Dec 13 14:25:56.277357 kernel: NFS: Registering the id_resolver key type Dec 13 14:25:56.277560 kernel: Key type id_resolver registered Dec 13 14:25:56.277596 kernel: Key type id_legacy registered Dec 13 14:25:56.304508 nfsidmap[2756]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Dec 13 14:25:56.308084 nfsidmap[2759]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Dec 13 14:25:56.378834 kubelet[1419]: E1213 14:25:56.378774 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:56.611994 env[1211]: time="2024-12-13T14:25:56.611847519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:dd732c33-addf-46fc-a92a-226d7aaa119f,Namespace:default,Attempt:0,}" Dec 13 14:25:56.644076 systemd-networkd[1039]: lxc381fbedead17: Link UP Dec 13 14:25:56.650761 kernel: eth0: renamed from tmp93162 Dec 13 14:25:56.664669 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:25:56.664814 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc381fbedead17: link becomes ready Dec 13 14:25:56.665539 systemd-networkd[1039]: lxc381fbedead17: Gained carrier Dec 13 14:25:56.855455 env[1211]: time="2024-12-13T14:25:56.855327717Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:25:56.855455 env[1211]: time="2024-12-13T14:25:56.855395950Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:25:56.855455 env[1211]: time="2024-12-13T14:25:56.855410157Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:25:56.855772 env[1211]: time="2024-12-13T14:25:56.855629644Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9316212af3e35ca7c5d93fe90cbfedb28867aff48c84c49f84acf325966d836c pid=2793 runtime=io.containerd.runc.v2 Dec 13 14:25:56.868449 systemd[1]: Started cri-containerd-9316212af3e35ca7c5d93fe90cbfedb28867aff48c84c49f84acf325966d836c.scope. Dec 13 14:25:56.884508 systemd-resolved[1152]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 14:25:56.907372 env[1211]: time="2024-12-13T14:25:56.907315029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:dd732c33-addf-46fc-a92a-226d7aaa119f,Namespace:default,Attempt:0,} returns sandbox id \"9316212af3e35ca7c5d93fe90cbfedb28867aff48c84c49f84acf325966d836c\"" Dec 13 14:25:56.908945 env[1211]: time="2024-12-13T14:25:56.908920951Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 14:25:57.301244 env[1211]: time="2024-12-13T14:25:57.301152011Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:25:57.303310 env[1211]: time="2024-12-13T14:25:57.303245257Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:25:57.305033 env[1211]: time="2024-12-13T14:25:57.304992881Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Dec 13 14:25:57.306536 env[1211]: time="2024-12-13T14:25:57.306479901Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:25:57.307369 env[1211]: time="2024-12-13T14:25:57.307336875Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 14:25:57.309780 env[1211]: time="2024-12-13T14:25:57.309749342Z" level=info msg="CreateContainer within sandbox \"9316212af3e35ca7c5d93fe90cbfedb28867aff48c84c49f84acf325966d836c\" for container &ContainerMetadata{Name:test,Attempt:0,}" Dec 13 14:25:57.325613 env[1211]: time="2024-12-13T14:25:57.325556339Z" level=info msg="CreateContainer within sandbox \"9316212af3e35ca7c5d93fe90cbfedb28867aff48c84c49f84acf325966d836c\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"1c35bdb26aa4f3b7e2d0d5a1948f48c957529c1924c90edd285822e526ca1f5f\"" Dec 13 14:25:57.326140 env[1211]: time="2024-12-13T14:25:57.326114222Z" level=info msg="StartContainer for \"1c35bdb26aa4f3b7e2d0d5a1948f48c957529c1924c90edd285822e526ca1f5f\"" Dec 13 14:25:57.343787 systemd[1]: Started cri-containerd-1c35bdb26aa4f3b7e2d0d5a1948f48c957529c1924c90edd285822e526ca1f5f.scope. 
Dec 13 14:25:57.370881 env[1211]: time="2024-12-13T14:25:57.370827735Z" level=info msg="StartContainer for \"1c35bdb26aa4f3b7e2d0d5a1948f48c957529c1924c90edd285822e526ca1f5f\" returns successfully" Dec 13 14:25:57.379753 kubelet[1419]: E1213 14:25:57.379691 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:57.817826 kubelet[1419]: I1213 14:25:57.817756 1419 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=17.417822992 podStartE2EDuration="17.817706508s" podCreationTimestamp="2024-12-13 14:25:40 +0000 UTC" firstStartedPulling="2024-12-13 14:25:56.908518849 +0000 UTC m=+61.991012934" lastFinishedPulling="2024-12-13 14:25:57.308402365 +0000 UTC m=+62.390896450" observedRunningTime="2024-12-13 14:25:57.817691499 +0000 UTC m=+62.900185584" watchObservedRunningTime="2024-12-13 14:25:57.817706508 +0000 UTC m=+62.900200593" Dec 13 14:25:57.979980 systemd-networkd[1039]: lxc381fbedead17: Gained IPv6LL Dec 13 14:25:58.380183 kubelet[1419]: E1213 14:25:58.380121 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:59.380410 kubelet[1419]: E1213 14:25:59.380327 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:00.381390 kubelet[1419]: E1213 14:26:00.381300 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:01.382110 kubelet[1419]: E1213 14:26:01.382030 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:02.383033 kubelet[1419]: E1213 14:26:02.382956 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:03.353825 env[1211]: 
time="2024-12-13T14:26:03.353710396Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 14:26:03.359390 env[1211]: time="2024-12-13T14:26:03.359339133Z" level=info msg="StopContainer for \"226ff5b657c3418066fd35a5555d6a849b725bc3e38910d104548ca6a5df8ad9\" with timeout 2 (s)" Dec 13 14:26:03.359637 env[1211]: time="2024-12-13T14:26:03.359609495Z" level=info msg="Stop container \"226ff5b657c3418066fd35a5555d6a849b725bc3e38910d104548ca6a5df8ad9\" with signal terminated" Dec 13 14:26:03.366355 systemd-networkd[1039]: lxc_health: Link DOWN Dec 13 14:26:03.366363 systemd-networkd[1039]: lxc_health: Lost carrier Dec 13 14:26:03.383505 kubelet[1419]: E1213 14:26:03.383477 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:03.400248 systemd[1]: cri-containerd-226ff5b657c3418066fd35a5555d6a849b725bc3e38910d104548ca6a5df8ad9.scope: Deactivated successfully. Dec 13 14:26:03.400655 systemd[1]: cri-containerd-226ff5b657c3418066fd35a5555d6a849b725bc3e38910d104548ca6a5df8ad9.scope: Consumed 7.071s CPU time. Dec 13 14:26:03.418050 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-226ff5b657c3418066fd35a5555d6a849b725bc3e38910d104548ca6a5df8ad9-rootfs.mount: Deactivated successfully. 
Dec 13 14:26:03.535472 env[1211]: time="2024-12-13T14:26:03.535413545Z" level=info msg="shim disconnected" id=226ff5b657c3418066fd35a5555d6a849b725bc3e38910d104548ca6a5df8ad9 Dec 13 14:26:03.535472 env[1211]: time="2024-12-13T14:26:03.535462569Z" level=warning msg="cleaning up after shim disconnected" id=226ff5b657c3418066fd35a5555d6a849b725bc3e38910d104548ca6a5df8ad9 namespace=k8s.io Dec 13 14:26:03.535472 env[1211]: time="2024-12-13T14:26:03.535471217Z" level=info msg="cleaning up dead shim" Dec 13 14:26:03.542121 env[1211]: time="2024-12-13T14:26:03.542049949Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:26:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2925 runtime=io.containerd.runc.v2\n" Dec 13 14:26:03.600818 env[1211]: time="2024-12-13T14:26:03.600748577Z" level=info msg="StopContainer for \"226ff5b657c3418066fd35a5555d6a849b725bc3e38910d104548ca6a5df8ad9\" returns successfully" Dec 13 14:26:03.601424 env[1211]: time="2024-12-13T14:26:03.601397240Z" level=info msg="StopPodSandbox for \"da5c4e3916f55a16024f5008ebc9bcea4692ad3b69387ddf95ece32ea81846c5\"" Dec 13 14:26:03.601477 env[1211]: time="2024-12-13T14:26:03.601460742Z" level=info msg="Container to stop \"7b344b7dfe2f3820e759d82369ce0ff7e05c98d0a3d082414e09970e1b1505fb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:26:03.601505 env[1211]: time="2024-12-13T14:26:03.601476263Z" level=info msg="Container to stop \"df9882ab366e55bb8470085f8b8f62e9b33e94039b0d518a2eea2be210bc826b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:26:03.601505 env[1211]: time="2024-12-13T14:26:03.601489398Z" level=info msg="Container to stop \"3d1f263590c9e05ecf0e40fc6af4baae3befd5874125ea0f55eafce4d60b9e00\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:26:03.601505 env[1211]: time="2024-12-13T14:26:03.601500660Z" level=info msg="Container to stop 
\"dd381fb3658e729eeffc24e50983764cdd614e659898a7011661becc1a371e8b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:26:03.601611 env[1211]: time="2024-12-13T14:26:03.601509206Z" level=info msg="Container to stop \"226ff5b657c3418066fd35a5555d6a849b725bc3e38910d104548ca6a5df8ad9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:26:03.603478 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-da5c4e3916f55a16024f5008ebc9bcea4692ad3b69387ddf95ece32ea81846c5-shm.mount: Deactivated successfully. Dec 13 14:26:03.607464 systemd[1]: cri-containerd-da5c4e3916f55a16024f5008ebc9bcea4692ad3b69387ddf95ece32ea81846c5.scope: Deactivated successfully. Dec 13 14:26:03.622754 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-da5c4e3916f55a16024f5008ebc9bcea4692ad3b69387ddf95ece32ea81846c5-rootfs.mount: Deactivated successfully. Dec 13 14:26:03.733349 env[1211]: time="2024-12-13T14:26:03.733274618Z" level=info msg="shim disconnected" id=da5c4e3916f55a16024f5008ebc9bcea4692ad3b69387ddf95ece32ea81846c5 Dec 13 14:26:03.733349 env[1211]: time="2024-12-13T14:26:03.733340917Z" level=warning msg="cleaning up after shim disconnected" id=da5c4e3916f55a16024f5008ebc9bcea4692ad3b69387ddf95ece32ea81846c5 namespace=k8s.io Dec 13 14:26:03.733349 env[1211]: time="2024-12-13T14:26:03.733353651Z" level=info msg="cleaning up dead shim" Dec 13 14:26:03.740530 env[1211]: time="2024-12-13T14:26:03.740455815Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:26:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2955 runtime=io.containerd.runc.v2\n" Dec 13 14:26:03.740881 env[1211]: time="2024-12-13T14:26:03.740850727Z" level=info msg="TearDown network for sandbox \"da5c4e3916f55a16024f5008ebc9bcea4692ad3b69387ddf95ece32ea81846c5\" successfully" Dec 13 14:26:03.740920 env[1211]: time="2024-12-13T14:26:03.740881768Z" level=info msg="StopPodSandbox for 
\"da5c4e3916f55a16024f5008ebc9bcea4692ad3b69387ddf95ece32ea81846c5\" returns successfully" Dec 13 14:26:03.820525 kubelet[1419]: I1213 14:26:03.820496 1419 scope.go:117] "RemoveContainer" containerID="226ff5b657c3418066fd35a5555d6a849b725bc3e38910d104548ca6a5df8ad9" Dec 13 14:26:03.821598 env[1211]: time="2024-12-13T14:26:03.821563936Z" level=info msg="RemoveContainer for \"226ff5b657c3418066fd35a5555d6a849b725bc3e38910d104548ca6a5df8ad9\"" Dec 13 14:26:03.891410 kubelet[1419]: I1213 14:26:03.891226 1419 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/56ca637a-6110-49ff-90b0-6cddf7e7fb82-xtables-lock\") pod \"56ca637a-6110-49ff-90b0-6cddf7e7fb82\" (UID: \"56ca637a-6110-49ff-90b0-6cddf7e7fb82\") " Dec 13 14:26:03.891410 kubelet[1419]: I1213 14:26:03.891292 1419 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/56ca637a-6110-49ff-90b0-6cddf7e7fb82-cni-path\") pod \"56ca637a-6110-49ff-90b0-6cddf7e7fb82\" (UID: \"56ca637a-6110-49ff-90b0-6cddf7e7fb82\") " Dec 13 14:26:03.891410 kubelet[1419]: I1213 14:26:03.891324 1419 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/56ca637a-6110-49ff-90b0-6cddf7e7fb82-host-proc-sys-net\") pod \"56ca637a-6110-49ff-90b0-6cddf7e7fb82\" (UID: \"56ca637a-6110-49ff-90b0-6cddf7e7fb82\") " Dec 13 14:26:03.891410 kubelet[1419]: I1213 14:26:03.891351 1419 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/56ca637a-6110-49ff-90b0-6cddf7e7fb82-hostproc\") pod \"56ca637a-6110-49ff-90b0-6cddf7e7fb82\" (UID: \"56ca637a-6110-49ff-90b0-6cddf7e7fb82\") " Dec 13 14:26:03.891410 kubelet[1419]: I1213 14:26:03.891369 1419 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" 
(UniqueName: \"kubernetes.io/host-path/56ca637a-6110-49ff-90b0-6cddf7e7fb82-cilium-cgroup\") pod \"56ca637a-6110-49ff-90b0-6cddf7e7fb82\" (UID: \"56ca637a-6110-49ff-90b0-6cddf7e7fb82\") " Dec 13 14:26:03.891410 kubelet[1419]: I1213 14:26:03.891366 1419 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56ca637a-6110-49ff-90b0-6cddf7e7fb82-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "56ca637a-6110-49ff-90b0-6cddf7e7fb82" (UID: "56ca637a-6110-49ff-90b0-6cddf7e7fb82"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:26:03.891797 kubelet[1419]: I1213 14:26:03.891385 1419 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/56ca637a-6110-49ff-90b0-6cddf7e7fb82-host-proc-sys-kernel\") pod \"56ca637a-6110-49ff-90b0-6cddf7e7fb82\" (UID: \"56ca637a-6110-49ff-90b0-6cddf7e7fb82\") " Dec 13 14:26:03.891797 kubelet[1419]: I1213 14:26:03.891412 1419 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrxjf\" (UniqueName: \"kubernetes.io/projected/56ca637a-6110-49ff-90b0-6cddf7e7fb82-kube-api-access-jrxjf\") pod \"56ca637a-6110-49ff-90b0-6cddf7e7fb82\" (UID: \"56ca637a-6110-49ff-90b0-6cddf7e7fb82\") " Dec 13 14:26:03.891797 kubelet[1419]: I1213 14:26:03.891429 1419 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/56ca637a-6110-49ff-90b0-6cddf7e7fb82-bpf-maps\") pod \"56ca637a-6110-49ff-90b0-6cddf7e7fb82\" (UID: \"56ca637a-6110-49ff-90b0-6cddf7e7fb82\") " Dec 13 14:26:03.891797 kubelet[1419]: I1213 14:26:03.891447 1419 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/56ca637a-6110-49ff-90b0-6cddf7e7fb82-clustermesh-secrets\") pod \"56ca637a-6110-49ff-90b0-6cddf7e7fb82\" 
(UID: \"56ca637a-6110-49ff-90b0-6cddf7e7fb82\") " Dec 13 14:26:03.891797 kubelet[1419]: I1213 14:26:03.891464 1419 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/56ca637a-6110-49ff-90b0-6cddf7e7fb82-cilium-run\") pod \"56ca637a-6110-49ff-90b0-6cddf7e7fb82\" (UID: \"56ca637a-6110-49ff-90b0-6cddf7e7fb82\") " Dec 13 14:26:03.891797 kubelet[1419]: I1213 14:26:03.891480 1419 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/56ca637a-6110-49ff-90b0-6cddf7e7fb82-etc-cni-netd\") pod \"56ca637a-6110-49ff-90b0-6cddf7e7fb82\" (UID: \"56ca637a-6110-49ff-90b0-6cddf7e7fb82\") " Dec 13 14:26:03.892011 kubelet[1419]: I1213 14:26:03.891380 1419 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56ca637a-6110-49ff-90b0-6cddf7e7fb82-cni-path" (OuterVolumeSpecName: "cni-path") pod "56ca637a-6110-49ff-90b0-6cddf7e7fb82" (UID: "56ca637a-6110-49ff-90b0-6cddf7e7fb82"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:26:03.892011 kubelet[1419]: I1213 14:26:03.891497 1419 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/56ca637a-6110-49ff-90b0-6cddf7e7fb82-hubble-tls\") pod \"56ca637a-6110-49ff-90b0-6cddf7e7fb82\" (UID: \"56ca637a-6110-49ff-90b0-6cddf7e7fb82\") " Dec 13 14:26:03.892011 kubelet[1419]: I1213 14:26:03.891408 1419 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56ca637a-6110-49ff-90b0-6cddf7e7fb82-hostproc" (OuterVolumeSpecName: "hostproc") pod "56ca637a-6110-49ff-90b0-6cddf7e7fb82" (UID: "56ca637a-6110-49ff-90b0-6cddf7e7fb82"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:26:03.892011 kubelet[1419]: I1213 14:26:03.891515 1419 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/56ca637a-6110-49ff-90b0-6cddf7e7fb82-cilium-config-path\") pod \"56ca637a-6110-49ff-90b0-6cddf7e7fb82\" (UID: \"56ca637a-6110-49ff-90b0-6cddf7e7fb82\") " Dec 13 14:26:03.892011 kubelet[1419]: I1213 14:26:03.891531 1419 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/56ca637a-6110-49ff-90b0-6cddf7e7fb82-lib-modules\") pod \"56ca637a-6110-49ff-90b0-6cddf7e7fb82\" (UID: \"56ca637a-6110-49ff-90b0-6cddf7e7fb82\") " Dec 13 14:26:03.892011 kubelet[1419]: I1213 14:26:03.891564 1419 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/56ca637a-6110-49ff-90b0-6cddf7e7fb82-cni-path\") on node \"10.0.0.88\" DevicePath \"\"" Dec 13 14:26:03.892217 kubelet[1419]: I1213 14:26:03.891575 1419 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/56ca637a-6110-49ff-90b0-6cddf7e7fb82-hostproc\") on node \"10.0.0.88\" DevicePath \"\"" Dec 13 14:26:03.892396 kubelet[1419]: I1213 14:26:03.891425 1419 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56ca637a-6110-49ff-90b0-6cddf7e7fb82-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "56ca637a-6110-49ff-90b0-6cddf7e7fb82" (UID: "56ca637a-6110-49ff-90b0-6cddf7e7fb82"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:26:03.892479 kubelet[1419]: I1213 14:26:03.891449 1419 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56ca637a-6110-49ff-90b0-6cddf7e7fb82-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "56ca637a-6110-49ff-90b0-6cddf7e7fb82" (UID: "56ca637a-6110-49ff-90b0-6cddf7e7fb82"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:26:03.892479 kubelet[1419]: I1213 14:26:03.891465 1419 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56ca637a-6110-49ff-90b0-6cddf7e7fb82-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "56ca637a-6110-49ff-90b0-6cddf7e7fb82" (UID: "56ca637a-6110-49ff-90b0-6cddf7e7fb82"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:26:03.892479 kubelet[1419]: I1213 14:26:03.891485 1419 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56ca637a-6110-49ff-90b0-6cddf7e7fb82-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "56ca637a-6110-49ff-90b0-6cddf7e7fb82" (UID: "56ca637a-6110-49ff-90b0-6cddf7e7fb82"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:26:03.892479 kubelet[1419]: I1213 14:26:03.891606 1419 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56ca637a-6110-49ff-90b0-6cddf7e7fb82-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "56ca637a-6110-49ff-90b0-6cddf7e7fb82" (UID: "56ca637a-6110-49ff-90b0-6cddf7e7fb82"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:26:03.892479 kubelet[1419]: I1213 14:26:03.892410 1419 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56ca637a-6110-49ff-90b0-6cddf7e7fb82-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "56ca637a-6110-49ff-90b0-6cddf7e7fb82" (UID: "56ca637a-6110-49ff-90b0-6cddf7e7fb82"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:26:03.892660 kubelet[1419]: I1213 14:26:03.892432 1419 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56ca637a-6110-49ff-90b0-6cddf7e7fb82-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "56ca637a-6110-49ff-90b0-6cddf7e7fb82" (UID: "56ca637a-6110-49ff-90b0-6cddf7e7fb82"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:26:03.894445 kubelet[1419]: I1213 14:26:03.894411 1419 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56ca637a-6110-49ff-90b0-6cddf7e7fb82-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "56ca637a-6110-49ff-90b0-6cddf7e7fb82" (UID: "56ca637a-6110-49ff-90b0-6cddf7e7fb82"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:26:03.895087 kubelet[1419]: I1213 14:26:03.895059 1419 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56ca637a-6110-49ff-90b0-6cddf7e7fb82-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "56ca637a-6110-49ff-90b0-6cddf7e7fb82" (UID: "56ca637a-6110-49ff-90b0-6cddf7e7fb82"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:26:03.895180 kubelet[1419]: I1213 14:26:03.895133 1419 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56ca637a-6110-49ff-90b0-6cddf7e7fb82-kube-api-access-jrxjf" (OuterVolumeSpecName: "kube-api-access-jrxjf") pod "56ca637a-6110-49ff-90b0-6cddf7e7fb82" (UID: "56ca637a-6110-49ff-90b0-6cddf7e7fb82"). InnerVolumeSpecName "kube-api-access-jrxjf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:26:03.895332 kubelet[1419]: I1213 14:26:03.895306 1419 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56ca637a-6110-49ff-90b0-6cddf7e7fb82-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "56ca637a-6110-49ff-90b0-6cddf7e7fb82" (UID: "56ca637a-6110-49ff-90b0-6cddf7e7fb82"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:26:03.896425 systemd[1]: var-lib-kubelet-pods-56ca637a\x2d6110\x2d49ff\x2d90b0\x2d6cddf7e7fb82-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djrxjf.mount: Deactivated successfully. Dec 13 14:26:03.896518 systemd[1]: var-lib-kubelet-pods-56ca637a\x2d6110\x2d49ff\x2d90b0\x2d6cddf7e7fb82-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 14:26:03.896591 systemd[1]: var-lib-kubelet-pods-56ca637a\x2d6110\x2d49ff\x2d90b0\x2d6cddf7e7fb82-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Dec 13 14:26:03.970379 env[1211]: time="2024-12-13T14:26:03.970305661Z" level=info msg="RemoveContainer for \"226ff5b657c3418066fd35a5555d6a849b725bc3e38910d104548ca6a5df8ad9\" returns successfully" Dec 13 14:26:03.970706 kubelet[1419]: I1213 14:26:03.970652 1419 scope.go:117] "RemoveContainer" containerID="df9882ab366e55bb8470085f8b8f62e9b33e94039b0d518a2eea2be210bc826b" Dec 13 14:26:03.971754 env[1211]: time="2024-12-13T14:26:03.971703081Z" level=info msg="RemoveContainer for \"df9882ab366e55bb8470085f8b8f62e9b33e94039b0d518a2eea2be210bc826b\"" Dec 13 14:26:03.992075 kubelet[1419]: I1213 14:26:03.992025 1419 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/56ca637a-6110-49ff-90b0-6cddf7e7fb82-xtables-lock\") on node \"10.0.0.88\" DevicePath \"\"" Dec 13 14:26:03.992075 kubelet[1419]: I1213 14:26:03.992054 1419 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/56ca637a-6110-49ff-90b0-6cddf7e7fb82-host-proc-sys-net\") on node \"10.0.0.88\" DevicePath \"\"" Dec 13 14:26:03.992075 kubelet[1419]: I1213 14:26:03.992066 1419 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-jrxjf\" (UniqueName: \"kubernetes.io/projected/56ca637a-6110-49ff-90b0-6cddf7e7fb82-kube-api-access-jrxjf\") on node \"10.0.0.88\" DevicePath \"\"" Dec 13 14:26:03.992075 kubelet[1419]: I1213 14:26:03.992076 1419 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/56ca637a-6110-49ff-90b0-6cddf7e7fb82-bpf-maps\") on node \"10.0.0.88\" DevicePath \"\"" Dec 13 14:26:03.992323 kubelet[1419]: I1213 14:26:03.992085 1419 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/56ca637a-6110-49ff-90b0-6cddf7e7fb82-clustermesh-secrets\") on node \"10.0.0.88\" DevicePath \"\"" Dec 13 14:26:03.992323 kubelet[1419]: I1213 14:26:03.992094 1419 
reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/56ca637a-6110-49ff-90b0-6cddf7e7fb82-cilium-cgroup\") on node \"10.0.0.88\" DevicePath \"\"" Dec 13 14:26:03.992323 kubelet[1419]: I1213 14:26:03.992103 1419 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/56ca637a-6110-49ff-90b0-6cddf7e7fb82-host-proc-sys-kernel\") on node \"10.0.0.88\" DevicePath \"\"" Dec 13 14:26:03.992323 kubelet[1419]: I1213 14:26:03.992111 1419 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/56ca637a-6110-49ff-90b0-6cddf7e7fb82-lib-modules\") on node \"10.0.0.88\" DevicePath \"\"" Dec 13 14:26:03.992323 kubelet[1419]: I1213 14:26:03.992128 1419 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/56ca637a-6110-49ff-90b0-6cddf7e7fb82-cilium-run\") on node \"10.0.0.88\" DevicePath \"\"" Dec 13 14:26:03.992323 kubelet[1419]: I1213 14:26:03.992150 1419 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/56ca637a-6110-49ff-90b0-6cddf7e7fb82-etc-cni-netd\") on node \"10.0.0.88\" DevicePath \"\"" Dec 13 14:26:03.992323 kubelet[1419]: I1213 14:26:03.992159 1419 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/56ca637a-6110-49ff-90b0-6cddf7e7fb82-hubble-tls\") on node \"10.0.0.88\" DevicePath \"\"" Dec 13 14:26:03.992323 kubelet[1419]: I1213 14:26:03.992167 1419 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/56ca637a-6110-49ff-90b0-6cddf7e7fb82-cilium-config-path\") on node \"10.0.0.88\" DevicePath \"\"" Dec 13 14:26:04.048144 env[1211]: time="2024-12-13T14:26:04.048071622Z" level=info msg="RemoveContainer for 
\"df9882ab366e55bb8470085f8b8f62e9b33e94039b0d518a2eea2be210bc826b\" returns successfully" Dec 13 14:26:04.048479 kubelet[1419]: I1213 14:26:04.048448 1419 scope.go:117] "RemoveContainer" containerID="dd381fb3658e729eeffc24e50983764cdd614e659898a7011661becc1a371e8b" Dec 13 14:26:04.049758 env[1211]: time="2024-12-13T14:26:04.049736627Z" level=info msg="RemoveContainer for \"dd381fb3658e729eeffc24e50983764cdd614e659898a7011661becc1a371e8b\"" Dec 13 14:26:04.099291 env[1211]: time="2024-12-13T14:26:04.099205403Z" level=info msg="RemoveContainer for \"dd381fb3658e729eeffc24e50983764cdd614e659898a7011661becc1a371e8b\" returns successfully" Dec 13 14:26:04.099525 kubelet[1419]: I1213 14:26:04.099494 1419 scope.go:117] "RemoveContainer" containerID="3d1f263590c9e05ecf0e40fc6af4baae3befd5874125ea0f55eafce4d60b9e00" Dec 13 14:26:04.100919 env[1211]: time="2024-12-13T14:26:04.100881480Z" level=info msg="RemoveContainer for \"3d1f263590c9e05ecf0e40fc6af4baae3befd5874125ea0f55eafce4d60b9e00\"" Dec 13 14:26:04.125603 systemd[1]: Removed slice kubepods-burstable-pod56ca637a_6110_49ff_90b0_6cddf7e7fb82.slice. Dec 13 14:26:04.125690 systemd[1]: kubepods-burstable-pod56ca637a_6110_49ff_90b0_6cddf7e7fb82.slice: Consumed 7.359s CPU time. 
Dec 13 14:26:04.136311 env[1211]: time="2024-12-13T14:26:04.136230504Z" level=info msg="RemoveContainer for \"3d1f263590c9e05ecf0e40fc6af4baae3befd5874125ea0f55eafce4d60b9e00\" returns successfully" Dec 13 14:26:04.136648 kubelet[1419]: I1213 14:26:04.136600 1419 scope.go:117] "RemoveContainer" containerID="7b344b7dfe2f3820e759d82369ce0ff7e05c98d0a3d082414e09970e1b1505fb" Dec 13 14:26:04.138225 env[1211]: time="2024-12-13T14:26:04.138182183Z" level=info msg="RemoveContainer for \"7b344b7dfe2f3820e759d82369ce0ff7e05c98d0a3d082414e09970e1b1505fb\"" Dec 13 14:26:04.145076 env[1211]: time="2024-12-13T14:26:04.144934330Z" level=info msg="RemoveContainer for \"7b344b7dfe2f3820e759d82369ce0ff7e05c98d0a3d082414e09970e1b1505fb\" returns successfully" Dec 13 14:26:04.145237 kubelet[1419]: I1213 14:26:04.145208 1419 scope.go:117] "RemoveContainer" containerID="226ff5b657c3418066fd35a5555d6a849b725bc3e38910d104548ca6a5df8ad9" Dec 13 14:26:04.145585 env[1211]: time="2024-12-13T14:26:04.145495353Z" level=error msg="ContainerStatus for \"226ff5b657c3418066fd35a5555d6a849b725bc3e38910d104548ca6a5df8ad9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"226ff5b657c3418066fd35a5555d6a849b725bc3e38910d104548ca6a5df8ad9\": not found" Dec 13 14:26:04.145878 kubelet[1419]: E1213 14:26:04.145846 1419 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"226ff5b657c3418066fd35a5555d6a849b725bc3e38910d104548ca6a5df8ad9\": not found" containerID="226ff5b657c3418066fd35a5555d6a849b725bc3e38910d104548ca6a5df8ad9" Dec 13 14:26:04.145968 kubelet[1419]: I1213 14:26:04.145879 1419 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"226ff5b657c3418066fd35a5555d6a849b725bc3e38910d104548ca6a5df8ad9"} err="failed to get container status \"226ff5b657c3418066fd35a5555d6a849b725bc3e38910d104548ca6a5df8ad9\": rpc error: 
code = NotFound desc = an error occurred when try to find container \"226ff5b657c3418066fd35a5555d6a849b725bc3e38910d104548ca6a5df8ad9\": not found" Dec 13 14:26:04.146012 kubelet[1419]: I1213 14:26:04.145968 1419 scope.go:117] "RemoveContainer" containerID="df9882ab366e55bb8470085f8b8f62e9b33e94039b0d518a2eea2be210bc826b" Dec 13 14:26:04.146161 env[1211]: time="2024-12-13T14:26:04.146102676Z" level=error msg="ContainerStatus for \"df9882ab366e55bb8470085f8b8f62e9b33e94039b0d518a2eea2be210bc826b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"df9882ab366e55bb8470085f8b8f62e9b33e94039b0d518a2eea2be210bc826b\": not found" Dec 13 14:26:04.146297 kubelet[1419]: E1213 14:26:04.146273 1419 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"df9882ab366e55bb8470085f8b8f62e9b33e94039b0d518a2eea2be210bc826b\": not found" containerID="df9882ab366e55bb8470085f8b8f62e9b33e94039b0d518a2eea2be210bc826b" Dec 13 14:26:04.146369 kubelet[1419]: I1213 14:26:04.146297 1419 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"df9882ab366e55bb8470085f8b8f62e9b33e94039b0d518a2eea2be210bc826b"} err="failed to get container status \"df9882ab366e55bb8470085f8b8f62e9b33e94039b0d518a2eea2be210bc826b\": rpc error: code = NotFound desc = an error occurred when try to find container \"df9882ab366e55bb8470085f8b8f62e9b33e94039b0d518a2eea2be210bc826b\": not found" Dec 13 14:26:04.146369 kubelet[1419]: I1213 14:26:04.146313 1419 scope.go:117] "RemoveContainer" containerID="dd381fb3658e729eeffc24e50983764cdd614e659898a7011661becc1a371e8b" Dec 13 14:26:04.146483 env[1211]: time="2024-12-13T14:26:04.146439516Z" level=error msg="ContainerStatus for \"dd381fb3658e729eeffc24e50983764cdd614e659898a7011661becc1a371e8b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"dd381fb3658e729eeffc24e50983764cdd614e659898a7011661becc1a371e8b\": not found" Dec 13 14:26:04.146629 kubelet[1419]: E1213 14:26:04.146607 1419 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dd381fb3658e729eeffc24e50983764cdd614e659898a7011661becc1a371e8b\": not found" containerID="dd381fb3658e729eeffc24e50983764cdd614e659898a7011661becc1a371e8b" Dec 13 14:26:04.146769 kubelet[1419]: I1213 14:26:04.146717 1419 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dd381fb3658e729eeffc24e50983764cdd614e659898a7011661becc1a371e8b"} err="failed to get container status \"dd381fb3658e729eeffc24e50983764cdd614e659898a7011661becc1a371e8b\": rpc error: code = NotFound desc = an error occurred when try to find container \"dd381fb3658e729eeffc24e50983764cdd614e659898a7011661becc1a371e8b\": not found" Dec 13 14:26:04.146850 kubelet[1419]: I1213 14:26:04.146769 1419 scope.go:117] "RemoveContainer" containerID="3d1f263590c9e05ecf0e40fc6af4baae3befd5874125ea0f55eafce4d60b9e00" Dec 13 14:26:04.147022 env[1211]: time="2024-12-13T14:26:04.146952476Z" level=error msg="ContainerStatus for \"3d1f263590c9e05ecf0e40fc6af4baae3befd5874125ea0f55eafce4d60b9e00\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3d1f263590c9e05ecf0e40fc6af4baae3befd5874125ea0f55eafce4d60b9e00\": not found" Dec 13 14:26:04.147569 kubelet[1419]: E1213 14:26:04.147109 1419 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3d1f263590c9e05ecf0e40fc6af4baae3befd5874125ea0f55eafce4d60b9e00\": not found" containerID="3d1f263590c9e05ecf0e40fc6af4baae3befd5874125ea0f55eafce4d60b9e00" Dec 13 14:26:04.147569 kubelet[1419]: I1213 14:26:04.147138 1419 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"3d1f263590c9e05ecf0e40fc6af4baae3befd5874125ea0f55eafce4d60b9e00"} err="failed to get container status \"3d1f263590c9e05ecf0e40fc6af4baae3befd5874125ea0f55eafce4d60b9e00\": rpc error: code = NotFound desc = an error occurred when try to find container \"3d1f263590c9e05ecf0e40fc6af4baae3befd5874125ea0f55eafce4d60b9e00\": not found" Dec 13 14:26:04.147569 kubelet[1419]: I1213 14:26:04.147167 1419 scope.go:117] "RemoveContainer" containerID="7b344b7dfe2f3820e759d82369ce0ff7e05c98d0a3d082414e09970e1b1505fb" Dec 13 14:26:04.147569 kubelet[1419]: E1213 14:26:04.147462 1419 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7b344b7dfe2f3820e759d82369ce0ff7e05c98d0a3d082414e09970e1b1505fb\": not found" containerID="7b344b7dfe2f3820e759d82369ce0ff7e05c98d0a3d082414e09970e1b1505fb" Dec 13 14:26:04.147569 kubelet[1419]: I1213 14:26:04.147490 1419 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7b344b7dfe2f3820e759d82369ce0ff7e05c98d0a3d082414e09970e1b1505fb"} err="failed to get container status \"7b344b7dfe2f3820e759d82369ce0ff7e05c98d0a3d082414e09970e1b1505fb\": rpc error: code = NotFound desc = an error occurred when try to find container \"7b344b7dfe2f3820e759d82369ce0ff7e05c98d0a3d082414e09970e1b1505fb\": not found" Dec 13 14:26:04.147770 env[1211]: time="2024-12-13T14:26:04.147354082Z" level=error msg="ContainerStatus for \"7b344b7dfe2f3820e759d82369ce0ff7e05c98d0a3d082414e09970e1b1505fb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7b344b7dfe2f3820e759d82369ce0ff7e05c98d0a3d082414e09970e1b1505fb\": not found" Dec 13 14:26:04.384218 kubelet[1419]: E1213 14:26:04.384146 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:05.385037 kubelet[1419]: E1213 
14:26:05.384928 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:05.428095 kubelet[1419]: E1213 14:26:05.428044 1419 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 14:26:05.470481 kubelet[1419]: I1213 14:26:05.470423 1419 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56ca637a-6110-49ff-90b0-6cddf7e7fb82" path="/var/lib/kubelet/pods/56ca637a-6110-49ff-90b0-6cddf7e7fb82/volumes" Dec 13 14:26:06.385667 kubelet[1419]: E1213 14:26:06.385550 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:06.438833 kubelet[1419]: E1213 14:26:06.438774 1419 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="56ca637a-6110-49ff-90b0-6cddf7e7fb82" containerName="apply-sysctl-overwrites" Dec 13 14:26:06.438833 kubelet[1419]: E1213 14:26:06.438810 1419 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="56ca637a-6110-49ff-90b0-6cddf7e7fb82" containerName="mount-bpf-fs" Dec 13 14:26:06.438833 kubelet[1419]: E1213 14:26:06.438818 1419 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="56ca637a-6110-49ff-90b0-6cddf7e7fb82" containerName="cilium-agent" Dec 13 14:26:06.438833 kubelet[1419]: E1213 14:26:06.438826 1419 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="56ca637a-6110-49ff-90b0-6cddf7e7fb82" containerName="mount-cgroup" Dec 13 14:26:06.438833 kubelet[1419]: E1213 14:26:06.438832 1419 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="56ca637a-6110-49ff-90b0-6cddf7e7fb82" containerName="clean-cilium-state" Dec 13 14:26:06.438833 kubelet[1419]: I1213 14:26:06.438853 1419 memory_manager.go:354] "RemoveStaleState removing state" podUID="56ca637a-6110-49ff-90b0-6cddf7e7fb82" 
containerName="cilium-agent" Dec 13 14:26:06.444944 systemd[1]: Created slice kubepods-besteffort-pod6036f170_3e0c_444b_9d9d_4c2d01dc27f9.slice. Dec 13 14:26:06.456308 systemd[1]: Created slice kubepods-burstable-pod8317ad18_81dd_4f94_81bd_8a569c10bae8.slice. Dec 13 14:26:06.527515 kubelet[1419]: E1213 14:26:06.527424 1419 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-tdx6d lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-wx4pl" podUID="8317ad18-81dd-4f94-81bd-8a569c10bae8" Dec 13 14:26:06.608115 kubelet[1419]: I1213 14:26:06.608000 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8317ad18-81dd-4f94-81bd-8a569c10bae8-cni-path\") pod \"cilium-wx4pl\" (UID: \"8317ad18-81dd-4f94-81bd-8a569c10bae8\") " pod="kube-system/cilium-wx4pl" Dec 13 14:26:06.608115 kubelet[1419]: I1213 14:26:06.608053 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8317ad18-81dd-4f94-81bd-8a569c10bae8-lib-modules\") pod \"cilium-wx4pl\" (UID: \"8317ad18-81dd-4f94-81bd-8a569c10bae8\") " pod="kube-system/cilium-wx4pl" Dec 13 14:26:06.608115 kubelet[1419]: I1213 14:26:06.608077 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8317ad18-81dd-4f94-81bd-8a569c10bae8-clustermesh-secrets\") pod \"cilium-wx4pl\" (UID: \"8317ad18-81dd-4f94-81bd-8a569c10bae8\") " pod="kube-system/cilium-wx4pl" Dec 13 14:26:06.608115 kubelet[1419]: I1213 14:26:06.608115 1419 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8317ad18-81dd-4f94-81bd-8a569c10bae8-host-proc-sys-kernel\") pod \"cilium-wx4pl\" (UID: \"8317ad18-81dd-4f94-81bd-8a569c10bae8\") " pod="kube-system/cilium-wx4pl" Dec 13 14:26:06.608430 kubelet[1419]: I1213 14:26:06.608158 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8317ad18-81dd-4f94-81bd-8a569c10bae8-hostproc\") pod \"cilium-wx4pl\" (UID: \"8317ad18-81dd-4f94-81bd-8a569c10bae8\") " pod="kube-system/cilium-wx4pl" Dec 13 14:26:06.608430 kubelet[1419]: I1213 14:26:06.608182 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8317ad18-81dd-4f94-81bd-8a569c10bae8-cilium-cgroup\") pod \"cilium-wx4pl\" (UID: \"8317ad18-81dd-4f94-81bd-8a569c10bae8\") " pod="kube-system/cilium-wx4pl" Dec 13 14:26:06.608430 kubelet[1419]: I1213 14:26:06.608200 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8317ad18-81dd-4f94-81bd-8a569c10bae8-xtables-lock\") pod \"cilium-wx4pl\" (UID: \"8317ad18-81dd-4f94-81bd-8a569c10bae8\") " pod="kube-system/cilium-wx4pl" Dec 13 14:26:06.608430 kubelet[1419]: I1213 14:26:06.608220 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8317ad18-81dd-4f94-81bd-8a569c10bae8-host-proc-sys-net\") pod \"cilium-wx4pl\" (UID: \"8317ad18-81dd-4f94-81bd-8a569c10bae8\") " pod="kube-system/cilium-wx4pl" Dec 13 14:26:06.608430 kubelet[1419]: I1213 14:26:06.608241 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/8317ad18-81dd-4f94-81bd-8a569c10bae8-hubble-tls\") pod \"cilium-wx4pl\" (UID: \"8317ad18-81dd-4f94-81bd-8a569c10bae8\") " pod="kube-system/cilium-wx4pl" Dec 13 14:26:06.608430 kubelet[1419]: I1213 14:26:06.608260 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8317ad18-81dd-4f94-81bd-8a569c10bae8-etc-cni-netd\") pod \"cilium-wx4pl\" (UID: \"8317ad18-81dd-4f94-81bd-8a569c10bae8\") " pod="kube-system/cilium-wx4pl" Dec 13 14:26:06.608575 kubelet[1419]: I1213 14:26:06.608276 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6036f170-3e0c-444b-9d9d-4c2d01dc27f9-cilium-config-path\") pod \"cilium-operator-5d85765b45-l5wvd\" (UID: \"6036f170-3e0c-444b-9d9d-4c2d01dc27f9\") " pod="kube-system/cilium-operator-5d85765b45-l5wvd" Dec 13 14:26:06.608575 kubelet[1419]: I1213 14:26:06.608292 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8317ad18-81dd-4f94-81bd-8a569c10bae8-cilium-run\") pod \"cilium-wx4pl\" (UID: \"8317ad18-81dd-4f94-81bd-8a569c10bae8\") " pod="kube-system/cilium-wx4pl" Dec 13 14:26:06.608575 kubelet[1419]: I1213 14:26:06.608357 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8317ad18-81dd-4f94-81bd-8a569c10bae8-bpf-maps\") pod \"cilium-wx4pl\" (UID: \"8317ad18-81dd-4f94-81bd-8a569c10bae8\") " pod="kube-system/cilium-wx4pl" Dec 13 14:26:06.608575 kubelet[1419]: I1213 14:26:06.608385 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8317ad18-81dd-4f94-81bd-8a569c10bae8-cilium-config-path\") pod 
\"cilium-wx4pl\" (UID: \"8317ad18-81dd-4f94-81bd-8a569c10bae8\") " pod="kube-system/cilium-wx4pl" Dec 13 14:26:06.608575 kubelet[1419]: I1213 14:26:06.608403 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8317ad18-81dd-4f94-81bd-8a569c10bae8-cilium-ipsec-secrets\") pod \"cilium-wx4pl\" (UID: \"8317ad18-81dd-4f94-81bd-8a569c10bae8\") " pod="kube-system/cilium-wx4pl" Dec 13 14:26:06.608688 kubelet[1419]: I1213 14:26:06.608423 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdx6d\" (UniqueName: \"kubernetes.io/projected/8317ad18-81dd-4f94-81bd-8a569c10bae8-kube-api-access-tdx6d\") pod \"cilium-wx4pl\" (UID: \"8317ad18-81dd-4f94-81bd-8a569c10bae8\") " pod="kube-system/cilium-wx4pl" Dec 13 14:26:06.608688 kubelet[1419]: I1213 14:26:06.608443 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65v9p\" (UniqueName: \"kubernetes.io/projected/6036f170-3e0c-444b-9d9d-4c2d01dc27f9-kube-api-access-65v9p\") pod \"cilium-operator-5d85765b45-l5wvd\" (UID: \"6036f170-3e0c-444b-9d9d-4c2d01dc27f9\") " pod="kube-system/cilium-operator-5d85765b45-l5wvd" Dec 13 14:26:06.747527 kubelet[1419]: E1213 14:26:06.747353 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:26:06.748059 env[1211]: time="2024-12-13T14:26:06.748017873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-l5wvd,Uid:6036f170-3e0c-444b-9d9d-4c2d01dc27f9,Namespace:kube-system,Attempt:0,}" Dec 13 14:26:06.762698 env[1211]: time="2024-12-13T14:26:06.762543788Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:26:06.762698 env[1211]: time="2024-12-13T14:26:06.762600317Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:26:06.762698 env[1211]: time="2024-12-13T14:26:06.762630916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:26:06.762973 env[1211]: time="2024-12-13T14:26:06.762835441Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5ab121320dc1099a7ceff186252aa73592c517c31e649ba1a71346cb3ed8b4dc pid=2984 runtime=io.containerd.runc.v2 Dec 13 14:26:06.775393 systemd[1]: Started cri-containerd-5ab121320dc1099a7ceff186252aa73592c517c31e649ba1a71346cb3ed8b4dc.scope. Dec 13 14:26:06.809810 env[1211]: time="2024-12-13T14:26:06.809746532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-l5wvd,Uid:6036f170-3e0c-444b-9d9d-4c2d01dc27f9,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ab121320dc1099a7ceff186252aa73592c517c31e649ba1a71346cb3ed8b4dc\"" Dec 13 14:26:06.810491 kubelet[1419]: E1213 14:26:06.810463 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:26:06.811195 env[1211]: time="2024-12-13T14:26:06.811167963Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 14:26:06.885588 kubelet[1419]: I1213 14:26:06.885510 1419 setters.go:600] "Node became not ready" node="10.0.0.88" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T14:26:06Z","lastTransitionTime":"2024-12-13T14:26:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 13 14:26:07.011976 kubelet[1419]: I1213 14:26:07.011472 1419 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8317ad18-81dd-4f94-81bd-8a569c10bae8-etc-cni-netd\") pod \"8317ad18-81dd-4f94-81bd-8a569c10bae8\" (UID: \"8317ad18-81dd-4f94-81bd-8a569c10bae8\") " Dec 13 14:26:07.011976 kubelet[1419]: I1213 14:26:07.011524 1419 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8317ad18-81dd-4f94-81bd-8a569c10bae8-bpf-maps\") pod \"8317ad18-81dd-4f94-81bd-8a569c10bae8\" (UID: \"8317ad18-81dd-4f94-81bd-8a569c10bae8\") " Dec 13 14:26:07.011976 kubelet[1419]: I1213 14:26:07.011549 1419 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8317ad18-81dd-4f94-81bd-8a569c10bae8-clustermesh-secrets\") pod \"8317ad18-81dd-4f94-81bd-8a569c10bae8\" (UID: \"8317ad18-81dd-4f94-81bd-8a569c10bae8\") " Dec 13 14:26:07.011976 kubelet[1419]: I1213 14:26:07.011565 1419 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8317ad18-81dd-4f94-81bd-8a569c10bae8-lib-modules\") pod \"8317ad18-81dd-4f94-81bd-8a569c10bae8\" (UID: \"8317ad18-81dd-4f94-81bd-8a569c10bae8\") " Dec 13 14:26:07.011976 kubelet[1419]: I1213 14:26:07.011578 1419 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8317ad18-81dd-4f94-81bd-8a569c10bae8-hostproc\") pod \"8317ad18-81dd-4f94-81bd-8a569c10bae8\" (UID: \"8317ad18-81dd-4f94-81bd-8a569c10bae8\") " Dec 13 14:26:07.011976 kubelet[1419]: I1213 14:26:07.011590 1419 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8317ad18-81dd-4f94-81bd-8a569c10bae8-cilium-cgroup\") pod \"8317ad18-81dd-4f94-81bd-8a569c10bae8\" (UID: \"8317ad18-81dd-4f94-81bd-8a569c10bae8\") " Dec 13 14:26:07.012408 kubelet[1419]: I1213 14:26:07.011605 1419 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8317ad18-81dd-4f94-81bd-8a569c10bae8-hubble-tls\") pod \"8317ad18-81dd-4f94-81bd-8a569c10bae8\" (UID: \"8317ad18-81dd-4f94-81bd-8a569c10bae8\") " Dec 13 14:26:07.012408 kubelet[1419]: I1213 14:26:07.011619 1419 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8317ad18-81dd-4f94-81bd-8a569c10bae8-cilium-ipsec-secrets\") pod \"8317ad18-81dd-4f94-81bd-8a569c10bae8\" (UID: \"8317ad18-81dd-4f94-81bd-8a569c10bae8\") " Dec 13 14:26:07.012408 kubelet[1419]: I1213 14:26:07.011637 1419 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8317ad18-81dd-4f94-81bd-8a569c10bae8-xtables-lock\") pod \"8317ad18-81dd-4f94-81bd-8a569c10bae8\" (UID: \"8317ad18-81dd-4f94-81bd-8a569c10bae8\") " Dec 13 14:26:07.012408 kubelet[1419]: I1213 14:26:07.011634 1419 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8317ad18-81dd-4f94-81bd-8a569c10bae8-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8317ad18-81dd-4f94-81bd-8a569c10bae8" (UID: "8317ad18-81dd-4f94-81bd-8a569c10bae8"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:26:07.012408 kubelet[1419]: I1213 14:26:07.011653 1419 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdx6d\" (UniqueName: \"kubernetes.io/projected/8317ad18-81dd-4f94-81bd-8a569c10bae8-kube-api-access-tdx6d\") pod \"8317ad18-81dd-4f94-81bd-8a569c10bae8\" (UID: \"8317ad18-81dd-4f94-81bd-8a569c10bae8\") " Dec 13 14:26:07.012408 kubelet[1419]: I1213 14:26:07.011697 1419 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8317ad18-81dd-4f94-81bd-8a569c10bae8-cni-path\") pod \"8317ad18-81dd-4f94-81bd-8a569c10bae8\" (UID: \"8317ad18-81dd-4f94-81bd-8a569c10bae8\") " Dec 13 14:26:07.012568 kubelet[1419]: I1213 14:26:07.011771 1419 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8317ad18-81dd-4f94-81bd-8a569c10bae8-host-proc-sys-kernel\") pod \"8317ad18-81dd-4f94-81bd-8a569c10bae8\" (UID: \"8317ad18-81dd-4f94-81bd-8a569c10bae8\") " Dec 13 14:26:07.012568 kubelet[1419]: I1213 14:26:07.011796 1419 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8317ad18-81dd-4f94-81bd-8a569c10bae8-host-proc-sys-net\") pod \"8317ad18-81dd-4f94-81bd-8a569c10bae8\" (UID: \"8317ad18-81dd-4f94-81bd-8a569c10bae8\") " Dec 13 14:26:07.012568 kubelet[1419]: I1213 14:26:07.011816 1419 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8317ad18-81dd-4f94-81bd-8a569c10bae8-cilium-run\") pod \"8317ad18-81dd-4f94-81bd-8a569c10bae8\" (UID: \"8317ad18-81dd-4f94-81bd-8a569c10bae8\") " Dec 13 14:26:07.012568 kubelet[1419]: I1213 14:26:07.011845 1419 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" 
(UniqueName: \"kubernetes.io/configmap/8317ad18-81dd-4f94-81bd-8a569c10bae8-cilium-config-path\") pod \"8317ad18-81dd-4f94-81bd-8a569c10bae8\" (UID: \"8317ad18-81dd-4f94-81bd-8a569c10bae8\") " Dec 13 14:26:07.012568 kubelet[1419]: I1213 14:26:07.012107 1419 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8317ad18-81dd-4f94-81bd-8a569c10bae8-etc-cni-netd\") on node \"10.0.0.88\" DevicePath \"\"" Dec 13 14:26:07.012568 kubelet[1419]: I1213 14:26:07.011990 1419 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8317ad18-81dd-4f94-81bd-8a569c10bae8-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8317ad18-81dd-4f94-81bd-8a569c10bae8" (UID: "8317ad18-81dd-4f94-81bd-8a569c10bae8"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:26:07.012741 kubelet[1419]: I1213 14:26:07.012008 1419 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8317ad18-81dd-4f94-81bd-8a569c10bae8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8317ad18-81dd-4f94-81bd-8a569c10bae8" (UID: "8317ad18-81dd-4f94-81bd-8a569c10bae8"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:26:07.012741 kubelet[1419]: I1213 14:26:07.012018 1419 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8317ad18-81dd-4f94-81bd-8a569c10bae8-hostproc" (OuterVolumeSpecName: "hostproc") pod "8317ad18-81dd-4f94-81bd-8a569c10bae8" (UID: "8317ad18-81dd-4f94-81bd-8a569c10bae8"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:26:07.012741 kubelet[1419]: I1213 14:26:07.012169 1419 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8317ad18-81dd-4f94-81bd-8a569c10bae8-cni-path" (OuterVolumeSpecName: "cni-path") pod "8317ad18-81dd-4f94-81bd-8a569c10bae8" (UID: "8317ad18-81dd-4f94-81bd-8a569c10bae8"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:26:07.012741 kubelet[1419]: I1213 14:26:07.012194 1419 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8317ad18-81dd-4f94-81bd-8a569c10bae8-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8317ad18-81dd-4f94-81bd-8a569c10bae8" (UID: "8317ad18-81dd-4f94-81bd-8a569c10bae8"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:26:07.012741 kubelet[1419]: I1213 14:26:07.012215 1419 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8317ad18-81dd-4f94-81bd-8a569c10bae8-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8317ad18-81dd-4f94-81bd-8a569c10bae8" (UID: "8317ad18-81dd-4f94-81bd-8a569c10bae8"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:26:07.012887 kubelet[1419]: I1213 14:26:07.012233 1419 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8317ad18-81dd-4f94-81bd-8a569c10bae8-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8317ad18-81dd-4f94-81bd-8a569c10bae8" (UID: "8317ad18-81dd-4f94-81bd-8a569c10bae8"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:26:07.013459 kubelet[1419]: I1213 14:26:07.013056 1419 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8317ad18-81dd-4f94-81bd-8a569c10bae8-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8317ad18-81dd-4f94-81bd-8a569c10bae8" (UID: "8317ad18-81dd-4f94-81bd-8a569c10bae8"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:26:07.013459 kubelet[1419]: I1213 14:26:07.013097 1419 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8317ad18-81dd-4f94-81bd-8a569c10bae8-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8317ad18-81dd-4f94-81bd-8a569c10bae8" (UID: "8317ad18-81dd-4f94-81bd-8a569c10bae8"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:26:07.014586 kubelet[1419]: I1213 14:26:07.014548 1419 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8317ad18-81dd-4f94-81bd-8a569c10bae8-kube-api-access-tdx6d" (OuterVolumeSpecName: "kube-api-access-tdx6d") pod "8317ad18-81dd-4f94-81bd-8a569c10bae8" (UID: "8317ad18-81dd-4f94-81bd-8a569c10bae8"). InnerVolumeSpecName "kube-api-access-tdx6d". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:26:07.014891 kubelet[1419]: I1213 14:26:07.014854 1419 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8317ad18-81dd-4f94-81bd-8a569c10bae8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8317ad18-81dd-4f94-81bd-8a569c10bae8" (UID: "8317ad18-81dd-4f94-81bd-8a569c10bae8"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:26:07.015183 kubelet[1419]: I1213 14:26:07.015130 1419 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8317ad18-81dd-4f94-81bd-8a569c10bae8-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "8317ad18-81dd-4f94-81bd-8a569c10bae8" (UID: "8317ad18-81dd-4f94-81bd-8a569c10bae8"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:26:07.015893 kubelet[1419]: I1213 14:26:07.015863 1419 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8317ad18-81dd-4f94-81bd-8a569c10bae8-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8317ad18-81dd-4f94-81bd-8a569c10bae8" (UID: "8317ad18-81dd-4f94-81bd-8a569c10bae8"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:26:07.016948 kubelet[1419]: I1213 14:26:07.016917 1419 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8317ad18-81dd-4f94-81bd-8a569c10bae8-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8317ad18-81dd-4f94-81bd-8a569c10bae8" (UID: "8317ad18-81dd-4f94-81bd-8a569c10bae8"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:26:07.112519 kubelet[1419]: I1213 14:26:07.112435 1419 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8317ad18-81dd-4f94-81bd-8a569c10bae8-bpf-maps\") on node \"10.0.0.88\" DevicePath \"\"" Dec 13 14:26:07.112519 kubelet[1419]: I1213 14:26:07.112474 1419 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8317ad18-81dd-4f94-81bd-8a569c10bae8-clustermesh-secrets\") on node \"10.0.0.88\" DevicePath \"\"" Dec 13 14:26:07.112519 kubelet[1419]: I1213 14:26:07.112496 1419 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8317ad18-81dd-4f94-81bd-8a569c10bae8-lib-modules\") on node \"10.0.0.88\" DevicePath \"\"" Dec 13 14:26:07.112519 kubelet[1419]: I1213 14:26:07.112505 1419 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8317ad18-81dd-4f94-81bd-8a569c10bae8-hostproc\") on node \"10.0.0.88\" DevicePath \"\"" Dec 13 14:26:07.112519 kubelet[1419]: I1213 14:26:07.112515 1419 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8317ad18-81dd-4f94-81bd-8a569c10bae8-cilium-cgroup\") on node \"10.0.0.88\" DevicePath \"\"" Dec 13 14:26:07.112519 kubelet[1419]: I1213 14:26:07.112522 1419 reconciler_common.go:288] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8317ad18-81dd-4f94-81bd-8a569c10bae8-cilium-ipsec-secrets\") on node \"10.0.0.88\" DevicePath \"\"" Dec 13 14:26:07.112519 kubelet[1419]: I1213 14:26:07.112530 1419 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8317ad18-81dd-4f94-81bd-8a569c10bae8-xtables-lock\") on node \"10.0.0.88\" DevicePath \"\"" Dec 13 14:26:07.112519 kubelet[1419]: I1213 14:26:07.112537 1419 
reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8317ad18-81dd-4f94-81bd-8a569c10bae8-hubble-tls\") on node \"10.0.0.88\" DevicePath \"\"" Dec 13 14:26:07.112992 kubelet[1419]: I1213 14:26:07.112546 1419 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-tdx6d\" (UniqueName: \"kubernetes.io/projected/8317ad18-81dd-4f94-81bd-8a569c10bae8-kube-api-access-tdx6d\") on node \"10.0.0.88\" DevicePath \"\"" Dec 13 14:26:07.112992 kubelet[1419]: I1213 14:26:07.112554 1419 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8317ad18-81dd-4f94-81bd-8a569c10bae8-cni-path\") on node \"10.0.0.88\" DevicePath \"\"" Dec 13 14:26:07.112992 kubelet[1419]: I1213 14:26:07.112561 1419 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8317ad18-81dd-4f94-81bd-8a569c10bae8-host-proc-sys-kernel\") on node \"10.0.0.88\" DevicePath \"\"" Dec 13 14:26:07.112992 kubelet[1419]: I1213 14:26:07.112572 1419 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8317ad18-81dd-4f94-81bd-8a569c10bae8-host-proc-sys-net\") on node \"10.0.0.88\" DevicePath \"\"" Dec 13 14:26:07.112992 kubelet[1419]: I1213 14:26:07.112580 1419 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8317ad18-81dd-4f94-81bd-8a569c10bae8-cilium-run\") on node \"10.0.0.88\" DevicePath \"\"" Dec 13 14:26:07.112992 kubelet[1419]: I1213 14:26:07.112587 1419 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8317ad18-81dd-4f94-81bd-8a569c10bae8-cilium-config-path\") on node \"10.0.0.88\" DevicePath \"\"" Dec 13 14:26:07.386230 kubelet[1419]: E1213 14:26:07.386187 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Dec 13 14:26:07.472974 systemd[1]: Removed slice kubepods-burstable-pod8317ad18_81dd_4f94_81bd_8a569c10bae8.slice. Dec 13 14:26:07.717041 systemd[1]: var-lib-kubelet-pods-8317ad18\x2d81dd\x2d4f94\x2d81bd\x2d8a569c10bae8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtdx6d.mount: Deactivated successfully. Dec 13 14:26:07.717160 systemd[1]: var-lib-kubelet-pods-8317ad18\x2d81dd\x2d4f94\x2d81bd\x2d8a569c10bae8-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 14:26:07.717230 systemd[1]: var-lib-kubelet-pods-8317ad18\x2d81dd\x2d4f94\x2d81bd\x2d8a569c10bae8-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Dec 13 14:26:07.717301 systemd[1]: var-lib-kubelet-pods-8317ad18\x2d81dd\x2d4f94\x2d81bd\x2d8a569c10bae8-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 14:26:07.894903 systemd[1]: Created slice kubepods-burstable-podc4f951c1_9daf_427b_b4d9_a0bf41dcad1b.slice. 
Dec 13 14:26:08.017555 kubelet[1419]: I1213 14:26:08.017346 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c4f951c1-9daf-427b-b4d9-a0bf41dcad1b-cilium-ipsec-secrets\") pod \"cilium-hjk4s\" (UID: \"c4f951c1-9daf-427b-b4d9-a0bf41dcad1b\") " pod="kube-system/cilium-hjk4s" Dec 13 14:26:08.017555 kubelet[1419]: I1213 14:26:08.017432 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c4f951c1-9daf-427b-b4d9-a0bf41dcad1b-host-proc-sys-kernel\") pod \"cilium-hjk4s\" (UID: \"c4f951c1-9daf-427b-b4d9-a0bf41dcad1b\") " pod="kube-system/cilium-hjk4s" Dec 13 14:26:08.017555 kubelet[1419]: I1213 14:26:08.017465 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c4f951c1-9daf-427b-b4d9-a0bf41dcad1b-cni-path\") pod \"cilium-hjk4s\" (UID: \"c4f951c1-9daf-427b-b4d9-a0bf41dcad1b\") " pod="kube-system/cilium-hjk4s" Dec 13 14:26:08.017555 kubelet[1419]: I1213 14:26:08.017479 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c4f951c1-9daf-427b-b4d9-a0bf41dcad1b-cilium-config-path\") pod \"cilium-hjk4s\" (UID: \"c4f951c1-9daf-427b-b4d9-a0bf41dcad1b\") " pod="kube-system/cilium-hjk4s" Dec 13 14:26:08.017555 kubelet[1419]: I1213 14:26:08.017492 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c4f951c1-9daf-427b-b4d9-a0bf41dcad1b-lib-modules\") pod \"cilium-hjk4s\" (UID: \"c4f951c1-9daf-427b-b4d9-a0bf41dcad1b\") " pod="kube-system/cilium-hjk4s" Dec 13 14:26:08.017555 kubelet[1419]: I1213 14:26:08.017507 1419 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c4f951c1-9daf-427b-b4d9-a0bf41dcad1b-bpf-maps\") pod \"cilium-hjk4s\" (UID: \"c4f951c1-9daf-427b-b4d9-a0bf41dcad1b\") " pod="kube-system/cilium-hjk4s" Dec 13 14:26:08.017900 kubelet[1419]: I1213 14:26:08.017518 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c4f951c1-9daf-427b-b4d9-a0bf41dcad1b-etc-cni-netd\") pod \"cilium-hjk4s\" (UID: \"c4f951c1-9daf-427b-b4d9-a0bf41dcad1b\") " pod="kube-system/cilium-hjk4s" Dec 13 14:26:08.017900 kubelet[1419]: I1213 14:26:08.017542 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c4f951c1-9daf-427b-b4d9-a0bf41dcad1b-clustermesh-secrets\") pod \"cilium-hjk4s\" (UID: \"c4f951c1-9daf-427b-b4d9-a0bf41dcad1b\") " pod="kube-system/cilium-hjk4s" Dec 13 14:26:08.017900 kubelet[1419]: I1213 14:26:08.017562 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c4f951c1-9daf-427b-b4d9-a0bf41dcad1b-host-proc-sys-net\") pod \"cilium-hjk4s\" (UID: \"c4f951c1-9daf-427b-b4d9-a0bf41dcad1b\") " pod="kube-system/cilium-hjk4s" Dec 13 14:26:08.017900 kubelet[1419]: I1213 14:26:08.017576 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c4f951c1-9daf-427b-b4d9-a0bf41dcad1b-hubble-tls\") pod \"cilium-hjk4s\" (UID: \"c4f951c1-9daf-427b-b4d9-a0bf41dcad1b\") " pod="kube-system/cilium-hjk4s" Dec 13 14:26:08.017900 kubelet[1419]: I1213 14:26:08.017595 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzlk9\" (UniqueName: 
\"kubernetes.io/projected/c4f951c1-9daf-427b-b4d9-a0bf41dcad1b-kube-api-access-mzlk9\") pod \"cilium-hjk4s\" (UID: \"c4f951c1-9daf-427b-b4d9-a0bf41dcad1b\") " pod="kube-system/cilium-hjk4s" Dec 13 14:26:08.017900 kubelet[1419]: I1213 14:26:08.017608 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c4f951c1-9daf-427b-b4d9-a0bf41dcad1b-cilium-run\") pod \"cilium-hjk4s\" (UID: \"c4f951c1-9daf-427b-b4d9-a0bf41dcad1b\") " pod="kube-system/cilium-hjk4s" Dec 13 14:26:08.018107 kubelet[1419]: I1213 14:26:08.017623 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c4f951c1-9daf-427b-b4d9-a0bf41dcad1b-hostproc\") pod \"cilium-hjk4s\" (UID: \"c4f951c1-9daf-427b-b4d9-a0bf41dcad1b\") " pod="kube-system/cilium-hjk4s" Dec 13 14:26:08.018107 kubelet[1419]: I1213 14:26:08.017657 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c4f951c1-9daf-427b-b4d9-a0bf41dcad1b-cilium-cgroup\") pod \"cilium-hjk4s\" (UID: \"c4f951c1-9daf-427b-b4d9-a0bf41dcad1b\") " pod="kube-system/cilium-hjk4s" Dec 13 14:26:08.018107 kubelet[1419]: I1213 14:26:08.017693 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c4f951c1-9daf-427b-b4d9-a0bf41dcad1b-xtables-lock\") pod \"cilium-hjk4s\" (UID: \"c4f951c1-9daf-427b-b4d9-a0bf41dcad1b\") " pod="kube-system/cilium-hjk4s" Dec 13 14:26:08.202149 kubelet[1419]: E1213 14:26:08.202102 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:26:08.202563 env[1211]: time="2024-12-13T14:26:08.202522794Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hjk4s,Uid:c4f951c1-9daf-427b-b4d9-a0bf41dcad1b,Namespace:kube-system,Attempt:0,}" Dec 13 14:26:08.252878 env[1211]: time="2024-12-13T14:26:08.252808714Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:26:08.252878 env[1211]: time="2024-12-13T14:26:08.252847328Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:26:08.252878 env[1211]: time="2024-12-13T14:26:08.252857689Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:26:08.253131 env[1211]: time="2024-12-13T14:26:08.252993651Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c1d5c43bd6d706ac0eabd9b3a140b8b977edcde55169d7bc1821aa884404cf6b pid=3032 runtime=io.containerd.runc.v2 Dec 13 14:26:08.265090 systemd[1]: Started cri-containerd-c1d5c43bd6d706ac0eabd9b3a140b8b977edcde55169d7bc1821aa884404cf6b.scope. 
Dec 13 14:26:08.289402 env[1211]: time="2024-12-13T14:26:08.289242056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hjk4s,Uid:c4f951c1-9daf-427b-b4d9-a0bf41dcad1b,Namespace:kube-system,Attempt:0,} returns sandbox id \"c1d5c43bd6d706ac0eabd9b3a140b8b977edcde55169d7bc1821aa884404cf6b\"" Dec 13 14:26:08.290139 kubelet[1419]: E1213 14:26:08.290114 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:26:08.292260 env[1211]: time="2024-12-13T14:26:08.292221603Z" level=info msg="CreateContainer within sandbox \"c1d5c43bd6d706ac0eabd9b3a140b8b977edcde55169d7bc1821aa884404cf6b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:26:08.304179 env[1211]: time="2024-12-13T14:26:08.304128826Z" level=info msg="CreateContainer within sandbox \"c1d5c43bd6d706ac0eabd9b3a140b8b977edcde55169d7bc1821aa884404cf6b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"86b5106697ba57e17675f746c14c183ad3024b8ac64a62194cbf3e95725d6c3e\"" Dec 13 14:26:08.304635 env[1211]: time="2024-12-13T14:26:08.304595054Z" level=info msg="StartContainer for \"86b5106697ba57e17675f746c14c183ad3024b8ac64a62194cbf3e95725d6c3e\"" Dec 13 14:26:08.318752 systemd[1]: Started cri-containerd-86b5106697ba57e17675f746c14c183ad3024b8ac64a62194cbf3e95725d6c3e.scope. Dec 13 14:26:08.342884 env[1211]: time="2024-12-13T14:26:08.342835123Z" level=info msg="StartContainer for \"86b5106697ba57e17675f746c14c183ad3024b8ac64a62194cbf3e95725d6c3e\" returns successfully" Dec 13 14:26:08.350481 systemd[1]: cri-containerd-86b5106697ba57e17675f746c14c183ad3024b8ac64a62194cbf3e95725d6c3e.scope: Deactivated successfully. 
Dec 13 14:26:08.377659 env[1211]: time="2024-12-13T14:26:08.377611024Z" level=info msg="shim disconnected" id=86b5106697ba57e17675f746c14c183ad3024b8ac64a62194cbf3e95725d6c3e Dec 13 14:26:08.377659 env[1211]: time="2024-12-13T14:26:08.377659126Z" level=warning msg="cleaning up after shim disconnected" id=86b5106697ba57e17675f746c14c183ad3024b8ac64a62194cbf3e95725d6c3e namespace=k8s.io Dec 13 14:26:08.377849 env[1211]: time="2024-12-13T14:26:08.377669226Z" level=info msg="cleaning up dead shim" Dec 13 14:26:08.384701 env[1211]: time="2024-12-13T14:26:08.384640688Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:26:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3114 runtime=io.containerd.runc.v2\n" Dec 13 14:26:08.386353 kubelet[1419]: E1213 14:26:08.386295 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:08.832836 kubelet[1419]: E1213 14:26:08.832794 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:26:08.834473 env[1211]: time="2024-12-13T14:26:08.834403081Z" level=info msg="CreateContainer within sandbox \"c1d5c43bd6d706ac0eabd9b3a140b8b977edcde55169d7bc1821aa884404cf6b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 14:26:08.846716 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1200848453.mount: Deactivated successfully. 
Dec 13 14:26:08.848093 env[1211]: time="2024-12-13T14:26:08.848043823Z" level=info msg="CreateContainer within sandbox \"c1d5c43bd6d706ac0eabd9b3a140b8b977edcde55169d7bc1821aa884404cf6b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"317d4704d4a5653cb1b32b038f31854f5859ee935b97091311f88abb75838bf5\"" Dec 13 14:26:08.848712 env[1211]: time="2024-12-13T14:26:08.848646473Z" level=info msg="StartContainer for \"317d4704d4a5653cb1b32b038f31854f5859ee935b97091311f88abb75838bf5\"" Dec 13 14:26:08.874160 systemd[1]: Started cri-containerd-317d4704d4a5653cb1b32b038f31854f5859ee935b97091311f88abb75838bf5.scope. Dec 13 14:26:08.899431 env[1211]: time="2024-12-13T14:26:08.899343425Z" level=info msg="StartContainer for \"317d4704d4a5653cb1b32b038f31854f5859ee935b97091311f88abb75838bf5\" returns successfully" Dec 13 14:26:08.903145 systemd[1]: cri-containerd-317d4704d4a5653cb1b32b038f31854f5859ee935b97091311f88abb75838bf5.scope: Deactivated successfully. Dec 13 14:26:08.946547 env[1211]: time="2024-12-13T14:26:08.946496033Z" level=info msg="shim disconnected" id=317d4704d4a5653cb1b32b038f31854f5859ee935b97091311f88abb75838bf5 Dec 13 14:26:08.946547 env[1211]: time="2024-12-13T14:26:08.946541380Z" level=warning msg="cleaning up after shim disconnected" id=317d4704d4a5653cb1b32b038f31854f5859ee935b97091311f88abb75838bf5 namespace=k8s.io Dec 13 14:26:08.946547 env[1211]: time="2024-12-13T14:26:08.946549547Z" level=info msg="cleaning up dead shim" Dec 13 14:26:08.953087 env[1211]: time="2024-12-13T14:26:08.953046464Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:26:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3175 runtime=io.containerd.runc.v2\n" Dec 13 14:26:09.387078 kubelet[1419]: E1213 14:26:09.387019 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:09.449847 env[1211]: time="2024-12-13T14:26:09.449758690Z" level=info 
msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:09.452476 env[1211]: time="2024-12-13T14:26:09.452415041Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:09.454828 env[1211]: time="2024-12-13T14:26:09.454766195Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:09.455385 env[1211]: time="2024-12-13T14:26:09.455349507Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 13 14:26:09.457933 env[1211]: time="2024-12-13T14:26:09.457884524Z" level=info msg="CreateContainer within sandbox \"5ab121320dc1099a7ceff186252aa73592c517c31e649ba1a71346cb3ed8b4dc\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 14:26:09.470245 kubelet[1419]: I1213 14:26:09.470184 1419 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8317ad18-81dd-4f94-81bd-8a569c10bae8" path="/var/lib/kubelet/pods/8317ad18-81dd-4f94-81bd-8a569c10bae8/volumes" Dec 13 14:26:09.473110 env[1211]: time="2024-12-13T14:26:09.473049981Z" level=info msg="CreateContainer within sandbox \"5ab121320dc1099a7ceff186252aa73592c517c31e649ba1a71346cb3ed8b4dc\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f08a7ac7d03a9d47508beb18d2d0f75e1dbbef50d8f3c4021477533fecd12bcb\"" Dec 13 14:26:09.473428 
env[1211]: time="2024-12-13T14:26:09.473393622Z" level=info msg="StartContainer for \"f08a7ac7d03a9d47508beb18d2d0f75e1dbbef50d8f3c4021477533fecd12bcb\"" Dec 13 14:26:09.494553 systemd[1]: Started cri-containerd-f08a7ac7d03a9d47508beb18d2d0f75e1dbbef50d8f3c4021477533fecd12bcb.scope. Dec 13 14:26:09.641641 env[1211]: time="2024-12-13T14:26:09.641468822Z" level=info msg="StartContainer for \"f08a7ac7d03a9d47508beb18d2d0f75e1dbbef50d8f3c4021477533fecd12bcb\" returns successfully" Dec 13 14:26:09.716732 systemd[1]: run-containerd-runc-k8s.io-317d4704d4a5653cb1b32b038f31854f5859ee935b97091311f88abb75838bf5-runc.jyDOgJ.mount: Deactivated successfully. Dec 13 14:26:09.716811 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-317d4704d4a5653cb1b32b038f31854f5859ee935b97091311f88abb75838bf5-rootfs.mount: Deactivated successfully. Dec 13 14:26:09.836011 kubelet[1419]: E1213 14:26:09.835971 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:26:09.837830 kubelet[1419]: E1213 14:26:09.837814 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:26:09.840036 env[1211]: time="2024-12-13T14:26:09.839976580Z" level=info msg="CreateContainer within sandbox \"c1d5c43bd6d706ac0eabd9b3a140b8b977edcde55169d7bc1821aa884404cf6b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 14:26:09.844207 kubelet[1419]: I1213 14:26:09.844159 1419 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-l5wvd" podStartSLOduration=1.198567416 podStartE2EDuration="3.844145269s" podCreationTimestamp="2024-12-13 14:26:06 +0000 UTC" firstStartedPulling="2024-12-13 14:26:06.810890888 +0000 UTC m=+71.893384983" lastFinishedPulling="2024-12-13 
14:26:09.456468751 +0000 UTC m=+74.538962836" observedRunningTime="2024-12-13 14:26:09.844063502 +0000 UTC m=+74.926557577" watchObservedRunningTime="2024-12-13 14:26:09.844145269 +0000 UTC m=+74.926639354" Dec 13 14:26:09.857497 env[1211]: time="2024-12-13T14:26:09.857451299Z" level=info msg="CreateContainer within sandbox \"c1d5c43bd6d706ac0eabd9b3a140b8b977edcde55169d7bc1821aa884404cf6b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"673f0228a27cb25cc2d2e8ec8ce5d86a1c22d86a0de68c0c35fce4d8f99777c5\"" Dec 13 14:26:09.857912 env[1211]: time="2024-12-13T14:26:09.857877238Z" level=info msg="StartContainer for \"673f0228a27cb25cc2d2e8ec8ce5d86a1c22d86a0de68c0c35fce4d8f99777c5\"" Dec 13 14:26:09.874609 systemd[1]: Started cri-containerd-673f0228a27cb25cc2d2e8ec8ce5d86a1c22d86a0de68c0c35fce4d8f99777c5.scope. Dec 13 14:26:09.902789 env[1211]: time="2024-12-13T14:26:09.902309912Z" level=info msg="StartContainer for \"673f0228a27cb25cc2d2e8ec8ce5d86a1c22d86a0de68c0c35fce4d8f99777c5\" returns successfully" Dec 13 14:26:09.908910 systemd[1]: cri-containerd-673f0228a27cb25cc2d2e8ec8ce5d86a1c22d86a0de68c0c35fce4d8f99777c5.scope: Deactivated successfully. 
Dec 13 14:26:09.928081 env[1211]: time="2024-12-13T14:26:09.928034664Z" level=info msg="shim disconnected" id=673f0228a27cb25cc2d2e8ec8ce5d86a1c22d86a0de68c0c35fce4d8f99777c5
Dec 13 14:26:09.928081 env[1211]: time="2024-12-13T14:26:09.928079211Z" level=warning msg="cleaning up after shim disconnected" id=673f0228a27cb25cc2d2e8ec8ce5d86a1c22d86a0de68c0c35fce4d8f99777c5 namespace=k8s.io
Dec 13 14:26:09.928081 env[1211]: time="2024-12-13T14:26:09.928087296Z" level=info msg="cleaning up dead shim"
Dec 13 14:26:09.935759 env[1211]: time="2024-12-13T14:26:09.935695383Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:26:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3269 runtime=io.containerd.runc.v2\n"
Dec 13 14:26:10.387801 kubelet[1419]: E1213 14:26:10.387715 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:26:10.429574 kubelet[1419]: E1213 14:26:10.429516 1419 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 14:26:10.716311 systemd[1]: run-containerd-runc-k8s.io-673f0228a27cb25cc2d2e8ec8ce5d86a1c22d86a0de68c0c35fce4d8f99777c5-runc.cd8AIz.mount: Deactivated successfully.
Dec 13 14:26:10.716428 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-673f0228a27cb25cc2d2e8ec8ce5d86a1c22d86a0de68c0c35fce4d8f99777c5-rootfs.mount: Deactivated successfully.
Dec 13 14:26:10.840946 kubelet[1419]: E1213 14:26:10.840914 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:26:10.841172 kubelet[1419]: E1213 14:26:10.840971 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:26:10.842516 env[1211]: time="2024-12-13T14:26:10.842476025Z" level=info msg="CreateContainer within sandbox \"c1d5c43bd6d706ac0eabd9b3a140b8b977edcde55169d7bc1821aa884404cf6b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 14:26:10.857815 env[1211]: time="2024-12-13T14:26:10.857759299Z" level=info msg="CreateContainer within sandbox \"c1d5c43bd6d706ac0eabd9b3a140b8b977edcde55169d7bc1821aa884404cf6b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1d079fd80bfcaa48934be588a6117cc1e3397e4dc9a1a30c82e6f145fca56ca1\""
Dec 13 14:26:10.858289 env[1211]: time="2024-12-13T14:26:10.858266915Z" level=info msg="StartContainer for \"1d079fd80bfcaa48934be588a6117cc1e3397e4dc9a1a30c82e6f145fca56ca1\""
Dec 13 14:26:10.873705 systemd[1]: Started cri-containerd-1d079fd80bfcaa48934be588a6117cc1e3397e4dc9a1a30c82e6f145fca56ca1.scope.
Dec 13 14:26:10.895032 systemd[1]: cri-containerd-1d079fd80bfcaa48934be588a6117cc1e3397e4dc9a1a30c82e6f145fca56ca1.scope: Deactivated successfully.
Dec 13 14:26:10.896892 env[1211]: time="2024-12-13T14:26:10.896843112Z" level=info msg="StartContainer for \"1d079fd80bfcaa48934be588a6117cc1e3397e4dc9a1a30c82e6f145fca56ca1\" returns successfully"
Dec 13 14:26:10.918328 env[1211]: time="2024-12-13T14:26:10.918269949Z" level=info msg="shim disconnected" id=1d079fd80bfcaa48934be588a6117cc1e3397e4dc9a1a30c82e6f145fca56ca1
Dec 13 14:26:10.918328 env[1211]: time="2024-12-13T14:26:10.918312400Z" level=warning msg="cleaning up after shim disconnected" id=1d079fd80bfcaa48934be588a6117cc1e3397e4dc9a1a30c82e6f145fca56ca1 namespace=k8s.io
Dec 13 14:26:10.918328 env[1211]: time="2024-12-13T14:26:10.918320365Z" level=info msg="cleaning up dead shim"
Dec 13 14:26:10.924372 env[1211]: time="2024-12-13T14:26:10.924316233Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:26:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3325 runtime=io.containerd.runc.v2\n"
Dec 13 14:26:11.388268 kubelet[1419]: E1213 14:26:11.388197 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:26:11.716205 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d079fd80bfcaa48934be588a6117cc1e3397e4dc9a1a30c82e6f145fca56ca1-rootfs.mount: Deactivated successfully.
Dec 13 14:26:11.844150 kubelet[1419]: E1213 14:26:11.844118 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:26:11.845414 env[1211]: time="2024-12-13T14:26:11.845382501Z" level=info msg="CreateContainer within sandbox \"c1d5c43bd6d706ac0eabd9b3a140b8b977edcde55169d7bc1821aa884404cf6b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 14:26:12.191510 env[1211]: time="2024-12-13T14:26:12.191391108Z" level=info msg="CreateContainer within sandbox \"c1d5c43bd6d706ac0eabd9b3a140b8b977edcde55169d7bc1821aa884404cf6b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bc69962ee3a06d92257e368b7205fac609ccdd070ad583a4c18a6185f1d5ddd6\""
Dec 13 14:26:12.192205 env[1211]: time="2024-12-13T14:26:12.192172529Z" level=info msg="StartContainer for \"bc69962ee3a06d92257e368b7205fac609ccdd070ad583a4c18a6185f1d5ddd6\""
Dec 13 14:26:12.216353 systemd[1]: Started cri-containerd-bc69962ee3a06d92257e368b7205fac609ccdd070ad583a4c18a6185f1d5ddd6.scope.
Dec 13 14:26:12.256928 env[1211]: time="2024-12-13T14:26:12.256836929Z" level=info msg="StartContainer for \"bc69962ee3a06d92257e368b7205fac609ccdd070ad583a4c18a6185f1d5ddd6\" returns successfully"
Dec 13 14:26:12.388982 kubelet[1419]: E1213 14:26:12.388907 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:26:12.643767 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Dec 13 14:26:12.716321 systemd[1]: run-containerd-runc-k8s.io-bc69962ee3a06d92257e368b7205fac609ccdd070ad583a4c18a6185f1d5ddd6-runc.KA6oee.mount: Deactivated successfully.
Dec 13 14:26:12.848742 kubelet[1419]: E1213 14:26:12.848692 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:26:12.861805 kubelet[1419]: I1213 14:26:12.861752 1419 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hjk4s" podStartSLOduration=5.861719522 podStartE2EDuration="5.861719522s" podCreationTimestamp="2024-12-13 14:26:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:26:12.861427882 +0000 UTC m=+77.943921977" watchObservedRunningTime="2024-12-13 14:26:12.861719522 +0000 UTC m=+77.944228295"
Dec 13 14:26:13.389846 kubelet[1419]: E1213 14:26:13.389787 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:26:14.203561 kubelet[1419]: E1213 14:26:14.203498 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:26:14.390578 kubelet[1419]: E1213 14:26:14.390501 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:26:15.288321 systemd-networkd[1039]: lxc_health: Link UP
Dec 13 14:26:15.298029 systemd-networkd[1039]: lxc_health: Gained carrier
Dec 13 14:26:15.298764 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 14:26:15.338408 kubelet[1419]: E1213 14:26:15.338345 1419 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:26:15.391467 kubelet[1419]: E1213 14:26:15.391420 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:26:16.204825 kubelet[1419]: E1213 14:26:16.204780 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:26:16.391830 kubelet[1419]: E1213 14:26:16.391772 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:26:16.855534 kubelet[1419]: E1213 14:26:16.855492 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:26:16.875004 systemd[1]: run-containerd-runc-k8s.io-bc69962ee3a06d92257e368b7205fac609ccdd070ad583a4c18a6185f1d5ddd6-runc.FCrta6.mount: Deactivated successfully.
Dec 13 14:26:17.182821 systemd-networkd[1039]: lxc_health: Gained IPv6LL
Dec 13 14:26:17.392879 kubelet[1419]: E1213 14:26:17.392798 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:26:17.857518 kubelet[1419]: E1213 14:26:17.857413 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:26:18.393862 kubelet[1419]: E1213 14:26:18.393713 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:26:19.394201 kubelet[1419]: E1213 14:26:19.394134 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:26:20.395202 kubelet[1419]: E1213 14:26:20.395145 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:26:21.395843 kubelet[1419]: E1213 14:26:21.395787 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:26:22.396743 kubelet[1419]: E1213 14:26:22.396642 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"