Dec 13 01:58:46.057056 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Dec 12 23:50:37 -00 2024 Dec 13 01:58:46.057080 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c Dec 13 01:58:46.057089 kernel: BIOS-provided physical RAM map: Dec 13 01:58:46.057094 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Dec 13 01:58:46.057100 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Dec 13 01:58:46.057105 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Dec 13 01:58:46.057111 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Dec 13 01:58:46.057117 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Dec 13 01:58:46.057124 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Dec 13 01:58:46.057129 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Dec 13 01:58:46.057135 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Dec 13 01:58:46.057140 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Dec 13 01:58:46.057146 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Dec 13 01:58:46.057151 kernel: NX (Execute Disable) protection: active Dec 13 01:58:46.057159 kernel: SMBIOS 2.8 present. Dec 13 01:58:46.057165 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Dec 13 01:58:46.057171 kernel: Hypervisor detected: KVM Dec 13 01:58:46.057177 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 13 01:58:46.057183 kernel: kvm-clock: cpu 0, msr 1319b001, primary cpu clock Dec 13 01:58:46.057188 kernel: kvm-clock: using sched offset of 3019317808 cycles Dec 13 01:58:46.057195 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 13 01:58:46.057204 kernel: tsc: Detected 2794.748 MHz processor Dec 13 01:58:46.057210 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 13 01:58:46.057218 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 13 01:58:46.057234 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Dec 13 01:58:46.057240 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 13 01:58:46.057247 kernel: Using GB pages for direct mapping Dec 13 01:58:46.057253 kernel: ACPI: Early table checksum verification disabled Dec 13 01:58:46.057258 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Dec 13 01:58:46.057265 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:58:46.057271 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:58:46.057277 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:58:46.057284 kernel: ACPI: FACS 0x000000009CFE0000 000040 Dec 13 01:58:46.057290 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:58:46.057296 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:58:46.057302 kernel: ACPI: MCFG 
0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:58:46.057308 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:58:46.057314 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db] Dec 13 01:58:46.057320 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7] Dec 13 01:58:46.057326 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Dec 13 01:58:46.057344 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b] Dec 13 01:58:46.057364 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3] Dec 13 01:58:46.057373 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df] Dec 13 01:58:46.057380 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407] Dec 13 01:58:46.057386 kernel: No NUMA configuration found Dec 13 01:58:46.057393 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Dec 13 01:58:46.057401 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Dec 13 01:58:46.057407 kernel: Zone ranges: Dec 13 01:58:46.057413 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 01:58:46.057420 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Dec 13 01:58:46.057426 kernel: Normal empty Dec 13 01:58:46.057432 kernel: Movable zone start for each node Dec 13 01:58:46.057439 kernel: Early memory node ranges Dec 13 01:58:46.057445 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Dec 13 01:58:46.057452 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Dec 13 01:58:46.057459 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Dec 13 01:58:46.057469 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 01:58:46.057475 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Dec 13 01:58:46.057482 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Dec 13 01:58:46.057488 kernel: ACPI: PM-Timer IO Port: 0x608 Dec 13 01:58:46.057495 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 13 01:58:46.057501 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 13 01:58:46.057508 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Dec 13 01:58:46.057514 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 13 01:58:46.057520 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 13 01:58:46.057528 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 13 01:58:46.057534 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 13 01:58:46.057541 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 01:58:46.057549 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Dec 13 01:58:46.057556 kernel: TSC deadline timer available Dec 13 01:58:46.057562 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Dec 13 01:58:46.057569 kernel: kvm-guest: KVM setup pv remote TLB flush Dec 13 01:58:46.057575 kernel: kvm-guest: setup PV sched yield Dec 13 01:58:46.057581 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Dec 13 01:58:46.057589 kernel: Booting paravirtualized kernel on KVM Dec 13 01:58:46.057599 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 01:58:46.057606 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 Dec 13 01:58:46.057612 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 
d32488 u524288 Dec 13 01:58:46.057619 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152 Dec 13 01:58:46.057636 kernel: pcpu-alloc: [0] 0 1 2 3 Dec 13 01:58:46.057642 kernel: kvm-guest: setup async PF for cpu 0 Dec 13 01:58:46.057650 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0 Dec 13 01:58:46.057658 kernel: kvm-guest: PV spinlocks enabled Dec 13 01:58:46.057666 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 13 01:58:46.057673 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732 Dec 13 01:58:46.057679 kernel: Policy zone: DMA32 Dec 13 01:58:46.057686 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c Dec 13 01:58:46.057693 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 01:58:46.057700 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 01:58:46.057706 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 01:58:46.057713 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 01:58:46.057721 kernel: Memory: 2436696K/2571752K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47476K init, 4108K bss, 134796K reserved, 0K cma-reserved) Dec 13 01:58:46.057728 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Dec 13 01:58:46.057734 kernel: ftrace: allocating 34549 entries in 135 pages Dec 13 01:58:46.057741 kernel: ftrace: allocated 135 pages with 4 groups Dec 13 01:58:46.057747 kernel: rcu: Hierarchical RCU implementation. Dec 13 01:58:46.057754 kernel: rcu: RCU event tracing is enabled. Dec 13 01:58:46.057760 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Dec 13 01:58:46.057767 kernel: Rude variant of Tasks RCU enabled. Dec 13 01:58:46.057773 kernel: Tracing variant of Tasks RCU enabled. Dec 13 01:58:46.057781 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 13 01:58:46.057787 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Dec 13 01:58:46.057794 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Dec 13 01:58:46.057800 kernel: random: crng init done Dec 13 01:58:46.057806 kernel: Console: colour VGA+ 80x25 Dec 13 01:58:46.057813 kernel: printk: console [ttyS0] enabled Dec 13 01:58:46.057819 kernel: ACPI: Core revision 20210730 Dec 13 01:58:46.057826 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Dec 13 01:58:46.057832 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 01:58:46.057840 kernel: x2apic enabled Dec 13 01:58:46.057846 kernel: Switched APIC routing to physical x2apic. Dec 13 01:58:46.057852 kernel: kvm-guest: setup PV IPIs Dec 13 01:58:46.057859 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Dec 13 01:58:46.057865 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Dec 13 01:58:46.057876 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Dec 13 01:58:46.057882 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Dec 13 01:58:46.057889 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Dec 13 01:58:46.057895 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Dec 13 01:58:46.057908 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 01:58:46.057914 kernel: Spectre V2 : Mitigation: Retpolines Dec 13 01:58:46.057921 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 01:58:46.057929 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Dec 13 01:58:46.057936 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Dec 13 01:58:46.057943 kernel: RETBleed: Mitigation: untrained return thunk Dec 13 01:58:46.057950 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 13 01:58:46.057956 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Dec 13 01:58:46.057963 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 13 01:58:46.057984 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 13 01:58:46.057991 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 13 01:58:46.057997 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 13 01:58:46.058004 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Dec 13 01:58:46.058011 kernel: Freeing SMP alternatives memory: 32K Dec 13 01:58:46.058018 kernel: pid_max: default: 32768 minimum: 301 Dec 13 01:58:46.058024 kernel: LSM: Security Framework initializing Dec 13 01:58:46.058031 kernel: SELinux: Initializing. Dec 13 01:58:46.058039 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 01:58:46.058046 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 01:58:46.058053 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Dec 13 01:58:46.058060 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Dec 13 01:58:46.058066 kernel: ... version: 0 Dec 13 01:58:46.058073 kernel: ... bit width: 48 Dec 13 01:58:46.058080 kernel: ... generic registers: 6 Dec 13 01:58:46.058087 kernel: ... value mask: 0000ffffffffffff Dec 13 01:58:46.058093 kernel: ... max period: 00007fffffffffff Dec 13 01:58:46.058101 kernel: ... fixed-purpose events: 0 Dec 13 01:58:46.058108 kernel: ... event mask: 000000000000003f Dec 13 01:58:46.058114 kernel: signal: max sigframe size: 1776 Dec 13 01:58:46.058121 kernel: rcu: Hierarchical SRCU implementation. Dec 13 01:58:46.058128 kernel: smp: Bringing up secondary CPUs ... Dec 13 01:58:46.058134 kernel: x86: Booting SMP configuration: Dec 13 01:58:46.058141 kernel: .... 
node #0, CPUs: #1 Dec 13 01:58:46.058148 kernel: kvm-clock: cpu 1, msr 1319b041, secondary cpu clock Dec 13 01:58:46.058155 kernel: kvm-guest: setup async PF for cpu 1 Dec 13 01:58:46.058163 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0 Dec 13 01:58:46.058169 kernel: #2 Dec 13 01:58:46.058176 kernel: kvm-clock: cpu 2, msr 1319b081, secondary cpu clock Dec 13 01:58:46.058183 kernel: kvm-guest: setup async PF for cpu 2 Dec 13 01:58:46.058190 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0 Dec 13 01:58:46.058196 kernel: #3 Dec 13 01:58:46.058203 kernel: kvm-clock: cpu 3, msr 1319b0c1, secondary cpu clock Dec 13 01:58:46.058210 kernel: kvm-guest: setup async PF for cpu 3 Dec 13 01:58:46.058219 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0 Dec 13 01:58:46.058227 kernel: smp: Brought up 1 node, 4 CPUs Dec 13 01:58:46.058234 kernel: smpboot: Max logical packages: 1 Dec 13 01:58:46.058241 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Dec 13 01:58:46.058247 kernel: devtmpfs: initialized Dec 13 01:58:46.058254 kernel: x86/mm: Memory block size: 128MB Dec 13 01:58:46.058261 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 01:58:46.058268 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Dec 13 01:58:46.058274 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 01:58:46.058281 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 01:58:46.058289 kernel: audit: initializing netlink subsys (disabled) Dec 13 01:58:46.058296 kernel: audit: type=2000 audit(1734055125.625:1): state=initialized audit_enabled=0 res=1 Dec 13 01:58:46.058303 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 01:58:46.058310 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 01:58:46.058316 kernel: cpuidle: using governor menu Dec 13 01:58:46.058323 kernel: ACPI: bus type PCI registered Dec 13 01:58:46.058330 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 01:58:46.058337 kernel: dca service started, version 1.12.1 Dec 13 01:58:46.058344 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Dec 13 01:58:46.058352 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820 Dec 13 01:58:46.058359 kernel: PCI: Using configuration type 1 for base access Dec 13 01:58:46.058366 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Dec 13 01:58:46.058373 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 01:58:46.058380 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 01:58:46.058386 kernel: ACPI: Added _OSI(Module Device) Dec 13 01:58:46.058393 kernel: ACPI: Added _OSI(Processor Device) Dec 13 01:58:46.058400 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 01:58:46.058406 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 01:58:46.058414 kernel: ACPI: Added _OSI(Linux-Dell-Video) Dec 13 01:58:46.058421 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Dec 13 01:58:46.058428 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Dec 13 01:58:46.058434 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 01:58:46.058441 kernel: ACPI: Interpreter enabled Dec 13 01:58:46.058448 kernel: ACPI: PM: (supports S0 S3 S5) Dec 13 01:58:46.058455 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 01:58:46.058462 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 01:58:46.058468 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Dec 13 01:58:46.058475 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 01:58:46.058607 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 01:58:46.058701 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Dec 13 01:58:46.058772 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Dec 13 01:58:46.058781 kernel: PCI host bridge to bus 0000:00 Dec 13 01:58:46.058859 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 13 01:58:46.058922 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 13 01:58:46.059002 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 13 01:58:46.059075 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Dec 13 01:58:46.059139 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Dec 13 01:58:46.059242 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Dec 13 01:58:46.059315 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 01:58:46.059405 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Dec 13 01:58:46.059484 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Dec 13 01:58:46.059559 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Dec 13 01:58:46.059649 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Dec 13 01:58:46.059721 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Dec 13 01:58:46.059791 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 13 01:58:46.059876 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Dec 13 01:58:46.059947 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Dec 13 01:58:46.060047 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Dec 13 01:58:46.060120 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Dec 13 01:58:46.060205 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Dec 13 01:58:46.060277 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Dec 13 01:58:46.060345 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Dec 13 01:58:46.060413 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Dec 13 01:58:46.060487 kernel: pci 
0000:00:04.0: [1af4:1000] type 00 class 0x020000 Dec 13 01:58:46.060560 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Dec 13 01:58:46.060641 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Dec 13 01:58:46.060713 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Dec 13 01:58:46.060781 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Dec 13 01:58:46.060861 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Dec 13 01:58:46.060931 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Dec 13 01:58:46.061050 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Dec 13 01:58:46.061127 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Dec 13 01:58:46.061195 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Dec 13 01:58:46.061269 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Dec 13 01:58:46.061337 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Dec 13 01:58:46.061346 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 13 01:58:46.061354 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 13 01:58:46.061361 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 13 01:58:46.061370 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 13 01:58:46.061377 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Dec 13 01:58:46.061384 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Dec 13 01:58:46.061391 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Dec 13 01:58:46.061397 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Dec 13 01:58:46.061405 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Dec 13 01:58:46.061411 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Dec 13 01:58:46.061418 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Dec 13 01:58:46.061426 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Dec 13 01:58:46.061434 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Dec 13 01:58:46.061440 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Dec 13 01:58:46.061447 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Dec 13 01:58:46.061454 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Dec 13 01:58:46.061461 kernel: iommu: Default domain type: Translated Dec 13 01:58:46.061468 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 01:58:46.061536 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Dec 13 01:58:46.061606 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 13 01:58:46.061686 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Dec 13 01:58:46.061698 kernel: vgaarb: loaded Dec 13 01:58:46.061705 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 13 01:58:46.061712 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 13 01:58:46.061719 kernel: PTP clock support registered Dec 13 01:58:46.061726 kernel: PCI: Using ACPI for IRQ routing Dec 13 01:58:46.061733 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 13 01:58:46.061740 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Dec 13 01:58:46.061747 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Dec 13 01:58:46.061754 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Dec 13 01:58:46.061762 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Dec 13 01:58:46.061769 kernel: clocksource: Switched to clocksource kvm-clock Dec 13 01:58:46.061776 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 01:58:46.061782 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 01:58:46.061789 kernel: pnp: PnP ACPI init Dec 13 01:58:46.061869 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Dec 13 01:58:46.061879 kernel: pnp: PnP ACPI: found 6 devices Dec 13 01:58:46.061886 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 01:58:46.061895 kernel: NET: Registered PF_INET protocol family Dec 13 01:58:46.061902 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 01:58:46.061909 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 13 01:58:46.061916 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 01:58:46.061923 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 01:58:46.061930 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Dec 13 01:58:46.061937 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 13 01:58:46.061944 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 01:58:46.061951 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 01:58:46.061959 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 01:58:46.061976 kernel: NET: Registered PF_XDP protocol family Dec 13 01:58:46.062041 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 13 01:58:46.062103 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 13 01:58:46.062171 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 13 01:58:46.062253 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Dec 13 01:58:46.062331 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Dec 13 01:58:46.062395 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Dec 13 01:58:46.062407 kernel: PCI: CLS 0 bytes, default 64 Dec 13 01:58:46.062414 kernel: Initialise system trusted keyrings Dec 13 01:58:46.062421 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 13 01:58:46.062427 kernel: Key type asymmetric registered Dec 13 01:58:46.062434 kernel: Asymmetric key parser 'x509' registered Dec 13 01:58:46.062441 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 13 01:58:46.062448 kernel: io scheduler mq-deadline registered Dec 13 01:58:46.062455 kernel: io scheduler kyber registered Dec 13 01:58:46.062462 kernel: io scheduler bfq registered Dec 13 01:58:46.062470 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 01:58:46.062477 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Dec 13 01:58:46.062485 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Dec 13 
01:58:46.062492 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Dec 13 01:58:46.062499 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 01:58:46.062506 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 01:58:46.062513 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 01:58:46.062520 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 01:58:46.062526 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 01:58:46.062534 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 01:58:46.062615 kernel: rtc_cmos 00:04: RTC can wake from S4 Dec 13 01:58:46.062699 kernel: rtc_cmos 00:04: registered as rtc0 Dec 13 01:58:46.062765 kernel: rtc_cmos 00:04: setting system clock to 2024-12-13T01:58:45 UTC (1734055125) Dec 13 01:58:46.062841 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Dec 13 01:58:46.062851 kernel: NET: Registered PF_INET6 protocol family Dec 13 01:58:46.062858 kernel: Segment Routing with IPv6 Dec 13 01:58:46.062868 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 01:58:46.062877 kernel: NET: Registered PF_PACKET protocol family Dec 13 01:58:46.062884 kernel: Key type dns_resolver registered Dec 13 01:58:46.062891 kernel: IPI shorthand broadcast: enabled Dec 13 01:58:46.062898 kernel: sched_clock: Marking stable (447267044, 101949883)->(565965157, -16748230) Dec 13 01:58:46.062905 kernel: registered taskstats version 1 Dec 13 01:58:46.062912 kernel: Loading compiled-in X.509 certificates Dec 13 01:58:46.062919 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: d9defb0205602bee9bb670636cbe5c74194fdb5e' Dec 13 01:58:46.062926 kernel: Key type .fscrypt registered Dec 13 01:58:46.062932 kernel: Key type fscrypt-provisioning registered Dec 13 01:58:46.062941 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 13 01:58:46.062948 kernel: ima: Allocated hash algorithm: sha1 Dec 13 01:58:46.062954 kernel: ima: No architecture policies found Dec 13 01:58:46.062961 kernel: clk: Disabling unused clocks Dec 13 01:58:46.063013 kernel: Freeing unused kernel image (initmem) memory: 47476K Dec 13 01:58:46.063029 kernel: Write protecting the kernel read-only data: 28672k Dec 13 01:58:46.063036 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Dec 13 01:58:46.063043 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K Dec 13 01:58:46.063050 kernel: Run /init as init process Dec 13 01:58:46.063058 kernel: with arguments: Dec 13 01:58:46.063065 kernel: /init Dec 13 01:58:46.063072 kernel: with environment: Dec 13 01:58:46.063078 kernel: HOME=/ Dec 13 01:58:46.063085 kernel: TERM=linux Dec 13 01:58:46.063092 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 01:58:46.063101 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 01:58:46.063111 systemd[1]: Detected virtualization kvm. Dec 13 01:58:46.063120 systemd[1]: Detected architecture x86-64. Dec 13 01:58:46.063127 systemd[1]: Running in initrd. Dec 13 01:58:46.063134 systemd[1]: No hostname configured, using default hostname. Dec 13 01:58:46.063141 systemd[1]: Hostname set to . 
Dec 13 01:58:46.063148 systemd[1]: Initializing machine ID from VM UUID. Dec 13 01:58:46.063156 systemd[1]: Queued start job for default target initrd.target. Dec 13 01:58:46.063163 systemd[1]: Started systemd-ask-password-console.path. Dec 13 01:58:46.063170 systemd[1]: Reached target cryptsetup.target. Dec 13 01:58:46.063179 systemd[1]: Reached target paths.target. Dec 13 01:58:46.063193 systemd[1]: Reached target slices.target. Dec 13 01:58:46.063201 systemd[1]: Reached target swap.target. Dec 13 01:58:46.063209 systemd[1]: Reached target timers.target. Dec 13 01:58:46.063217 systemd[1]: Listening on iscsid.socket. Dec 13 01:58:46.063226 systemd[1]: Listening on iscsiuio.socket. Dec 13 01:58:46.063234 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 01:58:46.063241 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 01:58:46.063249 systemd[1]: Listening on systemd-journald.socket. Dec 13 01:58:46.063256 systemd[1]: Listening on systemd-networkd.socket. Dec 13 01:58:46.063264 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 01:58:46.063271 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 01:58:46.063279 systemd[1]: Reached target sockets.target. Dec 13 01:58:46.063286 systemd[1]: Starting kmod-static-nodes.service... Dec 13 01:58:46.063295 systemd[1]: Finished network-cleanup.service. Dec 13 01:58:46.063303 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 01:58:46.063310 systemd[1]: Starting systemd-journald.service... Dec 13 01:58:46.063318 systemd[1]: Starting systemd-modules-load.service... Dec 13 01:58:46.063325 systemd[1]: Starting systemd-resolved.service... Dec 13 01:58:46.063333 systemd[1]: Starting systemd-vconsole-setup.service... Dec 13 01:58:46.063340 systemd[1]: Finished kmod-static-nodes.service. Dec 13 01:58:46.063348 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 01:58:46.063355 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 01:58:46.063364 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 01:58:46.063375 systemd-journald[198]: Journal started Dec 13 01:58:46.063432 systemd-journald[198]: Runtime Journal (/run/log/journal/054f199c3827471eb2f8bdaaf26c2df9) is 6.0M, max 48.5M, 42.5M free. Dec 13 01:58:46.049282 systemd-modules-load[199]: Inserted module 'overlay' Dec 13 01:58:46.120724 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 01:58:46.120752 kernel: Bridge firewalling registered Dec 13 01:58:46.120765 systemd[1]: Started systemd-journald.service. Dec 13 01:58:46.120780 kernel: audit: type=1130 audit(1734055126.099:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:46.120793 kernel: audit: type=1130 audit(1734055126.105:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:46.120806 kernel: audit: type=1130 audit(1734055126.108:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:58:46.120820 kernel: audit: type=1130 audit(1734055126.112:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:46.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:46.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:46.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:46.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:46.074998 systemd-resolved[200]: Positive Trust Anchors: Dec 13 01:58:46.122774 kernel: SCSI subsystem initialized Dec 13 01:58:46.075007 systemd-resolved[200]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:58:46.075038 systemd-resolved[200]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 01:58:46.077340 systemd-resolved[200]: Defaulting to hostname 'linux'. Dec 13 01:58:46.088354 systemd-modules-load[199]: Inserted module 'br_netfilter' Dec 13 01:58:46.105685 systemd[1]: Started systemd-resolved.service. Dec 13 01:58:46.139518 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 01:58:46.139533 kernel: device-mapper: uevent: version 1.0.3 Dec 13 01:58:46.139543 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Dec 13 01:58:46.139552 kernel: audit: type=1130 audit(1734055126.139:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:46.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:46.109475 systemd[1]: Finished systemd-vconsole-setup.service. Dec 13 01:58:46.113089 systemd[1]: Reached target nss-lookup.target. Dec 13 01:58:46.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:46.117855 systemd[1]: Starting dracut-cmdline-ask.service... 
Dec 13 01:58:46.150810 kernel: audit: type=1130 audit(1734055126.145:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:46.133886 systemd[1]: Finished dracut-cmdline-ask.service. Dec 13 01:58:46.142953 systemd[1]: Starting dracut-cmdline.service... Dec 13 01:58:46.153753 dracut-cmdline[216]: dracut-dracut-053 Dec 13 01:58:46.143865 systemd-modules-load[199]: Inserted module 'dm_multipath' Dec 13 01:58:46.159601 kernel: audit: type=1130 audit(1734055126.154:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:46.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:46.159668 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c Dec 13 01:58:46.145250 systemd[1]: Finished systemd-modules-load.service. Dec 13 01:58:46.147183 systemd[1]: Starting systemd-sysctl.service... Dec 13 01:58:46.154706 systemd[1]: Finished systemd-sysctl.service. Dec 13 01:58:46.207999 kernel: Loading iSCSI transport class v2.0-870. Dec 13 01:58:46.224993 kernel: iscsi: registered transport (tcp) Dec 13 01:58:46.246126 kernel: iscsi: registered transport (qla4xxx) Dec 13 01:58:46.246188 kernel: QLogic iSCSI HBA Driver Dec 13 01:58:46.266280 systemd[1]: Finished dracut-cmdline.service. Dec 13 01:58:46.271739 kernel: audit: type=1130 audit(1734055126.266:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:46.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:46.267839 systemd[1]: Starting dracut-pre-udev.service... Dec 13 01:58:46.310994 kernel: raid6: avx2x4 gen() 30852 MB/s Dec 13 01:58:46.327996 kernel: raid6: avx2x4 xor() 7806 MB/s Dec 13 01:58:46.344992 kernel: raid6: avx2x2 gen() 31879 MB/s Dec 13 01:58:46.361992 kernel: raid6: avx2x2 xor() 19248 MB/s Dec 13 01:58:46.379006 kernel: raid6: avx2x1 gen() 26408 MB/s Dec 13 01:58:46.395999 kernel: raid6: avx2x1 xor() 15238 MB/s Dec 13 01:58:46.412995 kernel: raid6: sse2x4 gen() 14718 MB/s Dec 13 01:58:46.429995 kernel: raid6: sse2x4 xor() 7396 MB/s Dec 13 01:58:46.446994 kernel: raid6: sse2x2 gen() 15950 MB/s Dec 13 01:58:46.463994 kernel: raid6: sse2x2 xor() 9730 MB/s Dec 13 01:58:46.480995 kernel: raid6: sse2x1 gen() 11816 MB/s Dec 13 01:58:46.498401 kernel: raid6: sse2x1 xor() 7761 MB/s Dec 13 01:58:46.498421 kernel: raid6: using algorithm avx2x2 gen() 31879 MB/s Dec 13 01:58:46.498431 kernel: raid6: .... 
xor() 19248 MB/s, rmw enabled Dec 13 01:58:46.499127 kernel: raid6: using avx2x2 recovery algorithm Dec 13 01:58:46.510997 kernel: xor: automatically using best checksumming function avx Dec 13 01:58:46.599001 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Dec 13 01:58:46.605658 systemd[1]: Finished dracut-pre-udev.service. Dec 13 01:58:46.610391 kernel: audit: type=1130 audit(1734055126.605:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:46.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:46.609000 audit: BPF prog-id=7 op=LOAD Dec 13 01:58:46.609000 audit: BPF prog-id=8 op=LOAD Dec 13 01:58:46.610636 systemd[1]: Starting systemd-udevd.service... Dec 13 01:58:46.621849 systemd-udevd[399]: Using default interface naming scheme 'v252'. Dec 13 01:58:46.625486 systemd[1]: Started systemd-udevd.service. Dec 13 01:58:46.625000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:46.627065 systemd[1]: Starting dracut-pre-trigger.service... Dec 13 01:58:46.637126 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation Dec 13 01:58:46.657262 systemd[1]: Finished dracut-pre-trigger.service. Dec 13 01:58:46.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:46.659496 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 01:58:46.691710 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 01:58:46.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:46.719825 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Dec 13 01:58:46.727885 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 01:58:46.727908 kernel: GPT:9289727 != 19775487 Dec 13 01:58:46.727919 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 01:58:46.727931 kernel: GPT:9289727 != 19775487 Dec 13 01:58:46.727942 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 01:58:46.727954 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 01:58:46.727986 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 01:58:46.747771 kernel: libata version 3.00 loaded. Dec 13 01:58:46.769999 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (458) Dec 13 01:58:46.770056 kernel: ahci 0000:00:1f.2: version 3.0 Dec 13 01:58:46.784047 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 13 01:58:46.784064 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Dec 13 01:58:46.784150 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 13 01:58:46.784223 kernel: AVX2 version of gcm_enc/dec engaged. 
Dec 13 01:58:46.784232 kernel: AES CTR mode by8 optimization enabled Dec 13 01:58:46.784245 kernel: scsi host0: ahci Dec 13 01:58:46.784332 kernel: scsi host1: ahci Dec 13 01:58:46.784415 kernel: scsi host2: ahci Dec 13 01:58:46.784509 kernel: scsi host3: ahci Dec 13 01:58:46.784588 kernel: scsi host4: ahci Dec 13 01:58:46.784680 kernel: scsi host5: ahci Dec 13 01:58:46.784759 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Dec 13 01:58:46.784769 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Dec 13 01:58:46.784778 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Dec 13 01:58:46.784789 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Dec 13 01:58:46.784797 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Dec 13 01:58:46.784806 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Dec 13 01:58:46.772569 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Dec 13 01:58:46.810714 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Dec 13 01:58:46.815906 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Dec 13 01:58:46.818928 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Dec 13 01:58:46.821782 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 01:58:46.823517 systemd[1]: Starting disk-uuid.service... Dec 13 01:58:46.832677 disk-uuid[535]: Primary Header is updated. Dec 13 01:58:46.832677 disk-uuid[535]: Secondary Entries is updated. Dec 13 01:58:46.832677 disk-uuid[535]: Secondary Header is updated. Dec 13 01:58:46.836992 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 01:58:46.839984 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 01:58:47.095015 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Dec 13 01:58:47.095098 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 13 01:58:47.095110 kernel: ata1: SATA link down (SStatus 0 SControl 300) Dec 13 01:58:47.095121 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 13 01:58:47.096994 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 13 01:58:47.097023 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Dec 13 01:58:47.098525 kernel: ata3.00: applying bridge limits Dec 13 01:58:47.098988 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 13 01:58:47.099993 kernel: ata3.00: configured for UDMA/100 Dec 13 01:58:47.100994 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Dec 13 01:58:47.133991 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Dec 13 01:58:47.150526 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 13 01:58:47.150538 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Dec 13 01:58:47.841300 disk-uuid[536]: The operation has completed successfully. Dec 13 01:58:47.842858 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 01:58:47.863963 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 01:58:47.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:47.864000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:58:47.864086 systemd[1]: Finished disk-uuid.service. Dec 13 01:58:47.869158 systemd[1]: Starting verity-setup.service... Dec 13 01:58:47.880995 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Dec 13 01:58:47.898031 systemd[1]: Found device dev-mapper-usr.device. Dec 13 01:58:47.900285 systemd[1]: Mounting sysusr-usr.mount... Dec 13 01:58:47.902698 systemd[1]: Finished verity-setup.service. Dec 13 01:58:47.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:47.966991 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 01:58:47.967030 systemd[1]: Mounted sysusr-usr.mount. Dec 13 01:58:47.967179 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 01:58:47.967783 systemd[1]: Starting ignition-setup.service... Dec 13 01:58:47.969623 systemd[1]: Starting parse-ip-for-networkd.service... Dec 13 01:58:47.980368 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:58:47.980405 kernel: BTRFS info (device vda6): using free space tree Dec 13 01:58:47.980419 kernel: BTRFS info (device vda6): has skinny extents Dec 13 01:58:47.988437 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 01:58:47.996380 systemd[1]: Finished ignition-setup.service. Dec 13 01:58:47.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:47.997178 systemd[1]: Starting ignition-fetch-offline.service... Dec 13 01:58:48.031527 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 01:58:48.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:48.032000 audit: BPF prog-id=9 op=LOAD Dec 13 01:58:48.034019 systemd[1]: Starting systemd-networkd.service... Dec 13 01:58:48.081046 systemd-networkd[726]: lo: Link UP Dec 13 01:58:48.081056 systemd-networkd[726]: lo: Gained carrier Dec 13 01:58:48.083137 systemd-networkd[726]: Enumeration completed Dec 13 01:58:48.083326 systemd[1]: Started systemd-networkd.service. Dec 13 01:58:48.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:48.084430 systemd[1]: Reached target network.target. Dec 13 01:58:48.086560 ignition[665]: Ignition 2.14.0 Dec 13 01:58:48.086577 ignition[665]: Stage: fetch-offline Dec 13 01:58:48.087328 systemd[1]: Starting iscsiuio.service... Dec 13 01:58:48.086623 ignition[665]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:58:48.091030 systemd-networkd[726]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Dec 13 01:58:48.086633 ignition[665]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:58:48.092241 systemd-networkd[726]: eth0: Link UP Dec 13 01:58:48.086740 ignition[665]: parsed url from cmdline: "" Dec 13 01:58:48.092244 systemd-networkd[726]: eth0: Gained carrier Dec 13 01:58:48.086743 ignition[665]: no config URL provided Dec 13 01:58:48.086748 ignition[665]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:58:48.086754 ignition[665]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:58:48.086769 ignition[665]: op(1): [started] loading QEMU firmware config module Dec 13 01:58:48.086776 ignition[665]: op(1): executing: "modprobe" "qemu_fw_cfg" Dec 13 01:58:48.097295 ignition[665]: op(1): [finished] loading QEMU firmware config module Dec 13 01:58:48.099343 ignition[665]: parsing config with SHA512: 8b03c5885b13b0715d3952868b28c5885b2945811fc2cb39978695ec6b066ecf46d34981bb565ddb5cb223a6f0577e76fd8954fa8cf22931d5100056a5ff78fe Dec 13 01:58:48.112821 unknown[665]: fetched base config from "system" Dec 13 01:58:48.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:48.113228 ignition[665]: fetch-offline: fetch-offline passed Dec 13 01:58:48.112834 unknown[665]: fetched user config from "qemu" Dec 13 01:58:48.113311 ignition[665]: Ignition finished successfully Dec 13 01:58:48.114510 systemd[1]: Finished ignition-fetch-offline.service. Dec 13 01:58:48.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:48.115559 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 13 01:58:48.116233 systemd[1]: Starting ignition-kargs.service... Dec 13 01:58:48.119728 systemd[1]: Started iscsiuio.service. Dec 13 01:58:48.125582 iscsid[734]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 01:58:48.125582 iscsid[734]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Dec 13 01:58:48.125582 iscsid[734]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Dec 13 01:58:48.125582 iscsid[734]: If using hardware iscsi like qla4xxx this message can be ignored. Dec 13 01:58:48.125582 iscsid[734]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 01:58:48.125582 iscsid[734]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 01:58:48.121321 systemd[1]: Starting iscsid.service... Dec 13 01:58:48.143234 systemd[1]: Started iscsid.service. Dec 13 01:58:48.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:48.145407 systemd[1]: Starting dracut-initqueue.service... 
Dec 13 01:58:48.150070 systemd-networkd[726]: eth0: DHCPv4 address 10.0.0.55/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 01:58:48.156088 ignition[732]: Ignition 2.14.0 Dec 13 01:58:48.156095 ignition[732]: Stage: kargs Dec 13 01:58:48.156200 ignition[732]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:58:48.156208 ignition[732]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:58:48.158928 systemd[1]: Finished ignition-kargs.service. Dec 13 01:58:48.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:48.157421 ignition[732]: kargs: kargs passed Dec 13 01:58:48.161459 systemd[1]: Starting ignition-disks.service... Dec 13 01:58:48.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:48.157483 ignition[732]: Ignition finished successfully Dec 13 01:58:48.162382 systemd[1]: Finished dracut-initqueue.service. Dec 13 01:58:48.163835 systemd[1]: Reached target remote-fs-pre.target. Dec 13 01:58:48.165398 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 01:58:48.166288 systemd[1]: Reached target remote-fs.target. Dec 13 01:58:48.167739 systemd[1]: Starting dracut-pre-mount.service... Dec 13 01:58:48.176080 systemd[1]: Finished dracut-pre-mount.service. Dec 13 01:58:48.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:48.180027 ignition[748]: Ignition 2.14.0 Dec 13 01:58:48.180036 ignition[748]: Stage: disks Dec 13 01:58:48.180124 ignition[748]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:58:48.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:48.181437 systemd[1]: Finished ignition-disks.service. Dec 13 01:58:48.180132 ignition[748]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:58:48.182429 systemd[1]: Reached target initrd-root-device.target. Dec 13 01:58:48.180804 ignition[748]: disks: disks passed Dec 13 01:58:48.184331 systemd[1]: Reached target local-fs-pre.target. Dec 13 01:58:48.180834 ignition[748]: Ignition finished successfully Dec 13 01:58:48.185248 systemd[1]: Reached target local-fs.target. Dec 13 01:58:48.186114 systemd[1]: Reached target sysinit.target. Dec 13 01:58:48.187803 systemd[1]: Reached target basic.target. Dec 13 01:58:48.189476 systemd[1]: Starting systemd-fsck-root.service... Dec 13 01:58:48.206282 systemd-fsck[761]: ROOT: clean, 621/553520 files, 56021/553472 blocks Dec 13 01:58:48.211597 systemd[1]: Finished systemd-fsck-root.service. Dec 13 01:58:48.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:48.213242 systemd[1]: Mounting sysroot.mount... Dec 13 01:58:48.218989 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 01:58:48.219445 systemd[1]: Mounted sysroot.mount. Dec 13 01:58:48.220194 systemd[1]: Reached target initrd-root-fs.target. 
Dec 13 01:58:48.222425 systemd[1]: Mounting sysroot-usr.mount... Dec 13 01:58:48.223385 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Dec 13 01:58:48.223411 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 01:58:48.223428 systemd[1]: Reached target ignition-diskful.target. Dec 13 01:58:48.225414 systemd[1]: Mounted sysroot-usr.mount. Dec 13 01:58:48.227467 systemd[1]: Starting initrd-setup-root.service... Dec 13 01:58:48.232089 initrd-setup-root[771]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 01:58:48.233808 initrd-setup-root[779]: cut: /sysroot/etc/group: No such file or directory Dec 13 01:58:48.236342 initrd-setup-root[787]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 01:58:48.238429 initrd-setup-root[795]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 01:58:48.260687 systemd[1]: Finished initrd-setup-root.service. Dec 13 01:58:48.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:48.262200 systemd[1]: Starting ignition-mount.service... Dec 13 01:58:48.263480 systemd[1]: Starting sysroot-boot.service... Dec 13 01:58:48.266752 bash[812]: umount: /sysroot/usr/share/oem: not mounted. Dec 13 01:58:48.306669 systemd[1]: Finished sysroot-boot.service. Dec 13 01:58:48.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:48.309627 ignition[813]: INFO : Ignition 2.14.0 Dec 13 01:58:48.309627 ignition[813]: INFO : Stage: mount Dec 13 01:58:48.312045 ignition[813]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:58:48.312045 ignition[813]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:58:48.312045 ignition[813]: INFO : mount: mount passed Dec 13 01:58:48.312045 ignition[813]: INFO : Ignition finished successfully Dec 13 01:58:48.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:48.310937 systemd[1]: Finished ignition-mount.service. Dec 13 01:58:48.430409 systemd-resolved[200]: Detected conflict on linux IN A 10.0.0.55 Dec 13 01:58:48.430423 systemd-resolved[200]: Hostname conflict, changing published hostname from 'linux' to 'linux3'. Dec 13 01:58:48.913525 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 01:58:48.920716 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (822) Dec 13 01:58:48.920752 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:58:48.920763 kernel: BTRFS info (device vda6): using free space tree Dec 13 01:58:48.922335 kernel: BTRFS info (device vda6): has skinny extents Dec 13 01:58:48.925583 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 01:58:48.927988 systemd[1]: Starting ignition-files.service... 
Dec 13 01:58:48.940849 ignition[842]: INFO : Ignition 2.14.0 Dec 13 01:58:48.940849 ignition[842]: INFO : Stage: files Dec 13 01:58:48.942558 ignition[842]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:58:48.942558 ignition[842]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:58:48.945244 ignition[842]: DEBUG : files: compiled without relabeling support, skipping Dec 13 01:58:48.946550 ignition[842]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 01:58:48.946550 ignition[842]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 01:58:48.949947 ignition[842]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 01:58:48.951366 ignition[842]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 01:58:48.953061 unknown[842]: wrote ssh authorized keys file for user: core Dec 13 01:58:48.954173 ignition[842]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 01:58:48.955662 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Dec 13 01:58:48.957462 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 01:58:48.959208 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:58:48.960957 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:58:48.962713 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:58:48.965156 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:58:48.965156 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:58:48.970754 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Dec 13 01:58:49.317654 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Dec 13 01:58:49.414102 systemd-networkd[726]: eth0: Gained IPv6LL Dec 13 01:58:50.046573 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:58:50.046573 ignition[842]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" Dec 13 01:58:50.051237 ignition[842]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 01:58:50.051237 ignition[842]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 01:58:50.051237 ignition[842]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" Dec 13 01:58:50.051237 ignition[842]: INFO : files: op(9): [started] setting preset to disabled for 
"coreos-metadata.service" Dec 13 01:58:50.051237 ignition[842]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 01:58:50.084164 ignition[842]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 01:58:50.086059 ignition[842]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" Dec 13 01:58:50.086059 ignition[842]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:58:50.086059 ignition[842]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:58:50.086059 ignition[842]: INFO : files: files passed Dec 13 01:58:50.086059 ignition[842]: INFO : Ignition finished successfully Dec 13 01:58:50.094064 systemd[1]: Finished ignition-files.service. Dec 13 01:58:50.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:50.096188 systemd[1]: Starting initrd-setup-root-after-ignition.service... Dec 13 01:58:50.096286 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Dec 13 01:58:50.096939 systemd[1]: Starting ignition-quench.service... Dec 13 01:58:50.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:50.102000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:50.101559 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 01:58:50.106296 initrd-setup-root-after-ignition[867]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Dec 13 01:58:50.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:50.101642 systemd[1]: Finished ignition-quench.service. Dec 13 01:58:50.110728 initrd-setup-root-after-ignition[869]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:58:50.103330 systemd[1]: Finished initrd-setup-root-after-ignition.service. Dec 13 01:58:50.106409 systemd[1]: Reached target ignition-complete.target. Dec 13 01:58:50.109777 systemd[1]: Starting initrd-parse-etc.service... Dec 13 01:58:50.123669 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 01:58:50.123748 systemd[1]: Finished initrd-parse-etc.service. Dec 13 01:58:50.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:50.124000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:50.125823 systemd[1]: Reached target initrd-fs.target. Dec 13 01:58:50.127434 systemd[1]: Reached target initrd.target. 
Dec 13 01:58:50.128227 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 01:58:50.128787 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 01:58:50.137907 systemd[1]: Finished dracut-pre-pivot.service. Dec 13 01:58:50.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:50.139369 systemd[1]: Starting initrd-cleanup.service... Dec 13 01:58:50.147488 systemd[1]: Stopped target nss-lookup.target. Dec 13 01:58:50.148421 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 01:58:50.150098 systemd[1]: Stopped target timers.target. Dec 13 01:58:50.151729 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 01:58:50.153000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:50.151816 systemd[1]: Stopped dracut-pre-pivot.service. Dec 13 01:58:50.153379 systemd[1]: Stopped target initrd.target. Dec 13 01:58:50.155051 systemd[1]: Stopped target basic.target. Dec 13 01:58:50.156626 systemd[1]: Stopped target ignition-complete.target. Dec 13 01:58:50.158260 systemd[1]: Stopped target ignition-diskful.target. Dec 13 01:58:50.159881 systemd[1]: Stopped target initrd-root-device.target. Dec 13 01:58:50.161667 systemd[1]: Stopped target remote-fs.target. Dec 13 01:58:50.163339 systemd[1]: Stopped target remote-fs-pre.target. Dec 13 01:58:50.165126 systemd[1]: Stopped target sysinit.target. Dec 13 01:58:50.166700 systemd[1]: Stopped target local-fs.target. Dec 13 01:58:50.168319 systemd[1]: Stopped target local-fs-pre.target. Dec 13 01:58:50.169922 systemd[1]: Stopped target swap.target. Dec 13 01:58:50.178244 kernel: kauditd_printk_skb: 31 callbacks suppressed Dec 13 01:58:50.178265 kernel: audit: type=1131 audit(1734055130.172:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:50.172000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:50.171389 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 01:58:50.171486 systemd[1]: Stopped dracut-pre-mount.service. Dec 13 01:58:50.184808 kernel: audit: type=1131 audit(1734055130.180:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:50.180000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:50.173089 systemd[1]: Stopped target cryptsetup.target. Dec 13 01:58:50.189431 kernel: audit: type=1131 audit(1734055130.184:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:50.184000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Dec 13 01:58:50.178281 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 01:58:50.178360 systemd[1]: Stopped dracut-initqueue.service. Dec 13 01:58:50.180187 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 01:58:50.180267 systemd[1]: Stopped ignition-fetch-offline.service. Dec 13 01:58:50.184909 systemd[1]: Stopped target paths.target. Dec 13 01:58:50.189480 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 01:58:50.193042 systemd[1]: Stopped systemd-ask-password-console.path. Dec 13 01:58:50.194302 systemd[1]: Stopped target slices.target. Dec 13 01:58:50.204153 kernel: audit: type=1131 audit(1734055130.198:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:50.198000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:50.195782 systemd[1]: Stopped target sockets.target. Dec 13 01:58:50.209048 kernel: audit: type=1131 audit(1734055130.204:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:50.204000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:50.197660 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 01:58:50.210983 iscsid[734]: iscsid shutting down. Dec 13 01:58:50.197796 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Dec 13 01:58:50.212865 ignition[882]: INFO : Ignition 2.14.0 Dec 13 01:58:50.212865 ignition[882]: INFO : Stage: umount Dec 13 01:58:50.212865 ignition[882]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:58:50.212865 ignition[882]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:58:50.212865 ignition[882]: INFO : umount: umount passed Dec 13 01:58:50.212865 ignition[882]: INFO : Ignition finished successfully Dec 13 01:58:50.235648 kernel: audit: type=1131 audit(1734055130.215:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:50.235667 kernel: audit: type=1131 audit(1734055130.219:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:50.235678 kernel: audit: type=1131 audit(1734055130.225:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:50.235694 kernel: audit: type=1131 audit(1734055130.230:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:50.215000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:58:50.219000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:50.225000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:50.230000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:50.236000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:50.199559 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 01:58:50.242052 kernel: audit: type=1131 audit(1734055130.236:51): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:50.241000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:50.199698 systemd[1]: Stopped ignition-files.service. Dec 13 01:58:50.241000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:50.205337 systemd[1]: Stopping ignition-mount.service... Dec 13 01:58:50.209433 systemd[1]: Stopping iscsid.service... Dec 13 01:58:50.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:50.245000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:50.212122 systemd[1]: Stopping sysroot-boot.service... Dec 13 01:58:50.214402 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 01:58:50.247000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:50.214606 systemd[1]: Stopped systemd-udev-trigger.service. Dec 13 01:58:50.216141 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 01:58:50.216343 systemd[1]: Stopped dracut-pre-trigger.service. Dec 13 01:58:50.223675 systemd[1]: iscsid.service: Deactivated successfully. Dec 13 01:58:50.223801 systemd[1]: Stopped iscsid.service. Dec 13 01:58:50.226600 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 01:58:50.226695 systemd[1]: Stopped ignition-mount.service. Dec 13 01:58:50.231687 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 01:58:50.232211 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 01:58:50.232280 systemd[1]: Closed iscsid.socket. Dec 13 01:58:50.261000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:58:50.235688 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 01:58:50.235740 systemd[1]: Stopped ignition-disks.service. Dec 13 01:58:50.236617 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 01:58:50.236647 systemd[1]: Stopped ignition-kargs.service. Dec 13 01:58:50.242097 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 01:58:50.268000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:50.270000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:50.242133 systemd[1]: Stopped ignition-setup.service. Dec 13 01:58:50.271000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:50.243084 systemd[1]: Stopping iscsiuio.service... Dec 13 01:58:50.244824 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 01:58:50.244895 systemd[1]: Finished initrd-cleanup.service. Dec 13 01:58:50.247141 systemd[1]: iscsiuio.service: Deactivated successfully. Dec 13 01:58:50.247219 systemd[1]: Stopped iscsiuio.service. Dec 13 01:58:50.248678 systemd[1]: Stopped target network.target. Dec 13 01:58:50.277000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:50.250418 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 01:58:50.250447 systemd[1]: Closed iscsiuio.socket. Dec 13 01:58:50.252056 systemd[1]: Stopping systemd-networkd.service... Dec 13 01:58:50.281000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:50.282000 audit: BPF prog-id=6 op=UNLOAD Dec 13 01:58:50.253821 systemd[1]: Stopping systemd-resolved.service... Dec 13 01:58:50.259016 systemd-networkd[726]: eth0: DHCPv6 lease lost Dec 13 01:58:50.283000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:50.283000 audit: BPF prog-id=9 op=UNLOAD Dec 13 01:58:50.260348 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 01:58:50.260424 systemd[1]: Stopped systemd-networkd.service. Dec 13 01:58:50.264064 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 01:58:50.289000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:50.291000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:50.264091 systemd[1]: Closed systemd-networkd.socket. Dec 13 01:58:50.266354 systemd[1]: Stopping network-cleanup.service... 
Dec 13 01:58:50.292000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:50.267297 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 01:58:50.294000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:50.297000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:50.267339 systemd[1]: Stopped parse-ip-for-networkd.service. Dec 13 01:58:50.298000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:50.269150 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:58:50.269181 systemd[1]: Stopped systemd-sysctl.service. Dec 13 01:58:50.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:50.301000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:50.271137 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 01:58:50.271168 systemd[1]: Stopped systemd-modules-load.service. Dec 13 01:58:50.272838 systemd[1]: Stopping systemd-udevd.service... Dec 13 01:58:50.276452 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 01:58:50.276927 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 01:58:50.277063 systemd[1]: Stopped systemd-resolved.service. Dec 13 01:58:50.280750 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 01:58:50.280888 systemd[1]: Stopped systemd-udevd.service. Dec 13 01:58:50.283358 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 01:58:50.283430 systemd[1]: Stopped network-cleanup.service. Dec 13 01:58:50.285103 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 01:58:50.285145 systemd[1]: Closed systemd-udevd-control.socket. Dec 13 01:58:50.313000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:50.286758 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 01:58:50.286790 systemd[1]: Closed systemd-udevd-kernel.socket. Dec 13 01:58:50.316000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:50.288647 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 01:58:50.288682 systemd[1]: Stopped dracut-pre-udev.service. Dec 13 01:58:50.290400 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 01:58:50.290432 systemd[1]: Stopped dracut-cmdline.service. 
Dec 13 01:58:50.292161 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:58:50.292191 systemd[1]: Stopped dracut-cmdline-ask.service. Dec 13 01:58:50.294431 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Dec 13 01:58:50.295443 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 01:58:50.295485 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Dec 13 01:58:50.296471 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 01:58:50.296503 systemd[1]: Stopped kmod-static-nodes.service. Dec 13 01:58:50.298261 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:58:50.298293 systemd[1]: Stopped systemd-vconsole-setup.service. Dec 13 01:58:50.299938 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Dec 13 01:58:50.300335 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 01:58:50.300400 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Dec 13 01:58:50.312459 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 01:58:50.312553 systemd[1]: Stopped sysroot-boot.service. Dec 13 01:58:50.314465 systemd[1]: Reached target initrd-switch-root.target. Dec 13 01:58:50.315999 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 01:58:50.316038 systemd[1]: Stopped initrd-setup-root.service. Dec 13 01:58:50.318521 systemd[1]: Starting initrd-switch-root.service... Dec 13 01:58:50.334894 systemd[1]: Switching root. Dec 13 01:58:50.352560 systemd-journald[198]: Journal stopped Dec 13 01:58:54.263696 systemd-journald[198]: Received SIGTERM from PID 1 (systemd). Dec 13 01:58:54.263759 kernel: SELinux: Class mctp_socket not defined in policy. Dec 13 01:58:54.263781 kernel: SELinux: Class anon_inode not defined in policy. Dec 13 01:58:54.263796 kernel: SELinux: the above unknown classes and permissions will be allowed Dec 13 01:58:54.263810 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 01:58:54.263823 kernel: SELinux: policy capability open_perms=1 Dec 13 01:58:54.263838 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 01:58:54.263854 kernel: SELinux: policy capability always_check_network=0 Dec 13 01:58:54.263868 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 01:58:54.263882 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 01:58:54.263901 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 01:58:54.263914 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 01:58:54.263929 systemd[1]: Successfully loaded SELinux policy in 38.944ms. Dec 13 01:58:54.263950 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.882ms. Dec 13 01:58:54.263979 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 01:58:54.264004 systemd[1]: Detected virtualization kvm. Dec 13 01:58:54.264020 systemd[1]: Detected architecture x86-64. Dec 13 01:58:54.264034 systemd[1]: Detected first boot. Dec 13 01:58:54.264048 systemd[1]: Initializing machine ID from VM UUID. Dec 13 01:58:54.264063 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). 
Dec 13 01:58:54.264078 systemd[1]: Populated /etc with preset unit settings. Dec 13 01:58:54.264093 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 01:58:54.264113 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 01:58:54.265131 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:58:54.265149 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 01:58:54.265164 systemd[1]: Stopped initrd-switch-root.service. Dec 13 01:58:54.265179 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 01:58:54.265198 systemd[1]: Created slice system-addon\x2dconfig.slice. Dec 13 01:58:54.265213 systemd[1]: Created slice system-addon\x2drun.slice. Dec 13 01:58:54.265227 systemd[1]: Created slice system-getty.slice. Dec 13 01:58:54.265242 systemd[1]: Created slice system-modprobe.slice. Dec 13 01:58:54.265259 systemd[1]: Created slice system-serial\x2dgetty.slice. Dec 13 01:58:54.265274 systemd[1]: Created slice system-system\x2dcloudinit.slice. Dec 13 01:58:54.265289 systemd[1]: Created slice system-systemd\x2dfsck.slice. Dec 13 01:58:54.265303 systemd[1]: Created slice user.slice. Dec 13 01:58:54.265318 systemd[1]: Started systemd-ask-password-console.path. Dec 13 01:58:54.265335 systemd[1]: Started systemd-ask-password-wall.path. Dec 13 01:58:54.265353 systemd[1]: Set up automount boot.automount. Dec 13 01:58:54.265369 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Dec 13 01:58:54.265383 systemd[1]: Stopped target initrd-switch-root.target. Dec 13 01:58:54.265409 systemd[1]: Stopped target initrd-fs.target. Dec 13 01:58:54.265425 systemd[1]: Stopped target initrd-root-fs.target. Dec 13 01:58:54.265439 systemd[1]: Reached target integritysetup.target. Dec 13 01:58:54.265459 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 01:58:54.265476 systemd[1]: Reached target remote-fs.target. Dec 13 01:58:54.265491 systemd[1]: Reached target slices.target. Dec 13 01:58:54.265505 systemd[1]: Reached target swap.target. Dec 13 01:58:54.265519 systemd[1]: Reached target torcx.target. Dec 13 01:58:54.265534 systemd[1]: Reached target veritysetup.target. Dec 13 01:58:54.265549 systemd[1]: Listening on systemd-coredump.socket. Dec 13 01:58:54.265564 systemd[1]: Listening on systemd-initctl.socket. Dec 13 01:58:54.265578 systemd[1]: Listening on systemd-networkd.socket. Dec 13 01:58:54.265592 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 01:58:54.265607 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 01:58:54.265629 systemd[1]: Listening on systemd-userdbd.socket. Dec 13 01:58:54.265644 systemd[1]: Mounting dev-hugepages.mount... Dec 13 01:58:54.265659 systemd[1]: Mounting dev-mqueue.mount... Dec 13 01:58:54.265674 systemd[1]: Mounting media.mount... Dec 13 01:58:54.265689 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:58:54.265704 systemd[1]: Mounting sys-kernel-debug.mount... Dec 13 01:58:54.265718 systemd[1]: Mounting sys-kernel-tracing.mount... Dec 13 01:58:54.265732 systemd[1]: Mounting tmp.mount... 
Dec 13 01:58:54.265747 systemd[1]: Starting flatcar-tmpfiles.service... Dec 13 01:58:54.265765 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 01:58:54.265780 systemd[1]: Starting kmod-static-nodes.service... Dec 13 01:58:54.265795 systemd[1]: Starting modprobe@configfs.service... Dec 13 01:58:54.265810 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 01:58:54.265832 systemd[1]: Starting modprobe@drm.service... Dec 13 01:58:54.265847 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 01:58:54.265861 systemd[1]: Starting modprobe@fuse.service... Dec 13 01:58:54.265875 systemd[1]: Starting modprobe@loop.service... Dec 13 01:58:54.265890 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 01:58:54.265908 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 01:58:54.265923 systemd[1]: Stopped systemd-fsck-root.service. Dec 13 01:58:54.265937 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 01:58:54.265951 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 01:58:54.265981 systemd[1]: Stopped systemd-journald.service. Dec 13 01:58:54.265997 systemd[1]: Starting systemd-journald.service... Dec 13 01:58:54.266011 kernel: loop: module loaded Dec 13 01:58:54.267035 systemd[1]: Starting systemd-modules-load.service... Dec 13 01:58:54.267058 kernel: fuse: init (API version 7.34) Dec 13 01:58:54.267077 systemd[1]: Starting systemd-network-generator.service... Dec 13 01:58:54.267093 systemd[1]: Starting systemd-remount-fs.service... Dec 13 01:58:54.267109 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 01:58:54.267124 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 01:58:54.267139 systemd[1]: Stopped verity-setup.service. Dec 13 01:58:54.267154 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:58:54.267192 systemd[1]: Mounted dev-hugepages.mount. Dec 13 01:58:54.267207 systemd[1]: Mounted dev-mqueue.mount. Dec 13 01:58:54.267221 systemd[1]: Mounted media.mount. Dec 13 01:58:54.267239 systemd[1]: Mounted sys-kernel-debug.mount. Dec 13 01:58:54.267254 systemd[1]: Mounted sys-kernel-tracing.mount. Dec 13 01:58:54.267269 systemd[1]: Mounted tmp.mount. Dec 13 01:58:54.267288 systemd-journald[976]: Journal started Dec 13 01:58:54.267336 systemd-journald[976]: Runtime Journal (/run/log/journal/054f199c3827471eb2f8bdaaf26c2df9) is 6.0M, max 48.5M, 42.5M free. 
Dec 13 01:58:50.409000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 01:58:50.585000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 01:58:50.585000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 01:58:50.585000 audit: BPF prog-id=10 op=LOAD Dec 13 01:58:50.585000 audit: BPF prog-id=10 op=UNLOAD Dec 13 01:58:50.585000 audit: BPF prog-id=11 op=LOAD Dec 13 01:58:50.585000 audit: BPF prog-id=11 op=UNLOAD Dec 13 01:58:50.642000 audit[915]: AVC avc: denied { associate } for pid=915 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Dec 13 01:58:50.642000 audit[915]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001058e2 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=898 pid=915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:50.642000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 01:58:50.643000 audit[915]: AVC avc: denied { associate } for pid=915 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Dec 13 01:58:50.643000 audit[915]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001059b9 a2=1ed a3=0 items=2 ppid=898 pid=915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:50.643000 audit: CWD cwd="/" Dec 13 01:58:50.643000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:50.643000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:50.643000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 01:58:54.076000 audit: BPF prog-id=12 op=LOAD Dec 13 01:58:54.076000 audit: BPF prog-id=3 op=UNLOAD Dec 13 01:58:54.076000 audit: BPF prog-id=13 op=LOAD Dec 13 01:58:54.076000 audit: BPF prog-id=14 op=LOAD Dec 13 01:58:54.076000 audit: BPF prog-id=4 op=UNLOAD Dec 13 01:58:54.076000 audit: BPF prog-id=5 op=UNLOAD Dec 13 01:58:54.076000 audit: BPF prog-id=15 op=LOAD Dec 13 01:58:54.076000 audit: BPF prog-id=12 op=UNLOAD Dec 13 
01:58:54.076000 audit: BPF prog-id=16 op=LOAD Dec 13 01:58:54.076000 audit: BPF prog-id=17 op=LOAD Dec 13 01:58:54.076000 audit: BPF prog-id=13 op=UNLOAD Dec 13 01:58:54.076000 audit: BPF prog-id=14 op=UNLOAD Dec 13 01:58:54.077000 audit: BPF prog-id=18 op=LOAD Dec 13 01:58:54.077000 audit: BPF prog-id=15 op=UNLOAD Dec 13 01:58:54.077000 audit: BPF prog-id=19 op=LOAD Dec 13 01:58:54.077000 audit: BPF prog-id=20 op=LOAD Dec 13 01:58:54.077000 audit: BPF prog-id=16 op=UNLOAD Dec 13 01:58:54.077000 audit: BPF prog-id=17 op=UNLOAD Dec 13 01:58:54.078000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:54.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:54.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:54.089000 audit: BPF prog-id=18 op=UNLOAD Dec 13 01:58:54.179000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:54.182000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:54.183000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:54.183000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:54.234000 audit: BPF prog-id=21 op=LOAD Dec 13 01:58:54.270393 systemd[1]: Started systemd-journald.service. Dec 13 01:58:54.234000 audit: BPF prog-id=22 op=LOAD Dec 13 01:58:54.234000 audit: BPF prog-id=23 op=LOAD Dec 13 01:58:54.234000 audit: BPF prog-id=19 op=UNLOAD Dec 13 01:58:54.234000 audit: BPF prog-id=20 op=UNLOAD Dec 13 01:58:54.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:54.259000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 01:58:54.259000 audit[976]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffe37e5bf70 a2=4000 a3=7ffe37e5c00c items=0 ppid=1 pid=976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:54.259000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 01:58:54.075254 systemd[1]: Queued start job for default target multi-user.target. 
Dec 13 01:58:50.641573 /usr/lib/systemd/system-generators/torcx-generator[915]: time="2024-12-13T01:58:50Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 01:58:54.075267 systemd[1]: Unnecessary job was removed for dev-vda6.device. Dec 13 01:58:50.641776 /usr/lib/systemd/system-generators/torcx-generator[915]: time="2024-12-13T01:58:50Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 01:58:54.078721 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 01:58:50.641792 /usr/lib/systemd/system-generators/torcx-generator[915]: time="2024-12-13T01:58:50Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 01:58:50.641819 /usr/lib/systemd/system-generators/torcx-generator[915]: time="2024-12-13T01:58:50Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Dec 13 01:58:50.641828 /usr/lib/systemd/system-generators/torcx-generator[915]: time="2024-12-13T01:58:50Z" level=debug msg="skipped missing lower profile" missing profile=oem Dec 13 01:58:50.641853 /usr/lib/systemd/system-generators/torcx-generator[915]: time="2024-12-13T01:58:50Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Dec 13 01:58:50.641865 /usr/lib/systemd/system-generators/torcx-generator[915]: time="2024-12-13T01:58:50Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Dec 13 01:58:50.642066 /usr/lib/systemd/system-generators/torcx-generator[915]: time="2024-12-13T01:58:50Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Dec 13 01:58:50.642099 /usr/lib/systemd/system-generators/torcx-generator[915]: time="2024-12-13T01:58:50Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 01:58:50.642112 /usr/lib/systemd/system-generators/torcx-generator[915]: time="2024-12-13T01:58:50Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 01:58:50.642731 /usr/lib/systemd/system-generators/torcx-generator[915]: time="2024-12-13T01:58:50Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Dec 13 01:58:54.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:58:50.642765 /usr/lib/systemd/system-generators/torcx-generator[915]: time="2024-12-13T01:58:50Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Dec 13 01:58:50.642780 /usr/lib/systemd/system-generators/torcx-generator[915]: time="2024-12-13T01:58:50Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6 Dec 13 01:58:50.642793 /usr/lib/systemd/system-generators/torcx-generator[915]: time="2024-12-13T01:58:50Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Dec 13 01:58:54.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:54.272351 systemd[1]: Finished kmod-static-nodes.service. Dec 13 01:58:50.642807 /usr/lib/systemd/system-generators/torcx-generator[915]: time="2024-12-13T01:58:50Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6 Dec 13 01:58:50.642819 /usr/lib/systemd/system-generators/torcx-generator[915]: time="2024-12-13T01:58:50Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Dec 13 01:58:54.273560 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 01:58:54.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:54.273000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:53.817043 /usr/lib/systemd/system-generators/torcx-generator[915]: time="2024-12-13T01:58:53Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 01:58:54.273685 systemd[1]: Finished modprobe@configfs.service. Dec 13 01:58:53.817290 /usr/lib/systemd/system-generators/torcx-generator[915]: time="2024-12-13T01:58:53Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 01:58:54.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:54.275000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:54.274897 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Dec 13 01:58:53.817377 /usr/lib/systemd/system-generators/torcx-generator[915]: time="2024-12-13T01:58:53Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 01:58:54.275020 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 01:58:53.817526 /usr/lib/systemd/system-generators/torcx-generator[915]: time="2024-12-13T01:58:53Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 01:58:54.276245 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:58:53.817577 /usr/lib/systemd/system-generators/torcx-generator[915]: time="2024-12-13T01:58:53Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Dec 13 01:58:53.817629 /usr/lib/systemd/system-generators/torcx-generator[915]: time="2024-12-13T01:58:53Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Dec 13 01:58:54.276475 systemd[1]: Finished modprobe@drm.service. Dec 13 01:58:54.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:54.276000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:54.277627 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:58:54.277738 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 01:58:54.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:54.277000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:54.278833 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 01:58:54.278940 systemd[1]: Finished modprobe@fuse.service. Dec 13 01:58:54.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:54.278000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:54.280002 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:58:54.280118 systemd[1]: Finished modprobe@loop.service. 
Dec 13 01:58:54.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:54.280000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:54.281295 systemd[1]: Finished systemd-modules-load.service. Dec 13 01:58:54.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:54.282540 systemd[1]: Finished systemd-network-generator.service. Dec 13 01:58:54.282000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:54.283747 systemd[1]: Finished systemd-remount-fs.service. Dec 13 01:58:54.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:54.284903 systemd[1]: Reached target network-pre.target. Dec 13 01:58:54.286595 systemd[1]: Mounting sys-fs-fuse-connections.mount... Dec 13 01:58:54.288290 systemd[1]: Mounting sys-kernel-config.mount... Dec 13 01:58:54.289170 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 01:58:54.290340 systemd[1]: Starting systemd-hwdb-update.service... Dec 13 01:58:54.292068 systemd[1]: Starting systemd-journal-flush.service... Dec 13 01:58:54.293366 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:58:54.294196 systemd[1]: Starting systemd-random-seed.service... Dec 13 01:58:54.295175 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 01:58:54.296087 systemd[1]: Starting systemd-sysctl.service... Dec 13 01:58:54.298210 systemd-journald[976]: Time spent on flushing to /var/log/journal/054f199c3827471eb2f8bdaaf26c2df9 is 17.205ms for 1090 entries. Dec 13 01:58:54.298210 systemd-journald[976]: System Journal (/var/log/journal/054f199c3827471eb2f8bdaaf26c2df9) is 8.0M, max 195.6M, 187.6M free. Dec 13 01:58:54.324438 systemd-journald[976]: Received client request to flush runtime journal. Dec 13 01:58:54.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:54.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:54.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:58:54.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:54.298500 systemd[1]: Mounted sys-fs-fuse-connections.mount. Dec 13 01:58:54.325191 udevadm[1016]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 01:58:54.302020 systemd[1]: Mounted sys-kernel-config.mount. Dec 13 01:58:54.306163 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 01:58:54.308272 systemd[1]: Starting systemd-udev-settle.service... Dec 13 01:58:54.309428 systemd[1]: Finished systemd-random-seed.service. Dec 13 01:58:54.310555 systemd[1]: Reached target first-boot-complete.target. Dec 13 01:58:54.314241 systemd[1]: Finished flatcar-tmpfiles.service. Dec 13 01:58:54.316942 systemd[1]: Starting systemd-sysusers.service... Dec 13 01:58:54.320305 systemd[1]: Finished systemd-sysctl.service. Dec 13 01:58:54.325123 systemd[1]: Finished systemd-journal-flush.service. Dec 13 01:58:54.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:54.332607 systemd[1]: Finished systemd-sysusers.service. Dec 13 01:58:54.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:54.334551 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 01:58:54.351169 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 01:58:54.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:54.965419 systemd[1]: Finished systemd-hwdb-update.service. Dec 13 01:58:54.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:54.966000 audit: BPF prog-id=24 op=LOAD Dec 13 01:58:54.966000 audit: BPF prog-id=25 op=LOAD Dec 13 01:58:54.966000 audit: BPF prog-id=7 op=UNLOAD Dec 13 01:58:54.966000 audit: BPF prog-id=8 op=UNLOAD Dec 13 01:58:54.967925 systemd[1]: Starting systemd-udevd.service... Dec 13 01:58:54.984296 systemd-udevd[1023]: Using default interface naming scheme 'v252'. Dec 13 01:58:54.997530 systemd[1]: Started systemd-udevd.service. Dec 13 01:58:54.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:54.998000 audit: BPF prog-id=26 op=LOAD Dec 13 01:58:55.000650 systemd[1]: Starting systemd-networkd.service... Dec 13 01:58:55.007000 audit: BPF prog-id=27 op=LOAD Dec 13 01:58:55.007000 audit: BPF prog-id=28 op=LOAD Dec 13 01:58:55.008000 audit: BPF prog-id=29 op=LOAD Dec 13 01:58:55.009928 systemd[1]: Starting systemd-userdbd.service... Dec 13 01:58:55.021211 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. 
Dec 13 01:58:55.045928 systemd[1]: Started systemd-userdbd.service. Dec 13 01:58:55.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:55.062731 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 01:58:55.069024 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Dec 13 01:58:55.077001 kernel: ACPI: button: Power Button [PWRF] Dec 13 01:58:55.086648 systemd-networkd[1032]: lo: Link UP Dec 13 01:58:55.086656 systemd-networkd[1032]: lo: Gained carrier Dec 13 01:58:55.087007 systemd-networkd[1032]: Enumeration completed Dec 13 01:58:55.087112 systemd[1]: Started systemd-networkd.service. Dec 13 01:58:55.087117 systemd-networkd[1032]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:58:55.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:55.088957 systemd-networkd[1032]: eth0: Link UP Dec 13 01:58:55.088965 systemd-networkd[1032]: eth0: Gained carrier Dec 13 01:58:55.090000 audit[1027]: AVC avc: denied { confidentiality } for pid=1027 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 01:58:55.113147 systemd-networkd[1032]: eth0: DHCPv4 address 10.0.0.55/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 01:58:55.090000 audit[1027]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55b5d3b930e0 a1=337fc a2=7fa7bbb44bc5 a3=5 items=110 ppid=1023 pid=1027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:55.090000 audit: CWD cwd="/" Dec 13 01:58:55.090000 audit: PATH item=0 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=1 name=(null) inode=12096 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=2 name=(null) inode=12096 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=3 name=(null) inode=12097 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=4 name=(null) inode=12096 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=5 name=(null) inode=12098 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=6 name=(null) inode=12096 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=7 name=(null) inode=12099 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=8 name=(null) inode=12099 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=9 name=(null) inode=12100 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=10 name=(null) inode=12099 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=11 name=(null) inode=12101 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=12 name=(null) inode=12099 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=13 name=(null) inode=12102 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=14 name=(null) inode=12099 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=15 name=(null) inode=12103 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=16 name=(null) inode=12099 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=17 name=(null) inode=12104 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=18 name=(null) inode=12096 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=19 name=(null) inode=12105 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=20 name=(null) inode=12105 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=21 name=(null) inode=12106 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=22 name=(null) inode=12105 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=23 name=(null) 
inode=12107 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=24 name=(null) inode=12105 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=25 name=(null) inode=12108 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=26 name=(null) inode=12105 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=27 name=(null) inode=12109 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=28 name=(null) inode=12105 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=29 name=(null) inode=12110 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=30 name=(null) inode=12096 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=31 name=(null) inode=12111 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=32 name=(null) inode=12111 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=33 name=(null) inode=12112 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=34 name=(null) inode=12111 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=35 name=(null) inode=12113 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=36 name=(null) inode=12111 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=37 name=(null) inode=12114 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=38 name=(null) inode=12111 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=39 name=(null) inode=12115 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=40 name=(null) inode=12111 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=41 name=(null) inode=12116 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=42 name=(null) inode=12096 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=43 name=(null) inode=12117 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=44 name=(null) inode=12117 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=45 name=(null) inode=12118 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=46 name=(null) inode=12117 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=47 name=(null) inode=12119 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=48 name=(null) inode=12117 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=49 name=(null) inode=12120 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=50 name=(null) inode=12117 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=51 name=(null) inode=12121 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=52 name=(null) inode=12117 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=53 name=(null) inode=12122 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=54 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=55 name=(null) inode=12123 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=56 name=(null) inode=12123 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=57 name=(null) inode=12124 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=58 name=(null) inode=12123 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=59 name=(null) inode=12125 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=60 name=(null) inode=12123 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=61 name=(null) inode=12126 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=62 name=(null) inode=12126 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=63 name=(null) inode=12127 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=64 name=(null) inode=12126 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=65 name=(null) inode=12128 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=66 name=(null) inode=12126 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=67 name=(null) inode=12129 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=68 name=(null) inode=12126 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=69 name=(null) inode=12130 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=70 name=(null) inode=12126 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=71 name=(null) inode=12131 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=72 
name=(null) inode=12123 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=73 name=(null) inode=12132 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=74 name=(null) inode=12132 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=75 name=(null) inode=12133 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=76 name=(null) inode=12132 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=77 name=(null) inode=12134 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=78 name=(null) inode=12132 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=79 name=(null) inode=12135 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=80 name=(null) inode=12132 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=81 name=(null) inode=12136 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=82 name=(null) inode=12132 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=83 name=(null) inode=12137 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=84 name=(null) inode=12123 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=85 name=(null) inode=12138 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=86 name=(null) inode=12138 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=87 name=(null) inode=12139 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=88 name=(null) inode=12138 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=89 name=(null) inode=12140 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=90 name=(null) inode=12138 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=91 name=(null) inode=12141 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=92 name=(null) inode=12138 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=93 name=(null) inode=12142 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=94 name=(null) inode=12138 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=95 name=(null) inode=12143 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=96 name=(null) inode=12123 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=97 name=(null) inode=12144 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=98 name=(null) inode=12144 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=99 name=(null) inode=12145 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=100 name=(null) inode=12144 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=101 name=(null) inode=12146 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=102 name=(null) inode=12144 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=103 name=(null) inode=12147 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=104 name=(null) inode=12144 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=105 name=(null) inode=12148 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=106 name=(null) inode=12144 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=107 name=(null) inode=12149 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PATH item=109 name=(null) inode=12150 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:58:55.090000 audit: PROCTITLE proctitle="(udev-worker)" Dec 13 01:58:55.133988 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 13 01:58:55.134214 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Dec 13 01:58:55.134352 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 13 01:58:55.140987 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Dec 13 01:58:55.149986 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 01:58:55.191489 kernel: kvm: Nested Virtualization enabled Dec 13 01:58:55.191581 kernel: SVM: kvm: Nested Paging enabled Dec 13 01:58:55.191600 kernel: SVM: Virtual VMLOAD VMSAVE supported Dec 13 01:58:55.191618 kernel: SVM: Virtual GIF supported Dec 13 01:58:55.211014 kernel: EDAC MC: Ver: 3.0.0 Dec 13 01:58:55.239308 systemd[1]: Finished systemd-udev-settle.service. Dec 13 01:58:55.239000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:55.241137 kernel: kauditd_printk_skb: 225 callbacks suppressed Dec 13 01:58:55.241186 kernel: audit: type=1130 audit(1734055135.239:155): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:55.241345 systemd[1]: Starting lvm2-activation-early.service... Dec 13 01:58:55.249956 lvm[1058]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:58:55.321021 systemd[1]: Finished lvm2-activation-early.service. Dec 13 01:58:55.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:55.322384 systemd[1]: Reached target cryptsetup.target. Dec 13 01:58:55.325991 kernel: audit: type=1130 audit(1734055135.321:156): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:55.327840 systemd[1]: Starting lvm2-activation.service... 
Dec 13 01:58:55.331074 lvm[1059]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:58:55.359705 systemd[1]: Finished lvm2-activation.service. Dec 13 01:58:55.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:55.360945 systemd[1]: Reached target local-fs-pre.target. Dec 13 01:58:55.364991 kernel: audit: type=1130 audit(1734055135.359:157): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:55.365584 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 01:58:55.365618 systemd[1]: Reached target local-fs.target. Dec 13 01:58:55.366785 systemd[1]: Reached target machines.target. Dec 13 01:58:55.368791 systemd[1]: Starting ldconfig.service... Dec 13 01:58:55.370195 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 01:58:55.370247 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 01:58:55.371658 systemd[1]: Starting systemd-boot-update.service... Dec 13 01:58:55.373523 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 01:58:55.375609 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 01:58:55.378033 systemd[1]: Starting systemd-sysext.service... Dec 13 01:58:55.379221 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1061 (bootctl) Dec 13 01:58:55.380105 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 01:58:55.389466 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 01:58:55.426191 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 01:58:55.443000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:55.426401 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 01:58:55.444165 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 01:58:55.450998 kernel: audit: type=1130 audit(1734055135.443:158): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:55.452990 kernel: loop0: detected capacity change from 0 to 211296 Dec 13 01:58:55.662000 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 01:58:55.662857 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 01:58:55.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:55.663438 systemd[1]: Finished systemd-machine-id-commit.service. 
Dec 13 01:58:55.668005 kernel: audit: type=1130 audit(1734055135.662:159): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:55.672926 systemd-fsck[1071]: fsck.fat 4.2 (2021-01-31) Dec 13 01:58:55.672926 systemd-fsck[1071]: /dev/vda1: 789 files, 119291/258078 clusters Dec 13 01:58:55.674544 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 01:58:55.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:55.678823 systemd[1]: Mounting boot.mount... Dec 13 01:58:55.681021 kernel: audit: type=1130 audit(1734055135.676:160): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:55.683985 kernel: loop1: detected capacity change from 0 to 211296 Dec 13 01:58:55.686248 systemd[1]: Mounted boot.mount. Dec 13 01:58:55.688246 (sd-sysext)[1075]: Using extensions 'kubernetes'. Dec 13 01:58:55.688534 (sd-sysext)[1075]: Merged extensions into '/usr'. Dec 13 01:58:55.703055 systemd[1]: Finished systemd-boot-update.service. Dec 13 01:58:55.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:55.704488 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:58:55.705751 systemd[1]: Mounting usr-share-oem.mount... Dec 13 01:58:55.708986 kernel: audit: type=1130 audit(1734055135.704:161): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:55.710434 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 01:58:55.711822 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 01:58:55.713882 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 01:58:55.715743 systemd[1]: Starting modprobe@loop.service... Dec 13 01:58:55.716742 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 01:58:55.716845 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 01:58:55.716945 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:58:55.719698 systemd[1]: Mounted usr-share-oem.mount. Dec 13 01:58:55.721259 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:58:55.721411 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 01:58:55.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:58:55.723148 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:58:55.723288 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 01:58:55.722000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:55.729823 kernel: audit: type=1130 audit(1734055135.722:162): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:55.729887 kernel: audit: type=1131 audit(1734055135.722:163): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:55.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:55.731206 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:58:55.731327 systemd[1]: Finished modprobe@loop.service. Dec 13 01:58:55.730000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:55.735719 kernel: audit: type=1130 audit(1734055135.730:164): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:55.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:55.735000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:55.735837 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:58:55.735927 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 01:58:55.736800 systemd[1]: Finished systemd-sysext.service. Dec 13 01:58:55.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:55.738813 systemd[1]: Starting ensure-sysext.service... Dec 13 01:58:55.740526 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 01:58:55.747039 systemd[1]: Reloading. Dec 13 01:58:55.754109 ldconfig[1060]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 01:58:55.749488 systemd-tmpfiles[1082]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 01:58:55.750563 systemd-tmpfiles[1082]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Dec 13 01:58:55.751893 systemd-tmpfiles[1082]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 01:58:55.833578 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2024-12-13T01:58:55Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 01:58:55.833913 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2024-12-13T01:58:55Z" level=info msg="torcx already run" Dec 13 01:58:55.878719 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 01:58:55.878736 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 01:58:55.898370 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:58:55.948000 audit: BPF prog-id=30 op=LOAD Dec 13 01:58:55.948000 audit: BPF prog-id=21 op=UNLOAD Dec 13 01:58:55.948000 audit: BPF prog-id=31 op=LOAD Dec 13 01:58:55.948000 audit: BPF prog-id=32 op=LOAD Dec 13 01:58:55.948000 audit: BPF prog-id=22 op=UNLOAD Dec 13 01:58:55.948000 audit: BPF prog-id=23 op=UNLOAD Dec 13 01:58:55.949000 audit: BPF prog-id=33 op=LOAD Dec 13 01:58:55.949000 audit: BPF prog-id=26 op=UNLOAD Dec 13 01:58:55.950000 audit: BPF prog-id=34 op=LOAD Dec 13 01:58:55.950000 audit: BPF prog-id=35 op=LOAD Dec 13 01:58:55.950000 audit: BPF prog-id=24 op=UNLOAD Dec 13 01:58:55.950000 audit: BPF prog-id=25 op=UNLOAD Dec 13 01:58:55.951000 audit: BPF prog-id=36 op=LOAD Dec 13 01:58:55.951000 audit: BPF prog-id=27 op=UNLOAD Dec 13 01:58:55.951000 audit: BPF prog-id=37 op=LOAD Dec 13 01:58:55.951000 audit: BPF prog-id=38 op=LOAD Dec 13 01:58:55.951000 audit: BPF prog-id=28 op=UNLOAD Dec 13 01:58:55.951000 audit: BPF prog-id=29 op=UNLOAD Dec 13 01:58:55.955608 systemd[1]: Finished ldconfig.service. Dec 13 01:58:55.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:55.957417 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 01:58:55.957000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:55.960746 systemd[1]: Starting audit-rules.service... Dec 13 01:58:55.962506 systemd[1]: Starting clean-ca-certificates.service... Dec 13 01:58:55.964394 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 01:58:55.965000 audit: BPF prog-id=39 op=LOAD Dec 13 01:58:55.967567 systemd[1]: Starting systemd-resolved.service... Dec 13 01:58:55.968000 audit: BPF prog-id=40 op=LOAD Dec 13 01:58:55.970191 systemd[1]: Starting systemd-timesyncd.service... Dec 13 01:58:55.972392 systemd[1]: Starting systemd-update-utmp.service... Dec 13 01:58:55.973735 systemd[1]: Finished clean-ca-certificates.service. 
Dec 13 01:58:55.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:55.977236 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:58:55.976000 audit[1154]: SYSTEM_BOOT pid=1154 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 01:58:55.981281 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 01:58:55.983067 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 01:58:55.985953 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 01:58:55.988698 systemd[1]: Starting modprobe@loop.service... Dec 13 01:58:55.989510 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 01:58:55.989729 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 01:58:55.989939 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:58:55.991597 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:58:55.991778 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 01:58:55.993610 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:58:55.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:55.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:55.993765 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 01:58:55.995506 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:58:55.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:55.994000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:55.995644 systemd[1]: Finished modprobe@loop.service. Dec 13 01:58:55.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:55.995000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:58:55.997171 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:58:55.997377 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 01:58:55.998100 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 01:58:55.999814 systemd[1]: Finished systemd-update-utmp.service. Dec 13 01:58:55.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:55.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:56.003568 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 01:58:56.003000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 01:58:56.003000 audit[1166]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffc3b80c10 a2=420 a3=0 items=0 ppid=1143 pid=1166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:56.003000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 01:58:56.004779 augenrules[1166]: No rules Dec 13 01:58:56.004921 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 01:58:56.007339 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 01:58:56.009565 systemd[1]: Starting modprobe@loop.service... Dec 13 01:58:56.010585 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 01:58:56.010713 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 01:58:56.012132 systemd[1]: Starting systemd-update-done.service... Dec 13 01:58:56.013201 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:58:56.014334 systemd[1]: Finished audit-rules.service. Dec 13 01:58:56.015848 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:58:56.015997 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 01:58:56.017468 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:58:56.017590 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 01:58:56.019097 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:58:56.019216 systemd[1]: Finished modprobe@loop.service. Dec 13 01:58:56.020766 systemd[1]: Finished systemd-update-done.service. Dec 13 01:58:56.022540 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:58:56.022650 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 01:58:56.026518 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
Dec 13 01:58:56.027836 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 01:58:56.029963 systemd[1]: Starting modprobe@drm.service... Dec 13 01:58:56.032176 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 01:58:56.034632 systemd[1]: Starting modprobe@loop.service... Dec 13 01:58:56.035736 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 01:58:56.035890 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 01:58:56.039267 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 01:58:56.630155 systemd-timesyncd[1150]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 13 01:58:56.630403 systemd-timesyncd[1150]: Initial clock synchronization to Fri 2024-12-13 01:58:56.630091 UTC. Dec 13 01:58:56.631068 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:58:56.632391 systemd[1]: Started systemd-timesyncd.service. Dec 13 01:58:56.634250 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:58:56.634395 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 01:58:56.635988 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:58:56.636116 systemd[1]: Finished modprobe@drm.service. Dec 13 01:58:56.637518 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:58:56.637665 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 01:58:56.639339 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:58:56.639457 systemd[1]: Finished modprobe@loop.service. Dec 13 01:58:56.639962 systemd-resolved[1149]: Positive Trust Anchors: Dec 13 01:58:56.639974 systemd-resolved[1149]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:58:56.640002 systemd-resolved[1149]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 01:58:56.641374 systemd[1]: Reached target time-set.target. Dec 13 01:58:56.642664 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:58:56.642707 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 01:58:56.643024 systemd[1]: Finished ensure-sysext.service. Dec 13 01:58:56.647937 systemd-resolved[1149]: Defaulting to hostname 'linux'. Dec 13 01:58:56.649349 systemd[1]: Started systemd-resolved.service. Dec 13 01:58:56.650458 systemd[1]: Reached target network.target. Dec 13 01:58:56.651460 systemd[1]: Reached target nss-lookup.target. Dec 13 01:58:56.652610 systemd[1]: Reached target sysinit.target. Dec 13 01:58:56.653691 systemd[1]: Started motdgen.path. Dec 13 01:58:56.654629 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 01:58:56.656217 systemd[1]: Started logrotate.timer. Dec 13 01:58:56.657237 systemd[1]: Started mdadm.timer. 
Dec 13 01:58:56.658140 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 01:58:56.659241 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 01:58:56.659275 systemd[1]: Reached target paths.target. Dec 13 01:58:56.660296 systemd[1]: Reached target timers.target. Dec 13 01:58:56.661651 systemd[1]: Listening on dbus.socket. Dec 13 01:58:56.663822 systemd[1]: Starting docker.socket... Dec 13 01:58:56.667713 systemd[1]: Listening on sshd.socket. Dec 13 01:58:56.668805 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 01:58:56.669259 systemd[1]: Listening on docker.socket. Dec 13 01:58:56.670322 systemd[1]: Reached target sockets.target. Dec 13 01:58:56.671341 systemd[1]: Reached target basic.target. Dec 13 01:58:56.672420 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 01:58:56.672451 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 01:58:56.673596 systemd[1]: Starting containerd.service... Dec 13 01:58:56.675573 systemd[1]: Starting dbus.service... Dec 13 01:58:56.677759 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 01:58:56.680033 systemd[1]: Starting extend-filesystems.service... Dec 13 01:58:56.681213 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 01:58:56.683279 jq[1186]: false Dec 13 01:58:56.682375 systemd[1]: Starting motdgen.service... Dec 13 01:58:56.684789 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 01:58:56.687787 systemd[1]: Starting sshd-keygen.service... Dec 13 01:58:56.694107 systemd[1]: Starting systemd-logind.service... Dec 13 01:58:56.695676 extend-filesystems[1187]: Found loop1 Dec 13 01:58:56.695676 extend-filesystems[1187]: Found sr0 Dec 13 01:58:56.695676 extend-filesystems[1187]: Found vda Dec 13 01:58:56.695676 extend-filesystems[1187]: Found vda1 Dec 13 01:58:56.695155 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 01:58:56.700069 extend-filesystems[1187]: Found vda2 Dec 13 01:58:56.700069 extend-filesystems[1187]: Found vda3 Dec 13 01:58:56.700069 extend-filesystems[1187]: Found usr Dec 13 01:58:56.700069 extend-filesystems[1187]: Found vda4 Dec 13 01:58:56.700069 extend-filesystems[1187]: Found vda6 Dec 13 01:58:56.700069 extend-filesystems[1187]: Found vda7 Dec 13 01:58:56.700069 extend-filesystems[1187]: Found vda9 Dec 13 01:58:56.700069 extend-filesystems[1187]: Checking size of /dev/vda9 Dec 13 01:58:56.695229 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 01:58:56.702122 dbus-daemon[1185]: [system] SELinux support is enabled Dec 13 01:58:56.695710 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 01:58:56.702059 systemd[1]: Starting update-engine.service... Dec 13 01:58:56.710464 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 01:58:56.715348 systemd[1]: Started dbus.service. 
Dec 13 01:58:56.722164 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 01:58:56.722405 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Dec 13 01:58:56.722835 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 01:58:56.723006 systemd[1]: Finished motdgen.service. Dec 13 01:58:56.723603 extend-filesystems[1187]: Resized partition /dev/vda9 Dec 13 01:58:56.725453 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 01:58:56.725622 systemd[1]: Finished ssh-key-proc-cmdline.service. Dec 13 01:58:56.727104 extend-filesystems[1210]: resize2fs 1.46.5 (30-Dec-2021) Dec 13 01:58:56.729349 jq[1207]: true Dec 13 01:58:56.731576 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Dec 13 01:58:56.738458 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 01:58:56.738498 systemd[1]: Reached target system-config.target. Dec 13 01:58:56.739785 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 01:58:56.739803 systemd[1]: Reached target user-config.target. Dec 13 01:58:56.740697 jq[1211]: true Dec 13 01:58:56.752500 update_engine[1205]: I1213 01:58:56.752296 1205 main.cc:92] Flatcar Update Engine starting Dec 13 01:58:56.774682 systemd[1]: Started update-engine.service. Dec 13 01:58:56.774846 update_engine[1205]: I1213 01:58:56.774745 1205 update_check_scheduler.cc:74] Next update check in 10m43s Dec 13 01:58:56.779528 systemd[1]: Started locksmithd.service. Dec 13 01:58:56.793813 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Dec 13 01:58:56.817944 systemd-logind[1201]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 01:58:56.817967 systemd-logind[1201]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 01:58:56.818257 extend-filesystems[1210]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 01:58:56.818257 extend-filesystems[1210]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 01:58:56.818257 extend-filesystems[1210]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Dec 13 01:58:56.826596 extend-filesystems[1187]: Resized filesystem in /dev/vda9 Dec 13 01:58:56.827656 bash[1232]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:58:56.818345 systemd-logind[1201]: New seat seat0. Dec 13 01:58:56.827774 env[1212]: time="2024-12-13T01:58:56.826711020Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 01:58:56.820956 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 01:58:56.821174 systemd[1]: Finished extend-filesystems.service. Dec 13 01:58:56.827714 systemd[1]: Finished update-ssh-keys-after-ignition.service. Dec 13 01:58:56.829608 systemd[1]: Started systemd-logind.service. Dec 13 01:58:56.848841 env[1212]: time="2024-12-13T01:58:56.848798242Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 01:58:56.849169 env[1212]: time="2024-12-13T01:58:56.849152346Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:58:56.850570 env[1212]: time="2024-12-13T01:58:56.850519560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:58:56.850616 env[1212]: time="2024-12-13T01:58:56.850575775Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:58:56.850848 env[1212]: time="2024-12-13T01:58:56.850802060Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:58:56.850848 env[1212]: time="2024-12-13T01:58:56.850823590Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 01:58:56.850848 env[1212]: time="2024-12-13T01:58:56.850837376Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 01:58:56.850848 env[1212]: time="2024-12-13T01:58:56.850848517Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 01:58:56.850979 env[1212]: time="2024-12-13T01:58:56.850953554Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:58:56.851244 env[1212]: time="2024-12-13T01:58:56.851224442Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:58:56.851453 env[1212]: time="2024-12-13T01:58:56.851361739Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:58:56.851453 env[1212]: time="2024-12-13T01:58:56.851381606Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 01:58:56.851453 env[1212]: time="2024-12-13T01:58:56.851442571Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 01:58:56.851523 env[1212]: time="2024-12-13T01:58:56.851455114Z" level=info msg="metadata content store policy set" policy=shared Dec 13 01:58:56.854341 locksmithd[1218]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 01:58:56.856866 env[1212]: time="2024-12-13T01:58:56.856836376Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 01:58:56.856911 env[1212]: time="2024-12-13T01:58:56.856868346Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 01:58:56.856911 env[1212]: time="2024-12-13T01:58:56.856883164Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 01:58:56.856969 env[1212]: time="2024-12-13T01:58:56.856920344Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 01:58:56.856969 env[1212]: time="2024-12-13T01:58:56.856948176Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Dec 13 01:58:56.857005 env[1212]: time="2024-12-13T01:58:56.856965719Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 01:58:56.857005 env[1212]: time="2024-12-13T01:58:56.856981007Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 01:58:56.857005 env[1212]: time="2024-12-13T01:58:56.856996126Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 01:58:56.857062 env[1212]: time="2024-12-13T01:58:56.857012787Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Dec 13 01:58:56.857062 env[1212]: time="2024-12-13T01:58:56.857027434Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 01:58:56.857062 env[1212]: time="2024-12-13T01:58:56.857041391Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 01:58:56.857062 env[1212]: time="2024-12-13T01:58:56.857054385Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 01:58:56.857176 env[1212]: time="2024-12-13T01:58:56.857150776Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 01:58:56.857264 env[1212]: time="2024-12-13T01:58:56.857241666Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 01:58:56.857570 env[1212]: time="2024-12-13T01:58:56.857534756Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 01:58:56.857614 env[1212]: time="2024-12-13T01:58:56.857582946Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 01:58:56.857614 env[1212]: time="2024-12-13T01:58:56.857598275Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 01:58:56.857684 env[1212]: time="2024-12-13T01:58:56.857667936Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 01:58:56.857708 env[1212]: time="2024-12-13T01:58:56.857683385Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 01:58:56.857708 env[1212]: time="2024-12-13T01:58:56.857697381Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 01:58:56.857755 env[1212]: time="2024-12-13T01:58:56.857721997Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 01:58:56.857755 env[1212]: time="2024-12-13T01:58:56.857735863Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 01:58:56.857755 env[1212]: time="2024-12-13T01:58:56.857749639Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 01:58:56.858002 env[1212]: time="2024-12-13T01:58:56.857763545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 01:58:56.858002 env[1212]: time="2024-12-13T01:58:56.857777361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Dec 13 01:58:56.858002 env[1212]: time="2024-12-13T01:58:56.857792840Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 01:58:56.858002 env[1212]: time="2024-12-13T01:58:56.857955004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 01:58:56.858002 env[1212]: time="2024-12-13T01:58:56.857974370Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 01:58:56.858002 env[1212]: time="2024-12-13T01:58:56.857988126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 01:58:56.858153 env[1212]: time="2024-12-13T01:58:56.858002273Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 01:58:56.858153 env[1212]: time="2024-12-13T01:58:56.858019545Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 01:58:56.858153 env[1212]: time="2024-12-13T01:58:56.858032369Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 01:58:56.858153 env[1212]: time="2024-12-13T01:58:56.858059310Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 01:58:56.858153 env[1212]: time="2024-12-13T01:58:56.858102601Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 01:58:56.858397 env[1212]: time="2024-12-13T01:58:56.858320549Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false 
EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 01:58:56.859314 env[1212]: time="2024-12-13T01:58:56.858396522Z" level=info msg="Connect containerd service" Dec 13 01:58:56.859314 env[1212]: time="2024-12-13T01:58:56.858443861Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 01:58:56.859314 env[1212]: time="2024-12-13T01:58:56.859134025Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:58:56.859391 env[1212]: time="2024-12-13T01:58:56.859323410Z" level=info msg="Start subscribing containerd event" Dec 13 01:58:56.859391 env[1212]: time="2024-12-13T01:58:56.859380377Z" level=info msg="Start recovering state" Dec 13 01:58:56.859569 env[1212]: time="2024-12-13T01:58:56.859434999Z" level=info msg="Start event monitor" Dec 13 01:58:56.859569 env[1212]: time="2024-12-13T01:58:56.859452733Z" level=info msg="Start snapshots syncer" Dec 13 01:58:56.859569 env[1212]: time="2024-12-13T01:58:56.859463293Z" level=info msg="Start cni network conf syncer for default" Dec 13 01:58:56.859569 env[1212]: time="2024-12-13T01:58:56.859478441Z" level=info msg="Start streaming server" Dec 13 01:58:56.859785 env[1212]: time="2024-12-13T01:58:56.859767974Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 01:58:56.859842 env[1212]: time="2024-12-13T01:58:56.859830702Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 01:58:56.860043 systemd[1]: Started containerd.service. Dec 13 01:58:56.863819 env[1212]: time="2024-12-13T01:58:56.863701661Z" level=info msg="containerd successfully booted in 0.048015s" Dec 13 01:58:57.017191 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:58:57.017253 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:58:57.620778 systemd-networkd[1032]: eth0: Gained IPv6LL Dec 13 01:58:57.622406 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 01:58:57.623745 systemd[1]: Reached target network-online.target. Dec 13 01:58:57.627168 systemd[1]: Starting kubelet.service... Dec 13 01:58:57.807652 sshd_keygen[1202]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 01:58:57.829119 systemd[1]: Finished sshd-keygen.service. Dec 13 01:58:57.831746 systemd[1]: Starting issuegen.service... Dec 13 01:58:57.838115 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 01:58:57.838329 systemd[1]: Finished issuegen.service. Dec 13 01:58:57.841307 systemd[1]: Starting systemd-user-sessions.service... Dec 13 01:58:57.849122 systemd[1]: Finished systemd-user-sessions.service. Dec 13 01:58:57.851767 systemd[1]: Started getty@tty1.service. Dec 13 01:58:57.853620 systemd[1]: Started serial-getty@ttyS0.service. Dec 13 01:58:57.854994 systemd[1]: Reached target getty.target. Dec 13 01:58:58.426669 systemd[1]: Started kubelet.service. Dec 13 01:58:58.427854 systemd[1]: Reached target multi-user.target. Dec 13 01:58:58.429688 systemd[1]: Starting systemd-update-utmp-runlevel.service... 
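The CRI plugin config dump above shows the runc runtime registered as io.containerd.runc.v2 with SystemdCgroup:true, meaning the shim delegates cgroup management to systemd. Expressed as a containerd 1.6 config.toml fragment it corresponds roughly to the following; the path is the upstream default and may differ in this image, so treat it as an illustration only:

    cat <<'EOF' >> /etc/containerd/config.toml
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true
    EOF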
Dec 13 01:58:58.437184 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 01:58:58.437415 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 01:58:58.438767 systemd[1]: Startup finished in 845ms (kernel) + 4.464s (initrd) + 7.478s (userspace) = 12.788s. Dec 13 01:58:58.956239 kubelet[1261]: E1213 01:58:58.956145 1261 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:58:58.958047 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:58:58.958176 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:58:58.958430 systemd[1]: kubelet.service: Consumed 1.223s CPU time. Dec 13 01:59:06.566129 systemd[1]: Created slice system-sshd.slice. Dec 13 01:59:06.567119 systemd[1]: Started sshd@0-10.0.0.55:22-10.0.0.1:50616.service. Dec 13 01:59:06.606203 sshd[1271]: Accepted publickey for core from 10.0.0.1 port 50616 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 01:59:06.607949 sshd[1271]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:59:06.615343 systemd[1]: Created slice user-500.slice. Dec 13 01:59:06.616275 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 01:59:06.617817 systemd-logind[1201]: New session 1 of user core. Dec 13 01:59:06.624123 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 01:59:06.625408 systemd[1]: Starting user@500.service... Dec 13 01:59:06.628097 (systemd)[1274]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:59:06.697398 systemd[1274]: Queued start job for default target default.target. Dec 13 01:59:06.697858 systemd[1274]: Reached target paths.target. Dec 13 01:59:06.697879 systemd[1274]: Reached target sockets.target. Dec 13 01:59:06.697891 systemd[1274]: Reached target timers.target. Dec 13 01:59:06.697902 systemd[1274]: Reached target basic.target. Dec 13 01:59:06.697934 systemd[1274]: Reached target default.target. Dec 13 01:59:06.697956 systemd[1274]: Startup finished in 65ms. Dec 13 01:59:06.698061 systemd[1]: Started user@500.service. Dec 13 01:59:06.699374 systemd[1]: Started session-1.scope. Dec 13 01:59:06.749734 systemd[1]: Started sshd@1-10.0.0.55:22-10.0.0.1:50626.service. Dec 13 01:59:06.786588 sshd[1283]: Accepted publickey for core from 10.0.0.1 port 50626 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 01:59:06.787737 sshd[1283]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:59:06.791269 systemd-logind[1201]: New session 2 of user core. Dec 13 01:59:06.792406 systemd[1]: Started session-2.scope. Dec 13 01:59:06.843901 sshd[1283]: pam_unix(sshd:session): session closed for user core Dec 13 01:59:06.846724 systemd[1]: sshd@1-10.0.0.55:22-10.0.0.1:50626.service: Deactivated successfully. Dec 13 01:59:06.847285 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 01:59:06.847804 systemd-logind[1201]: Session 2 logged out. Waiting for processes to exit. Dec 13 01:59:06.848882 systemd[1]: Started sshd@2-10.0.0.55:22-10.0.0.1:50636.service. Dec 13 01:59:06.849622 systemd-logind[1201]: Removed session 2. 
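The kubelet failure above happens because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-managed node that file is normally written by kubeadm init or kubeadm join. A minimal, hypothetical file of the KubeletConfiguration kind it is looking for, with values matching the systemd cgroup driver and static pod path reported later in this log:

    cat <<'EOF' > /var/lib/kubelet/config.yaml
    # Hypothetical minimal KubeletConfiguration; a kubeadm-generated file carries many more fields.
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    staticPodPath: /etc/kubernetes/manifests
    EOF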
Dec 13 01:59:06.884234 sshd[1289]: Accepted publickey for core from 10.0.0.1 port 50636 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 01:59:06.885272 sshd[1289]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:59:06.888533 systemd-logind[1201]: New session 3 of user core. Dec 13 01:59:06.889457 systemd[1]: Started session-3.scope. Dec 13 01:59:06.939592 sshd[1289]: pam_unix(sshd:session): session closed for user core Dec 13 01:59:06.942478 systemd[1]: sshd@2-10.0.0.55:22-10.0.0.1:50636.service: Deactivated successfully. Dec 13 01:59:06.943116 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 01:59:06.943603 systemd-logind[1201]: Session 3 logged out. Waiting for processes to exit. Dec 13 01:59:06.944704 systemd[1]: Started sshd@3-10.0.0.55:22-10.0.0.1:50650.service. Dec 13 01:59:06.945411 systemd-logind[1201]: Removed session 3. Dec 13 01:59:06.978193 sshd[1295]: Accepted publickey for core from 10.0.0.1 port 50650 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 01:59:06.979139 sshd[1295]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:59:06.982133 systemd-logind[1201]: New session 4 of user core. Dec 13 01:59:06.982988 systemd[1]: Started session-4.scope. Dec 13 01:59:07.035493 sshd[1295]: pam_unix(sshd:session): session closed for user core Dec 13 01:59:07.037971 systemd[1]: sshd@3-10.0.0.55:22-10.0.0.1:50650.service: Deactivated successfully. Dec 13 01:59:07.038581 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 01:59:07.039092 systemd-logind[1201]: Session 4 logged out. Waiting for processes to exit. Dec 13 01:59:07.039977 systemd[1]: Started sshd@4-10.0.0.55:22-10.0.0.1:50656.service. Dec 13 01:59:07.040660 systemd-logind[1201]: Removed session 4. Dec 13 01:59:07.075690 sshd[1301]: Accepted publickey for core from 10.0.0.1 port 50656 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 01:59:07.076652 sshd[1301]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:59:07.079760 systemd-logind[1201]: New session 5 of user core. Dec 13 01:59:07.080424 systemd[1]: Started session-5.scope. Dec 13 01:59:07.134856 sudo[1305]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 01:59:07.135071 sudo[1305]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 01:59:07.146564 systemd[1]: Starting coreos-metadata.service... Dec 13 01:59:07.152762 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 13 01:59:07.152879 systemd[1]: Finished coreos-metadata.service. Dec 13 01:59:07.568259 systemd[1]: Stopped kubelet.service. Dec 13 01:59:07.568399 systemd[1]: kubelet.service: Consumed 1.223s CPU time. Dec 13 01:59:07.570169 systemd[1]: Starting kubelet.service... Dec 13 01:59:07.583425 systemd[1]: Reloading. Dec 13 01:59:07.654669 /usr/lib/systemd/system-generators/torcx-generator[1372]: time="2024-12-13T01:59:07Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 01:59:07.654698 /usr/lib/systemd/system-generators/torcx-generator[1372]: time="2024-12-13T01:59:07Z" level=info msg="torcx already run" Dec 13 01:59:08.280615 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. 
Support for CPUShares= will be removed soon. Dec 13 01:59:08.280634 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 01:59:08.297369 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:59:08.367649 systemd[1]: Started kubelet.service. Dec 13 01:59:08.370116 systemd[1]: Stopping kubelet.service... Dec 13 01:59:08.370388 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:59:08.370519 systemd[1]: Stopped kubelet.service. Dec 13 01:59:08.371725 systemd[1]: Starting kubelet.service... Dec 13 01:59:08.441708 systemd[1]: Started kubelet.service. Dec 13 01:59:08.499669 kubelet[1420]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:59:08.499669 kubelet[1420]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:59:08.499669 kubelet[1420]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:59:08.500168 kubelet[1420]: I1213 01:59:08.499770 1420 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:59:08.714449 kubelet[1420]: I1213 01:59:08.714266 1420 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 01:59:08.714449 kubelet[1420]: I1213 01:59:08.714320 1420 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:59:08.714723 kubelet[1420]: I1213 01:59:08.714696 1420 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 01:59:08.736499 kubelet[1420]: I1213 01:59:08.736452 1420 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:59:08.750335 kubelet[1420]: I1213 01:59:08.750276 1420 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 01:59:08.750535 kubelet[1420]: I1213 01:59:08.750491 1420 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:59:08.750699 kubelet[1420]: I1213 01:59:08.750672 1420 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:59:08.750699 kubelet[1420]: I1213 01:59:08.750697 1420 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:59:08.750699 kubelet[1420]: I1213 01:59:08.750705 1420 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:59:08.750926 kubelet[1420]: I1213 01:59:08.750806 1420 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:59:08.750926 kubelet[1420]: I1213 01:59:08.750879 1420 kubelet.go:396] "Attempting to sync node with API server" Dec 13 01:59:08.750926 kubelet[1420]: I1213 01:59:08.750893 1420 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:59:08.750926 kubelet[1420]: I1213 01:59:08.750920 1420 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:59:08.751068 kubelet[1420]: I1213 01:59:08.750937 1420 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:59:08.751068 kubelet[1420]: E1213 01:59:08.751013 1420 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:59:08.751264 kubelet[1420]: E1213 01:59:08.751236 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:59:08.755492 kubelet[1420]: I1213 01:59:08.755463 1420 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 01:59:08.760607 kubelet[1420]: I1213 01:59:08.760569 1420 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:59:08.760683 kubelet[1420]: W1213 01:59:08.760657 1420 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Dec 13 01:59:08.761218 kubelet[1420]: I1213 01:59:08.761191 1420 server.go:1256] "Started kubelet" Dec 13 01:59:08.761404 kubelet[1420]: I1213 01:59:08.761379 1420 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:59:08.761945 kubelet[1420]: I1213 01:59:08.761930 1420 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:59:08.762122 kubelet[1420]: I1213 01:59:08.762066 1420 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:59:08.764169 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Dec 13 01:59:08.764365 kubelet[1420]: I1213 01:59:08.764306 1420 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:59:08.765171 kubelet[1420]: I1213 01:59:08.764923 1420 server.go:461] "Adding debug handlers to kubelet server" Dec 13 01:59:08.769673 kubelet[1420]: E1213 01:59:08.769655 1420 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.55\" not found" Dec 13 01:59:08.769775 kubelet[1420]: I1213 01:59:08.769760 1420 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:59:08.769919 kubelet[1420]: I1213 01:59:08.769904 1420 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 01:59:08.770100 kubelet[1420]: I1213 01:59:08.770086 1420 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 01:59:08.771264 kubelet[1420]: I1213 01:59:08.771251 1420 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:59:08.771570 kubelet[1420]: I1213 01:59:08.771509 1420 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:59:08.774276 kubelet[1420]: I1213 01:59:08.774134 1420 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:59:08.774505 kubelet[1420]: E1213 01:59:08.774472 1420 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:59:08.774749 kubelet[1420]: E1213 01:59:08.774732 1420 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.55\" not found" node="10.0.0.55" Dec 13 01:59:08.791803 kubelet[1420]: I1213 01:59:08.791758 1420 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:59:08.791803 kubelet[1420]: I1213 01:59:08.791795 1420 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:59:08.791803 kubelet[1420]: I1213 01:59:08.791814 1420 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:59:08.871219 kubelet[1420]: I1213 01:59:08.871179 1420 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.55" Dec 13 01:59:08.980911 kubelet[1420]: I1213 01:59:08.980851 1420 kubelet_node_status.go:76] "Successfully registered node" node="10.0.0.55" Dec 13 01:59:09.037977 kubelet[1420]: E1213 01:59:09.037912 1420 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.55\" not found" Dec 13 01:59:09.072969 kubelet[1420]: I1213 01:59:09.072913 1420 policy_none.go:49] "None policy: Start" Dec 13 01:59:09.073799 kubelet[1420]: I1213 01:59:09.073782 1420 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:59:09.073878 kubelet[1420]: I1213 01:59:09.073804 1420 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:59:09.082253 systemd[1]: Created slice kubepods.slice. Dec 13 01:59:09.087379 systemd[1]: Created slice kubepods-burstable.slice. Dec 13 01:59:09.092941 systemd[1]: Created slice kubepods-besteffort.slice. Dec 13 01:59:09.100517 kubelet[1420]: I1213 01:59:09.100456 1420 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:59:09.100832 kubelet[1420]: I1213 01:59:09.100808 1420 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:59:09.126029 kubelet[1420]: I1213 01:59:09.125984 1420 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:59:09.127147 kubelet[1420]: I1213 01:59:09.127117 1420 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 01:59:09.127210 kubelet[1420]: I1213 01:59:09.127173 1420 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:59:09.127234 kubelet[1420]: I1213 01:59:09.127212 1420 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 01:59:09.127285 kubelet[1420]: E1213 01:59:09.127269 1420 kubelet.go:2353] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Dec 13 01:59:09.139702 kubelet[1420]: I1213 01:59:09.139661 1420 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Dec 13 01:59:09.140036 env[1212]: time="2024-12-13T01:59:09.139976959Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
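containerd reports above that no network config has been dropped yet and that it will wait for other components; the kubelet pushes the 192.168.1.0/24 PodCIDR just below, and Cilium, scheduled shortly after, will install its own cilium-cni configuration. For illustration only, a generic bridge-plugin conflist of the kind containerd expects in /etc/cni/net.d (hypothetical file name and contents, not what Cilium writes):

    mkdir -p /etc/cni/net.d
    cat <<'EOF' > /etc/cni/net.d/10-bridge.conflist
    {
      "cniVersion": "0.3.1",
      "name": "podnet",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "192.168.1.0/24" }
        },
        { "type": "loopback" }
      ]
    }
    EOF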
Dec 13 01:59:09.140353 kubelet[1420]: I1213 01:59:09.140146 1420 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Dec 13 01:59:09.717288 kubelet[1420]: I1213 01:59:09.717237 1420 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 13 01:59:09.717768 kubelet[1420]: W1213 01:59:09.717455 1420 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.Service ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Dec 13 01:59:09.717768 kubelet[1420]: W1213 01:59:09.717475 1420 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.CSIDriver ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Dec 13 01:59:09.717768 kubelet[1420]: W1213 01:59:09.717501 1420 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.RuntimeClass ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Dec 13 01:59:09.751364 kubelet[1420]: I1213 01:59:09.751323 1420 apiserver.go:52] "Watching apiserver" Dec 13 01:59:09.751660 kubelet[1420]: E1213 01:59:09.751331 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:59:09.755304 kubelet[1420]: I1213 01:59:09.755246 1420 topology_manager.go:215] "Topology Admit Handler" podUID="c5c889e4-1374-4414-962b-b45d88f04d9b" podNamespace="kube-system" podName="cilium-v7cvd" Dec 13 01:59:09.755614 kubelet[1420]: I1213 01:59:09.755354 1420 topology_manager.go:215] "Topology Admit Handler" podUID="88ae0bb8-8235-47a7-9698-2444fa1fe48d" podNamespace="kube-system" podName="kube-proxy-ngrgk" Dec 13 01:59:09.760572 systemd[1]: Created slice kubepods-besteffort-pod88ae0bb8_8235_47a7_9698_2444fa1fe48d.slice. Dec 13 01:59:09.770634 systemd[1]: Created slice kubepods-burstable-podc5c889e4_1374_4414_962b_b45d88f04d9b.slice. 
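The kubepods-besteffort and kubepods-burstable per-pod slices created above are the systemd cgroups the kubelet uses for the two admitted pods, matching the cgroupDriver systemd setting in the container manager config earlier. They can be inspected like any other systemd unit; hypothetical commands:

    systemctl status kubepods.slice
    systemctl list-units --type=slice 'kubepods*'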
Dec 13 01:59:09.770977 kubelet[1420]: I1213 01:59:09.770680 1420 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 01:59:09.775522 kubelet[1420]: I1213 01:59:09.775494 1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/88ae0bb8-8235-47a7-9698-2444fa1fe48d-lib-modules\") pod \"kube-proxy-ngrgk\" (UID: \"88ae0bb8-8235-47a7-9698-2444fa1fe48d\") " pod="kube-system/kube-proxy-ngrgk" Dec 13 01:59:09.775667 kubelet[1420]: I1213 01:59:09.775530 1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c5c889e4-1374-4414-962b-b45d88f04d9b-bpf-maps\") pod \"cilium-v7cvd\" (UID: \"c5c889e4-1374-4414-962b-b45d88f04d9b\") " pod="kube-system/cilium-v7cvd" Dec 13 01:59:09.775667 kubelet[1420]: I1213 01:59:09.775579 1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c5c889e4-1374-4414-962b-b45d88f04d9b-cni-path\") pod \"cilium-v7cvd\" (UID: \"c5c889e4-1374-4414-962b-b45d88f04d9b\") " pod="kube-system/cilium-v7cvd" Dec 13 01:59:09.775667 kubelet[1420]: I1213 01:59:09.775608 1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c5c889e4-1374-4414-962b-b45d88f04d9b-lib-modules\") pod \"cilium-v7cvd\" (UID: \"c5c889e4-1374-4414-962b-b45d88f04d9b\") " pod="kube-system/cilium-v7cvd" Dec 13 01:59:09.775667 kubelet[1420]: I1213 01:59:09.775632 1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c5c889e4-1374-4414-962b-b45d88f04d9b-cilium-config-path\") pod \"cilium-v7cvd\" (UID: \"c5c889e4-1374-4414-962b-b45d88f04d9b\") " pod="kube-system/cilium-v7cvd" Dec 13 01:59:09.775667 kubelet[1420]: I1213 01:59:09.775660 1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/88ae0bb8-8235-47a7-9698-2444fa1fe48d-kube-proxy\") pod \"kube-proxy-ngrgk\" (UID: \"88ae0bb8-8235-47a7-9698-2444fa1fe48d\") " pod="kube-system/kube-proxy-ngrgk" Dec 13 01:59:09.775838 kubelet[1420]: I1213 01:59:09.775699 1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c5c889e4-1374-4414-962b-b45d88f04d9b-xtables-lock\") pod \"cilium-v7cvd\" (UID: \"c5c889e4-1374-4414-962b-b45d88f04d9b\") " pod="kube-system/cilium-v7cvd" Dec 13 01:59:09.775838 kubelet[1420]: I1213 01:59:09.775726 1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c5c889e4-1374-4414-962b-b45d88f04d9b-host-proc-sys-net\") pod \"cilium-v7cvd\" (UID: \"c5c889e4-1374-4414-962b-b45d88f04d9b\") " pod="kube-system/cilium-v7cvd" Dec 13 01:59:09.775838 kubelet[1420]: I1213 01:59:09.775749 1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c5c889e4-1374-4414-962b-b45d88f04d9b-hostproc\") pod \"cilium-v7cvd\" (UID: \"c5c889e4-1374-4414-962b-b45d88f04d9b\") " pod="kube-system/cilium-v7cvd" Dec 13 01:59:09.775838 kubelet[1420]: I1213 
01:59:09.775770 1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c5c889e4-1374-4414-962b-b45d88f04d9b-etc-cni-netd\") pod \"cilium-v7cvd\" (UID: \"c5c889e4-1374-4414-962b-b45d88f04d9b\") " pod="kube-system/cilium-v7cvd" Dec 13 01:59:09.775838 kubelet[1420]: I1213 01:59:09.775807 1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmcmm\" (UniqueName: \"kubernetes.io/projected/c5c889e4-1374-4414-962b-b45d88f04d9b-kube-api-access-kmcmm\") pod \"cilium-v7cvd\" (UID: \"c5c889e4-1374-4414-962b-b45d88f04d9b\") " pod="kube-system/cilium-v7cvd" Dec 13 01:59:09.775989 kubelet[1420]: I1213 01:59:09.775842 1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/88ae0bb8-8235-47a7-9698-2444fa1fe48d-xtables-lock\") pod \"kube-proxy-ngrgk\" (UID: \"88ae0bb8-8235-47a7-9698-2444fa1fe48d\") " pod="kube-system/kube-proxy-ngrgk" Dec 13 01:59:09.775989 kubelet[1420]: I1213 01:59:09.775866 1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9hl7\" (UniqueName: \"kubernetes.io/projected/88ae0bb8-8235-47a7-9698-2444fa1fe48d-kube-api-access-j9hl7\") pod \"kube-proxy-ngrgk\" (UID: \"88ae0bb8-8235-47a7-9698-2444fa1fe48d\") " pod="kube-system/kube-proxy-ngrgk" Dec 13 01:59:09.775989 kubelet[1420]: I1213 01:59:09.775890 1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c5c889e4-1374-4414-962b-b45d88f04d9b-cilium-run\") pod \"cilium-v7cvd\" (UID: \"c5c889e4-1374-4414-962b-b45d88f04d9b\") " pod="kube-system/cilium-v7cvd" Dec 13 01:59:09.775989 kubelet[1420]: I1213 01:59:09.775923 1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c5c889e4-1374-4414-962b-b45d88f04d9b-cilium-cgroup\") pod \"cilium-v7cvd\" (UID: \"c5c889e4-1374-4414-962b-b45d88f04d9b\") " pod="kube-system/cilium-v7cvd" Dec 13 01:59:09.775989 kubelet[1420]: I1213 01:59:09.775949 1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c5c889e4-1374-4414-962b-b45d88f04d9b-clustermesh-secrets\") pod \"cilium-v7cvd\" (UID: \"c5c889e4-1374-4414-962b-b45d88f04d9b\") " pod="kube-system/cilium-v7cvd" Dec 13 01:59:09.776136 kubelet[1420]: I1213 01:59:09.775977 1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c5c889e4-1374-4414-962b-b45d88f04d9b-host-proc-sys-kernel\") pod \"cilium-v7cvd\" (UID: \"c5c889e4-1374-4414-962b-b45d88f04d9b\") " pod="kube-system/cilium-v7cvd" Dec 13 01:59:09.776136 kubelet[1420]: I1213 01:59:09.776002 1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c5c889e4-1374-4414-962b-b45d88f04d9b-hubble-tls\") pod \"cilium-v7cvd\" (UID: \"c5c889e4-1374-4414-962b-b45d88f04d9b\") " pod="kube-system/cilium-v7cvd" Dec 13 01:59:09.967397 sudo[1305]: pam_unix(sudo:session): session closed for user root Dec 13 01:59:09.969140 sshd[1301]: pam_unix(sshd:session): session closed for user core Dec 13 
01:59:09.971647 systemd[1]: sshd@4-10.0.0.55:22-10.0.0.1:50656.service: Deactivated successfully. Dec 13 01:59:09.972449 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 01:59:09.973142 systemd-logind[1201]: Session 5 logged out. Waiting for processes to exit. Dec 13 01:59:09.973885 systemd-logind[1201]: Removed session 5. Dec 13 01:59:10.068515 kubelet[1420]: E1213 01:59:10.068476 1420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:59:10.069301 env[1212]: time="2024-12-13T01:59:10.069240941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ngrgk,Uid:88ae0bb8-8235-47a7-9698-2444fa1fe48d,Namespace:kube-system,Attempt:0,}" Dec 13 01:59:10.080003 kubelet[1420]: E1213 01:59:10.079948 1420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:59:10.080533 env[1212]: time="2024-12-13T01:59:10.080497541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-v7cvd,Uid:c5c889e4-1374-4414-962b-b45d88f04d9b,Namespace:kube-system,Attempt:0,}" Dec 13 01:59:10.752763 kubelet[1420]: E1213 01:59:10.752709 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:59:11.150356 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2417583251.mount: Deactivated successfully. Dec 13 01:59:11.508016 env[1212]: time="2024-12-13T01:59:11.507954973Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:59:11.510201 env[1212]: time="2024-12-13T01:59:11.510171700Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:59:11.511423 env[1212]: time="2024-12-13T01:59:11.511339310Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:59:11.516019 env[1212]: time="2024-12-13T01:59:11.515978220Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:59:11.518888 env[1212]: time="2024-12-13T01:59:11.518811573Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:59:11.521202 env[1212]: time="2024-12-13T01:59:11.521174164Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:59:11.523454 env[1212]: time="2024-12-13T01:59:11.523423833Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:59:11.525044 env[1212]: time="2024-12-13T01:59:11.525020918Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:59:11.549175 env[1212]: time="2024-12-13T01:59:11.549084656Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:59:11.549175 env[1212]: time="2024-12-13T01:59:11.549128608Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:59:11.549175 env[1212]: time="2024-12-13T01:59:11.549141573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:59:11.549369 env[1212]: time="2024-12-13T01:59:11.549212796Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:59:11.549369 env[1212]: time="2024-12-13T01:59:11.549262359Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:59:11.549369 env[1212]: time="2024-12-13T01:59:11.549298347Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:59:11.549369 env[1212]: time="2024-12-13T01:59:11.549292326Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/faef6b0dbb32e774ac460803a3b5de16031a6daa53fa118c1aeed69b5f8710c3 pid=1475 runtime=io.containerd.runc.v2 Dec 13 01:59:11.549483 env[1212]: time="2024-12-13T01:59:11.549446785Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/558075d284a136156c07a95e1a0ea9ed0c977099e4b5611b43ee1b92e39e19c9 pid=1487 runtime=io.containerd.runc.v2 Dec 13 01:59:11.563180 systemd[1]: Started cri-containerd-faef6b0dbb32e774ac460803a3b5de16031a6daa53fa118c1aeed69b5f8710c3.scope. Dec 13 01:59:11.572749 systemd[1]: Started cri-containerd-558075d284a136156c07a95e1a0ea9ed0c977099e4b5611b43ee1b92e39e19c9.scope. 
Dec 13 01:59:11.593459 env[1212]: time="2024-12-13T01:59:11.593391547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-v7cvd,Uid:c5c889e4-1374-4414-962b-b45d88f04d9b,Namespace:kube-system,Attempt:0,} returns sandbox id \"faef6b0dbb32e774ac460803a3b5de16031a6daa53fa118c1aeed69b5f8710c3\"" Dec 13 01:59:11.594217 kubelet[1420]: E1213 01:59:11.594178 1420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:59:11.595445 env[1212]: time="2024-12-13T01:59:11.595416936Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 01:59:11.596715 env[1212]: time="2024-12-13T01:59:11.596666429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ngrgk,Uid:88ae0bb8-8235-47a7-9698-2444fa1fe48d,Namespace:kube-system,Attempt:0,} returns sandbox id \"558075d284a136156c07a95e1a0ea9ed0c977099e4b5611b43ee1b92e39e19c9\"" Dec 13 01:59:11.597284 kubelet[1420]: E1213 01:59:11.597265 1420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:59:11.753482 kubelet[1420]: E1213 01:59:11.753401 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:59:12.754028 kubelet[1420]: E1213 01:59:12.753953 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:59:13.754411 kubelet[1420]: E1213 01:59:13.754377 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:59:14.754647 kubelet[1420]: E1213 01:59:14.754597 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:59:15.755589 kubelet[1420]: E1213 01:59:15.755498 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:59:16.756584 kubelet[1420]: E1213 01:59:16.756474 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:59:17.757144 kubelet[1420]: E1213 01:59:17.757098 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:59:18.758005 kubelet[1420]: E1213 01:59:18.757927 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:59:19.758326 kubelet[1420]: E1213 01:59:19.758262 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:59:20.700168 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount470541851.mount: Deactivated successfully. 
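The recurring dns.go "Nameserver limits exceeded" errors above mean the node's resolv.conf lists more nameservers than the kubelet propagates into pods; only the first three survive, which is the applied line 1.1.1.1 1.0.0.1 8.8.8.8 shown in the message. A hypothetical resolv.conf that would trigger exactly this, with an invented fourth entry for illustration:

    cat <<'EOF' > /etc/resolv.conf.example
    nameserver 1.1.1.1
    nameserver 1.0.0.1
    nameserver 8.8.8.8
    nameserver 9.9.9.9
    EOF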
Dec 13 01:59:20.758494 kubelet[1420]: E1213 01:59:20.758434 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:59:21.759407 kubelet[1420]: E1213 01:59:21.759362 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:59:22.760358 kubelet[1420]: E1213 01:59:22.760288 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:59:23.760934 kubelet[1420]: E1213 01:59:23.760881 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:59:24.673127 env[1212]: time="2024-12-13T01:59:24.673042902Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:59:24.675404 env[1212]: time="2024-12-13T01:59:24.675370407Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:59:24.677130 env[1212]: time="2024-12-13T01:59:24.677050999Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:59:24.677760 env[1212]: time="2024-12-13T01:59:24.677719944Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 01:59:24.678629 env[1212]: time="2024-12-13T01:59:24.678589204Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 01:59:24.680082 env[1212]: time="2024-12-13T01:59:24.680037760Z" level=info msg="CreateContainer within sandbox \"faef6b0dbb32e774ac460803a3b5de16031a6daa53fa118c1aeed69b5f8710c3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 01:59:24.698768 env[1212]: time="2024-12-13T01:59:24.698709266Z" level=info msg="CreateContainer within sandbox \"faef6b0dbb32e774ac460803a3b5de16031a6daa53fa118c1aeed69b5f8710c3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"26d5a094602d557dc6df97bac96c0f108abd29ee7219cebda9a4dea1a56653f0\"" Dec 13 01:59:24.699431 env[1212]: time="2024-12-13T01:59:24.699387678Z" level=info msg="StartContainer for \"26d5a094602d557dc6df97bac96c0f108abd29ee7219cebda9a4dea1a56653f0\"" Dec 13 01:59:24.761836 kubelet[1420]: E1213 01:59:24.761791 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:59:24.771597 systemd[1]: Started cri-containerd-26d5a094602d557dc6df97bac96c0f108abd29ee7219cebda9a4dea1a56653f0.scope. Dec 13 01:59:24.828095 env[1212]: time="2024-12-13T01:59:24.828024819Z" level=info msg="StartContainer for \"26d5a094602d557dc6df97bac96c0f108abd29ee7219cebda9a4dea1a56653f0\" returns successfully" Dec 13 01:59:24.836028 systemd[1]: cri-containerd-26d5a094602d557dc6df97bac96c0f108abd29ee7219cebda9a4dea1a56653f0.scope: Deactivated successfully. 
Dec 13 01:59:25.183933 kubelet[1420]: E1213 01:59:25.183897 1420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:59:25.690443 systemd[1]: run-containerd-runc-k8s.io-26d5a094602d557dc6df97bac96c0f108abd29ee7219cebda9a4dea1a56653f0-runc.T3Cn3A.mount: Deactivated successfully. Dec 13 01:59:25.690514 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-26d5a094602d557dc6df97bac96c0f108abd29ee7219cebda9a4dea1a56653f0-rootfs.mount: Deactivated successfully. Dec 13 01:59:25.742726 env[1212]: time="2024-12-13T01:59:25.742665727Z" level=info msg="shim disconnected" id=26d5a094602d557dc6df97bac96c0f108abd29ee7219cebda9a4dea1a56653f0 Dec 13 01:59:25.742726 env[1212]: time="2024-12-13T01:59:25.742721061Z" level=warning msg="cleaning up after shim disconnected" id=26d5a094602d557dc6df97bac96c0f108abd29ee7219cebda9a4dea1a56653f0 namespace=k8s.io Dec 13 01:59:25.742726 env[1212]: time="2024-12-13T01:59:25.742732893Z" level=info msg="cleaning up dead shim" Dec 13 01:59:25.751536 env[1212]: time="2024-12-13T01:59:25.751472613Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:59:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1602 runtime=io.containerd.runc.v2\n" Dec 13 01:59:25.762905 kubelet[1420]: E1213 01:59:25.762845 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:59:26.188421 kubelet[1420]: E1213 01:59:26.188367 1420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:59:26.200747 env[1212]: time="2024-12-13T01:59:26.200681460Z" level=info msg="CreateContainer within sandbox \"faef6b0dbb32e774ac460803a3b5de16031a6daa53fa118c1aeed69b5f8710c3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 01:59:26.252310 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1149298543.mount: Deactivated successfully. Dec 13 01:59:26.266056 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2269891304.mount: Deactivated successfully. Dec 13 01:59:26.364278 env[1212]: time="2024-12-13T01:59:26.364200006Z" level=info msg="CreateContainer within sandbox \"faef6b0dbb32e774ac460803a3b5de16031a6daa53fa118c1aeed69b5f8710c3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c9ee831d153e49cb9c482cbf445a05cac7eba9b83e6a3ad8d1a8c52e7113f411\"" Dec 13 01:59:26.364801 env[1212]: time="2024-12-13T01:59:26.364771858Z" level=info msg="StartContainer for \"c9ee831d153e49cb9c482cbf445a05cac7eba9b83e6a3ad8d1a8c52e7113f411\"" Dec 13 01:59:26.393430 systemd[1]: Started cri-containerd-c9ee831d153e49cb9c482cbf445a05cac7eba9b83e6a3ad8d1a8c52e7113f411.scope. Dec 13 01:59:26.451355 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:59:26.451540 systemd[1]: Stopped systemd-sysctl.service. Dec 13 01:59:26.451733 systemd[1]: Stopping systemd-sysctl.service... Dec 13 01:59:26.453137 systemd[1]: Starting systemd-sysctl.service... Dec 13 01:59:26.456848 systemd[1]: cri-containerd-c9ee831d153e49cb9c482cbf445a05cac7eba9b83e6a3ad8d1a8c52e7113f411.scope: Deactivated successfully. Dec 13 01:59:26.460354 systemd[1]: Finished systemd-sysctl.service. 
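The "shim disconnected" and "cleaning up dead shim" messages above are expected for a short-lived init container such as mount-cgroup: its runc shim exits as soon as the task does. Shims and containers still present in containerd's k8s.io namespace could be listed with ctr; hypothetical inspection commands:

    ctr --namespace k8s.io tasks ls
    ctr --namespace k8s.io containers ls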
Dec 13 01:59:26.534645 env[1212]: time="2024-12-13T01:59:26.534578666Z" level=info msg="StartContainer for \"c9ee831d153e49cb9c482cbf445a05cac7eba9b83e6a3ad8d1a8c52e7113f411\" returns successfully" Dec 13 01:59:26.763086 kubelet[1420]: E1213 01:59:26.763055 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:59:26.985637 env[1212]: time="2024-12-13T01:59:26.985520363Z" level=info msg="shim disconnected" id=c9ee831d153e49cb9c482cbf445a05cac7eba9b83e6a3ad8d1a8c52e7113f411 Dec 13 01:59:26.985637 env[1212]: time="2024-12-13T01:59:26.985606745Z" level=warning msg="cleaning up after shim disconnected" id=c9ee831d153e49cb9c482cbf445a05cac7eba9b83e6a3ad8d1a8c52e7113f411 namespace=k8s.io Dec 13 01:59:26.985637 env[1212]: time="2024-12-13T01:59:26.985619749Z" level=info msg="cleaning up dead shim" Dec 13 01:59:26.998428 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1587888336.mount: Deactivated successfully. Dec 13 01:59:27.008466 env[1212]: time="2024-12-13T01:59:27.008406493Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:59:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1667 runtime=io.containerd.runc.v2\n" Dec 13 01:59:27.191361 kubelet[1420]: E1213 01:59:27.190882 1420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:59:27.192889 env[1212]: time="2024-12-13T01:59:27.192841725Z" level=info msg="CreateContainer within sandbox \"faef6b0dbb32e774ac460803a3b5de16031a6daa53fa118c1aeed69b5f8710c3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 01:59:27.216228 env[1212]: time="2024-12-13T01:59:27.216167529Z" level=info msg="CreateContainer within sandbox \"faef6b0dbb32e774ac460803a3b5de16031a6daa53fa118c1aeed69b5f8710c3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d5ea6a8c7facc77a0736d5566ad12d5d6efd37cacf0ec9bb473d5633c818d838\"" Dec 13 01:59:27.216846 env[1212]: time="2024-12-13T01:59:27.216801528Z" level=info msg="StartContainer for \"d5ea6a8c7facc77a0736d5566ad12d5d6efd37cacf0ec9bb473d5633c818d838\"" Dec 13 01:59:27.232938 systemd[1]: Started cri-containerd-d5ea6a8c7facc77a0736d5566ad12d5d6efd37cacf0ec9bb473d5633c818d838.scope. Dec 13 01:59:27.323594 systemd[1]: cri-containerd-d5ea6a8c7facc77a0736d5566ad12d5d6efd37cacf0ec9bb473d5633c818d838.scope: Deactivated successfully. Dec 13 01:59:27.325398 env[1212]: time="2024-12-13T01:59:27.325343032Z" level=info msg="StartContainer for \"d5ea6a8c7facc77a0736d5566ad12d5d6efd37cacf0ec9bb473d5633c818d838\" returns successfully" Dec 13 01:59:27.691038 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d5ea6a8c7facc77a0736d5566ad12d5d6efd37cacf0ec9bb473d5633c818d838-rootfs.mount: Deactivated successfully. 
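
Each Cilium init container above (mount-cgroup, apply-sysctl-overwrites, and now mount-bpf-fs) follows the same containerd pattern: CreateContainer returns an id, StartContainer returns successfully, the scope is deactivated once the process exits, and the disconnected shim is cleaned up. A small parsing sketch, assuming journal lines with the escaped msg="..." fields shown here, that groups those events per container id:

    import re
    from collections import defaultdict

    # Patterns are assumptions matching the escaped msg="..." fields in this journal.
    CREATED = re.compile(r'returns container id \\?"([0-9a-f]{64})')
    STARTED = re.compile(r'StartContainer for \\?"([0-9a-f]{64})\\?" returns successfully')
    EXITED  = re.compile(r'shim disconnected" id=([0-9a-f]{64})')

    def container_lifecycle(lines):
        """Group created/started/exited events by container id."""
        events = defaultdict(list)
        for line in lines:
            for name, pattern in (("created", CREATED), ("started", STARTED), ("exited", EXITED)):
                if (m := pattern.search(line)):
                    events[m.group(1)].append(name)
        return dict(events)
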
Dec 13 01:59:27.764058 kubelet[1420]: E1213 01:59:27.763987 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:59:28.061916 env[1212]: time="2024-12-13T01:59:28.061847641Z" level=info msg="shim disconnected" id=d5ea6a8c7facc77a0736d5566ad12d5d6efd37cacf0ec9bb473d5633c818d838 Dec 13 01:59:28.062402 env[1212]: time="2024-12-13T01:59:28.061944793Z" level=warning msg="cleaning up after shim disconnected" id=d5ea6a8c7facc77a0736d5566ad12d5d6efd37cacf0ec9bb473d5633c818d838 namespace=k8s.io Dec 13 01:59:28.062402 env[1212]: time="2024-12-13T01:59:28.061959120Z" level=info msg="cleaning up dead shim" Dec 13 01:59:28.139827 env[1212]: time="2024-12-13T01:59:28.139771145Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:59:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1724 runtime=io.containerd.runc.v2\n" Dec 13 01:59:28.280525 kubelet[1420]: E1213 01:59:28.280479 1420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:59:28.282356 env[1212]: time="2024-12-13T01:59:28.282300348Z" level=info msg="CreateContainer within sandbox \"faef6b0dbb32e774ac460803a3b5de16031a6daa53fa118c1aeed69b5f8710c3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 01:59:28.400316 env[1212]: time="2024-12-13T01:59:28.400169455Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:59:28.424731 env[1212]: time="2024-12-13T01:59:28.424670303Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:59:28.431326 env[1212]: time="2024-12-13T01:59:28.431232580Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:59:28.434287 env[1212]: time="2024-12-13T01:59:28.434228067Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:59:28.434932 env[1212]: time="2024-12-13T01:59:28.434886782Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Dec 13 01:59:28.436652 env[1212]: time="2024-12-13T01:59:28.436607059Z" level=info msg="CreateContainer within sandbox \"558075d284a136156c07a95e1a0ea9ed0c977099e4b5611b43ee1b92e39e19c9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 01:59:28.440370 env[1212]: time="2024-12-13T01:59:28.440334829Z" level=info msg="CreateContainer within sandbox \"faef6b0dbb32e774ac460803a3b5de16031a6daa53fa118c1aeed69b5f8710c3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0c5fdd6eb6c93b3fc4e6ab8ce592570e292b4f260149c5b73c686b38d7d2c523\"" Dec 13 01:59:28.440812 env[1212]: time="2024-12-13T01:59:28.440773723Z" level=info msg="StartContainer for \"0c5fdd6eb6c93b3fc4e6ab8ce592570e292b4f260149c5b73c686b38d7d2c523\"" Dec 13 01:59:28.457873 systemd[1]: Started 
cri-containerd-0c5fdd6eb6c93b3fc4e6ab8ce592570e292b4f260149c5b73c686b38d7d2c523.scope. Dec 13 01:59:28.462914 env[1212]: time="2024-12-13T01:59:28.462872305Z" level=info msg="CreateContainer within sandbox \"558075d284a136156c07a95e1a0ea9ed0c977099e4b5611b43ee1b92e39e19c9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"88c8e34d49ffefe6427d8f8f00322b23b53dade1953ef898f9c3d65e8fb8c488\"" Dec 13 01:59:28.463801 env[1212]: time="2024-12-13T01:59:28.463769418Z" level=info msg="StartContainer for \"88c8e34d49ffefe6427d8f8f00322b23b53dade1953ef898f9c3d65e8fb8c488\"" Dec 13 01:59:28.533859 systemd[1]: Started cri-containerd-88c8e34d49ffefe6427d8f8f00322b23b53dade1953ef898f9c3d65e8fb8c488.scope. Dec 13 01:59:28.545951 systemd[1]: cri-containerd-0c5fdd6eb6c93b3fc4e6ab8ce592570e292b4f260149c5b73c686b38d7d2c523.scope: Deactivated successfully. Dec 13 01:59:28.548168 env[1212]: time="2024-12-13T01:59:28.548111720Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5c889e4_1374_4414_962b_b45d88f04d9b.slice/cri-containerd-0c5fdd6eb6c93b3fc4e6ab8ce592570e292b4f260149c5b73c686b38d7d2c523.scope/memory.events\": no such file or directory" Dec 13 01:59:28.551528 env[1212]: time="2024-12-13T01:59:28.551487961Z" level=info msg="StartContainer for \"0c5fdd6eb6c93b3fc4e6ab8ce592570e292b4f260149c5b73c686b38d7d2c523\" returns successfully" Dec 13 01:59:28.691349 systemd[1]: run-containerd-runc-k8s.io-0c5fdd6eb6c93b3fc4e6ab8ce592570e292b4f260149c5b73c686b38d7d2c523-runc.7LO9Ll.mount: Deactivated successfully. Dec 13 01:59:28.691432 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c5fdd6eb6c93b3fc4e6ab8ce592570e292b4f260149c5b73c686b38d7d2c523-rootfs.mount: Deactivated successfully. 
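
The containerd warning above about .../cri-containerd-0c5fdd....scope/memory.events is a side effect of the clean-cilium-state init container exiting almost immediately: its cgroup is removed before the OOM-event watcher can attach an inotify watch, so the file is already gone. A tolerant reader for that situation might look like this sketch (an assumed simplification, not containerd's code):

    from pathlib import Path

    def read_memory_events(scope_dir: str) -> dict[str, int]:
        """Read cgroup v2 memory.events, tolerating a cgroup that has already been removed."""
        path = Path(scope_dir) / "memory.events"
        try:
            text = path.read_text()
        except FileNotFoundError:
            # Same condition as the warning above: short-lived containers can vanish
            # before the watcher attaches, which is expected and non-fatal.
            return {}
        return {key: int(value) for key, value in
                (line.split() for line in text.splitlines() if line.strip())}
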
Dec 13 01:59:28.751936 kubelet[1420]: E1213 01:59:28.751896 1420 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:59:28.765078 kubelet[1420]: E1213 01:59:28.765029 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:59:29.159613 env[1212]: time="2024-12-13T01:59:29.159525733Z" level=info msg="StartContainer for \"88c8e34d49ffefe6427d8f8f00322b23b53dade1953ef898f9c3d65e8fb8c488\" returns successfully" Dec 13 01:59:29.160225 env[1212]: time="2024-12-13T01:59:29.159965724Z" level=info msg="shim disconnected" id=0c5fdd6eb6c93b3fc4e6ab8ce592570e292b4f260149c5b73c686b38d7d2c523 Dec 13 01:59:29.160289 env[1212]: time="2024-12-13T01:59:29.160233020Z" level=warning msg="cleaning up after shim disconnected" id=0c5fdd6eb6c93b3fc4e6ab8ce592570e292b4f260149c5b73c686b38d7d2c523 namespace=k8s.io Dec 13 01:59:29.160289 env[1212]: time="2024-12-13T01:59:29.160246086Z" level=info msg="cleaning up dead shim" Dec 13 01:59:29.176177 env[1212]: time="2024-12-13T01:59:29.176089061Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:59:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1942 runtime=io.containerd.runc.v2\n" Dec 13 01:59:29.283389 kubelet[1420]: E1213 01:59:29.283342 1420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:59:29.286488 kubelet[1420]: E1213 01:59:29.286434 1420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:59:29.288803 env[1212]: time="2024-12-13T01:59:29.288759617Z" level=info msg="CreateContainer within sandbox \"faef6b0dbb32e774ac460803a3b5de16031a6daa53fa118c1aeed69b5f8710c3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 01:59:29.294077 kubelet[1420]: I1213 01:59:29.294007 1420 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-ngrgk" podStartSLOduration=4.456543102 podStartE2EDuration="21.293953286s" podCreationTimestamp="2024-12-13 01:59:08 +0000 UTC" firstStartedPulling="2024-12-13 01:59:11.597739371 +0000 UTC m=+3.152249113" lastFinishedPulling="2024-12-13 01:59:28.435149565 +0000 UTC m=+19.989659297" observedRunningTime="2024-12-13 01:59:29.293737438 +0000 UTC m=+20.848247170" watchObservedRunningTime="2024-12-13 01:59:29.293953286 +0000 UTC m=+20.848463018" Dec 13 01:59:29.307909 env[1212]: time="2024-12-13T01:59:29.307828197Z" level=info msg="CreateContainer within sandbox \"faef6b0dbb32e774ac460803a3b5de16031a6daa53fa118c1aeed69b5f8710c3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b3de7478060731a7c4f723e891700bdb493de96ce344ff651fad3b49e0778613\"" Dec 13 01:59:29.308418 env[1212]: time="2024-12-13T01:59:29.308367139Z" level=info msg="StartContainer for \"b3de7478060731a7c4f723e891700bdb493de96ce344ff651fad3b49e0778613\"" Dec 13 01:59:29.331210 systemd[1]: Started cri-containerd-b3de7478060731a7c4f723e891700bdb493de96ce344ff651fad3b49e0778613.scope. 
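
The pod_startup_latency_tracker entry for kube-system/kube-proxy-ngrgk above is internally consistent: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the time spent pulling the image (lastFinishedPulling minus firstStartedPulling). Recomputing it from the monotonic m=+ offsets in the entry:

    # Offsets in seconds since boot, copied from the kube-proxy-ngrgk entry above.
    first_started_pulling = 3.152249113
    last_finished_pulling = 19.989659297
    e2e_duration = 21.293953286          # observedRunningTime - podCreationTimestamp

    pull_time = last_finished_pulling - first_started_pulling   # 16.837410184 s
    slo_duration = e2e_duration - pull_time                     # 4.456543102 s
    print(f"pull {pull_time:.9f}s  slo {slo_duration:.9f}s")

The cilium-v7cvd entry a little further down follows the same arithmetic.
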
Dec 13 01:59:29.363669 env[1212]: time="2024-12-13T01:59:29.363617179Z" level=info msg="StartContainer for \"b3de7478060731a7c4f723e891700bdb493de96ce344ff651fad3b49e0778613\" returns successfully" Dec 13 01:59:29.458455 kubelet[1420]: I1213 01:59:29.458330 1420 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 01:59:29.679603 kernel: Initializing XFRM netlink socket Dec 13 01:59:29.691018 systemd[1]: run-containerd-runc-k8s.io-b3de7478060731a7c4f723e891700bdb493de96ce344ff651fad3b49e0778613-runc.wbJUVE.mount: Deactivated successfully. Dec 13 01:59:29.765641 kubelet[1420]: E1213 01:59:29.765602 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:59:30.296253 kubelet[1420]: E1213 01:59:30.296212 1420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:59:30.296909 kubelet[1420]: E1213 01:59:30.296855 1420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:59:30.317693 kubelet[1420]: I1213 01:59:30.317635 1420 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-v7cvd" podStartSLOduration=9.234362329 podStartE2EDuration="22.317584902s" podCreationTimestamp="2024-12-13 01:59:08 +0000 UTC" firstStartedPulling="2024-12-13 01:59:11.595072189 +0000 UTC m=+3.149581921" lastFinishedPulling="2024-12-13 01:59:24.678294772 +0000 UTC m=+16.232804494" observedRunningTime="2024-12-13 01:59:30.317326934 +0000 UTC m=+21.871836676" watchObservedRunningTime="2024-12-13 01:59:30.317584902 +0000 UTC m=+21.872094634" Dec 13 01:59:30.766640 kubelet[1420]: E1213 01:59:30.766596 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:59:31.223690 kubelet[1420]: I1213 01:59:31.223344 1420 topology_manager.go:215] "Topology Admit Handler" podUID="d83876a6-e9b1-408e-9ec0-69391a3f4ab8" podNamespace="default" podName="nginx-deployment-6d5f899847-7l6tx" Dec 13 01:59:31.229657 systemd[1]: Created slice kubepods-besteffort-podd83876a6_e9b1_408e_9ec0_69391a3f4ab8.slice. 
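
When the nginx-deployment-6d5f899847-7l6tx pod is admitted above, systemd creates the matching cgroup slice kubepods-besteffort-podd83876a6_e9b1_408e_9ec0_69391a3f4ab8.slice: with the systemd cgroup driver, a BestEffort pod's slice name is the pod UID with dashes replaced by underscores under the kubepods-besteffort prefix. A one-liner that reproduces the name seen in the log:

    def besteffort_pod_slice(pod_uid: str) -> str:
        """Slice name used for a BestEffort pod with the systemd cgroup driver."""
        return f"kubepods-besteffort-pod{pod_uid.replace('-', '_')}.slice"

    print(besteffort_pod_slice("d83876a6-e9b1-408e-9ec0-69391a3f4ab8"))
    # -> kubepods-besteffort-podd83876a6_e9b1_408e_9ec0_69391a3f4ab8.slice
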
Dec 13 01:59:31.247611 kubelet[1420]: I1213 01:59:31.247524 1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bg85p\" (UniqueName: \"kubernetes.io/projected/d83876a6-e9b1-408e-9ec0-69391a3f4ab8-kube-api-access-bg85p\") pod \"nginx-deployment-6d5f899847-7l6tx\" (UID: \"d83876a6-e9b1-408e-9ec0-69391a3f4ab8\") " pod="default/nginx-deployment-6d5f899847-7l6tx" Dec 13 01:59:31.300676 kubelet[1420]: E1213 01:59:31.299456 1420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:59:31.481917 systemd-networkd[1032]: cilium_host: Link UP Dec 13 01:59:31.482075 systemd-networkd[1032]: cilium_net: Link UP Dec 13 01:59:31.482079 systemd-networkd[1032]: cilium_net: Gained carrier Dec 13 01:59:31.482259 systemd-networkd[1032]: cilium_host: Gained carrier Dec 13 01:59:31.494752 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Dec 13 01:59:31.494966 systemd-networkd[1032]: cilium_host: Gained IPv6LL Dec 13 01:59:31.587177 env[1212]: time="2024-12-13T01:59:31.586600548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-7l6tx,Uid:d83876a6-e9b1-408e-9ec0-69391a3f4ab8,Namespace:default,Attempt:0,}" Dec 13 01:59:31.767509 kubelet[1420]: E1213 01:59:31.766858 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:59:31.835074 systemd-networkd[1032]: cilium_vxlan: Link UP Dec 13 01:59:31.835086 systemd-networkd[1032]: cilium_vxlan: Gained carrier Dec 13 01:59:32.275627 kernel: NET: Registered PF_ALG protocol family Dec 13 01:59:32.301951 kubelet[1420]: E1213 01:59:32.301914 1420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:59:32.500745 systemd-networkd[1032]: cilium_net: Gained IPv6LL Dec 13 01:59:32.767209 kubelet[1420]: E1213 01:59:32.767148 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:59:32.854789 systemd-networkd[1032]: lxc_health: Link UP Dec 13 01:59:32.864623 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 01:59:32.864909 systemd-networkd[1032]: lxc_health: Gained carrier Dec 13 01:59:33.012800 systemd-networkd[1032]: cilium_vxlan: Gained IPv6LL Dec 13 01:59:33.192202 systemd-networkd[1032]: lxccf50f8a78040: Link UP Dec 13 01:59:33.200628 kernel: eth0: renamed from tmp5885c Dec 13 01:59:33.255643 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxccf50f8a78040: link becomes ready Dec 13 01:59:33.256037 systemd-networkd[1032]: lxccf50f8a78040: Gained carrier Dec 13 01:59:33.767738 kubelet[1420]: E1213 01:59:33.767677 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:59:33.909788 systemd-networkd[1032]: lxc_health: Gained IPv6LL Dec 13 01:59:34.146892 kubelet[1420]: E1213 01:59:34.146764 1420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:59:34.305834 kubelet[1420]: E1213 01:59:34.305792 1420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:59:34.356826 systemd-networkd[1032]: lxccf50f8a78040: Gained IPv6LL Dec 13 01:59:34.768836 kubelet[1420]: E1213 01:59:34.768764 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:59:35.769284 kubelet[1420]: E1213 01:59:35.769213 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:59:36.770326 kubelet[1420]: E1213 01:59:36.770210 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:59:37.760975 env[1212]: time="2024-12-13T01:59:37.760845295Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:59:37.760975 env[1212]: time="2024-12-13T01:59:37.760892935Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:59:37.760975 env[1212]: time="2024-12-13T01:59:37.760907003Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:59:37.761455 env[1212]: time="2024-12-13T01:59:37.761039966Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5885cb2ea2fc455abecd7969940bd2e9e4917b3b60b2a08b0a4e8b930462d975 pid=2499 runtime=io.containerd.runc.v2 Dec 13 01:59:37.771140 kubelet[1420]: E1213 01:59:37.771096 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:59:37.782041 systemd[1]: Started cri-containerd-5885cb2ea2fc455abecd7969940bd2e9e4917b3b60b2a08b0a4e8b930462d975.scope. Dec 13 01:59:37.794862 systemd-resolved[1149]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:59:37.820848 env[1212]: time="2024-12-13T01:59:37.820762002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-7l6tx,Uid:d83876a6-e9b1-408e-9ec0-69391a3f4ab8,Namespace:default,Attempt:0,} returns sandbox id \"5885cb2ea2fc455abecd7969940bd2e9e4917b3b60b2a08b0a4e8b930462d975\"" Dec 13 01:59:37.822483 env[1212]: time="2024-12-13T01:59:37.822456247Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 01:59:38.771644 kubelet[1420]: E1213 01:59:38.771600 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:59:39.779455 kubelet[1420]: E1213 01:59:39.779320 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:59:40.779715 kubelet[1420]: E1213 01:59:40.779639 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:59:41.646297 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3007677528.mount: Deactivated successfully. Dec 13 01:59:41.654667 update_engine[1205]: I1213 01:59:41.654604 1205 update_attempter.cc:509] Updating boot flags... 
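
The "starting signal loop" entry above shows where the runc v2 shim keeps per-sandbox state: /run/containerd/io.containerd.runtime.v2.task/<namespace>/<sandbox id>, here namespace k8s.io and the sandbox id that RunPodSandbox then returns for nginx-deployment-6d5f899847-7l6tx. A trivial helper that rebuilds that path from the two fields:

    def shim_task_dir(namespace: str, sandbox_id: str) -> str:
        """Per-task state directory used by the containerd runc v2 shim, as in the entry above."""
        return f"/run/containerd/io.containerd.runtime.v2.task/{namespace}/{sandbox_id}"

    print(shim_task_dir(
        "k8s.io",
        "5885cb2ea2fc455abecd7969940bd2e9e4917b3b60b2a08b0a4e8b930462d975"))
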
Dec 13 01:59:41.780325 kubelet[1420]: E1213 01:59:41.780272 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:59:42.780706 kubelet[1420]: E1213 01:59:42.780643 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:59:43.624921 env[1212]: time="2024-12-13T01:59:43.624762634Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:59:43.629409 env[1212]: time="2024-12-13T01:59:43.629344283Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:59:43.632728 env[1212]: time="2024-12-13T01:59:43.632542446Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:59:43.636860 env[1212]: time="2024-12-13T01:59:43.636735798Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:59:43.637578 env[1212]: time="2024-12-13T01:59:43.637505730Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 01:59:43.644100 env[1212]: time="2024-12-13T01:59:43.643415491Z" level=info msg="CreateContainer within sandbox \"5885cb2ea2fc455abecd7969940bd2e9e4917b3b60b2a08b0a4e8b930462d975\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Dec 13 01:59:43.681768 env[1212]: time="2024-12-13T01:59:43.679317038Z" level=info msg="CreateContainer within sandbox \"5885cb2ea2fc455abecd7969940bd2e9e4917b3b60b2a08b0a4e8b930462d975\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"096458ba289e7163c81b70a53a648b893ab59d8bebe64421867c0dae5a0c04e9\"" Dec 13 01:59:43.681768 env[1212]: time="2024-12-13T01:59:43.680406706Z" level=info msg="StartContainer for \"096458ba289e7163c81b70a53a648b893ab59d8bebe64421867c0dae5a0c04e9\"" Dec 13 01:59:43.776670 systemd[1]: Started cri-containerd-096458ba289e7163c81b70a53a648b893ab59d8bebe64421867c0dae5a0c04e9.scope. 
Dec 13 01:59:43.781699 kubelet[1420]: E1213 01:59:43.781640 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:59:43.831539 env[1212]: time="2024-12-13T01:59:43.831473641Z" level=info msg="StartContainer for \"096458ba289e7163c81b70a53a648b893ab59d8bebe64421867c0dae5a0c04e9\" returns successfully" Dec 13 01:59:44.381078 kubelet[1420]: I1213 01:59:44.381032 1420 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-7l6tx" podStartSLOduration=7.561940314 podStartE2EDuration="13.380991757s" podCreationTimestamp="2024-12-13 01:59:31 +0000 UTC" firstStartedPulling="2024-12-13 01:59:37.822184659 +0000 UTC m=+29.376694391" lastFinishedPulling="2024-12-13 01:59:43.641236102 +0000 UTC m=+35.195745834" observedRunningTime="2024-12-13 01:59:44.380253656 +0000 UTC m=+35.934763388" watchObservedRunningTime="2024-12-13 01:59:44.380991757 +0000 UTC m=+35.935501489" Dec 13 01:59:44.781908 kubelet[1420]: E1213 01:59:44.781807 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:59:45.782217 kubelet[1420]: E1213 01:59:45.782146 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:59:46.782980 kubelet[1420]: E1213 01:59:46.782882 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:59:47.783945 kubelet[1420]: E1213 01:59:47.783874 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:59:48.751938 kubelet[1420]: E1213 01:59:48.751822 1420 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:59:48.784405 kubelet[1420]: E1213 01:59:48.784278 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:59:49.134609 kubelet[1420]: I1213 01:59:49.134442 1420 topology_manager.go:215] "Topology Admit Handler" podUID="a1d90a26-f6d7-4472-be6d-00a1b16398fe" podNamespace="default" podName="nfs-server-provisioner-0" Dec 13 01:59:49.139958 systemd[1]: Created slice kubepods-besteffort-poda1d90a26_f6d7_4472_be6d_00a1b16398fe.slice. 
Dec 13 01:59:49.328361 kubelet[1420]: I1213 01:59:49.328310 1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/a1d90a26-f6d7-4472-be6d-00a1b16398fe-data\") pod \"nfs-server-provisioner-0\" (UID: \"a1d90a26-f6d7-4472-be6d-00a1b16398fe\") " pod="default/nfs-server-provisioner-0" Dec 13 01:59:49.328361 kubelet[1420]: I1213 01:59:49.328364 1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srnsl\" (UniqueName: \"kubernetes.io/projected/a1d90a26-f6d7-4472-be6d-00a1b16398fe-kube-api-access-srnsl\") pod \"nfs-server-provisioner-0\" (UID: \"a1d90a26-f6d7-4472-be6d-00a1b16398fe\") " pod="default/nfs-server-provisioner-0" Dec 13 01:59:49.443173 env[1212]: time="2024-12-13T01:59:49.443060900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:a1d90a26-f6d7-4472-be6d-00a1b16398fe,Namespace:default,Attempt:0,}" Dec 13 01:59:49.467846 systemd-networkd[1032]: lxc31181019832c: Link UP Dec 13 01:59:49.474617 kernel: eth0: renamed from tmp96803 Dec 13 01:59:49.483292 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 01:59:49.483365 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc31181019832c: link becomes ready Dec 13 01:59:49.483467 systemd-networkd[1032]: lxc31181019832c: Gained carrier Dec 13 01:59:49.616425 env[1212]: time="2024-12-13T01:59:49.616347523Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:59:49.616425 env[1212]: time="2024-12-13T01:59:49.616388210Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:59:49.616425 env[1212]: time="2024-12-13T01:59:49.616400834Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:59:49.616665 env[1212]: time="2024-12-13T01:59:49.616578050Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9680322c9fc57e4508659c2fada1c5a7a44dfdfa8aa0753986a5e9db9ebc3c7e pid=2643 runtime=io.containerd.runc.v2 Dec 13 01:59:49.629116 systemd[1]: Started cri-containerd-9680322c9fc57e4508659c2fada1c5a7a44dfdfa8aa0753986a5e9db9ebc3c7e.scope. 
Dec 13 01:59:49.642809 systemd-resolved[1149]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:59:49.664992 env[1212]: time="2024-12-13T01:59:49.664934401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:a1d90a26-f6d7-4472-be6d-00a1b16398fe,Namespace:default,Attempt:0,} returns sandbox id \"9680322c9fc57e4508659c2fada1c5a7a44dfdfa8aa0753986a5e9db9ebc3c7e\"" Dec 13 01:59:49.666523 env[1212]: time="2024-12-13T01:59:49.666498379Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Dec 13 01:59:49.785281 kubelet[1420]: E1213 01:59:49.785223 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:59:50.785750 kubelet[1420]: E1213 01:59:50.785678 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:59:50.804726 systemd-networkd[1032]: lxc31181019832c: Gained IPv6LL Dec 13 01:59:51.786583 kubelet[1420]: E1213 01:59:51.786519 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:59:52.090305 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount157945672.mount: Deactivated successfully. Dec 13 01:59:52.786980 kubelet[1420]: E1213 01:59:52.786888 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:59:53.787195 kubelet[1420]: E1213 01:59:53.787109 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:59:54.787721 kubelet[1420]: E1213 01:59:54.787637 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:59:55.090413 env[1212]: time="2024-12-13T01:59:55.089875545Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:59:55.092712 env[1212]: time="2024-12-13T01:59:55.092637734Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:59:55.095055 env[1212]: time="2024-12-13T01:59:55.095010841Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:59:55.098232 env[1212]: time="2024-12-13T01:59:55.098173285Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:59:55.099186 env[1212]: time="2024-12-13T01:59:55.099148085Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Dec 13 01:59:55.101993 env[1212]: time="2024-12-13T01:59:55.101939369Z" level=info msg="CreateContainer within sandbox \"9680322c9fc57e4508659c2fada1c5a7a44dfdfa8aa0753986a5e9db9ebc3c7e\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Dec 13 
01:59:55.123838 env[1212]: time="2024-12-13T01:59:55.123762871Z" level=info msg="CreateContainer within sandbox \"9680322c9fc57e4508659c2fada1c5a7a44dfdfa8aa0753986a5e9db9ebc3c7e\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"4f39ca7992bb77f4a75045b7665b603dfc2aa3519531c5b7498dedac75efd898\"" Dec 13 01:59:55.124505 env[1212]: time="2024-12-13T01:59:55.124460437Z" level=info msg="StartContainer for \"4f39ca7992bb77f4a75045b7665b603dfc2aa3519531c5b7498dedac75efd898\"" Dec 13 01:59:55.152990 systemd[1]: Started cri-containerd-4f39ca7992bb77f4a75045b7665b603dfc2aa3519531c5b7498dedac75efd898.scope. Dec 13 01:59:55.183024 env[1212]: time="2024-12-13T01:59:55.182949885Z" level=info msg="StartContainer for \"4f39ca7992bb77f4a75045b7665b603dfc2aa3519531c5b7498dedac75efd898\" returns successfully" Dec 13 01:59:55.380621 kubelet[1420]: I1213 01:59:55.380450 1420 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=0.94714846 podStartE2EDuration="6.380399791s" podCreationTimestamp="2024-12-13 01:59:49 +0000 UTC" firstStartedPulling="2024-12-13 01:59:49.666178695 +0000 UTC m=+41.220688427" lastFinishedPulling="2024-12-13 01:59:55.099430036 +0000 UTC m=+46.653939758" observedRunningTime="2024-12-13 01:59:55.37995932 +0000 UTC m=+46.934469072" watchObservedRunningTime="2024-12-13 01:59:55.380399791 +0000 UTC m=+46.934909533" Dec 13 01:59:55.788865 kubelet[1420]: E1213 01:59:55.788810 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:59:56.789384 kubelet[1420]: E1213 01:59:56.789318 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:59:57.789793 kubelet[1420]: E1213 01:59:57.789701 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:59:58.790352 kubelet[1420]: E1213 01:59:58.790251 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:59:59.790595 kubelet[1420]: E1213 01:59:59.790477 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:00:00.791756 kubelet[1420]: E1213 02:00:00.791600 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:00:01.792582 kubelet[1420]: E1213 02:00:01.792428 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:00:02.793478 kubelet[1420]: E1213 02:00:02.793411 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:00:03.794220 kubelet[1420]: E1213 02:00:03.793855 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:00:04.616853 kubelet[1420]: I1213 02:00:04.616793 1420 topology_manager.go:215] "Topology Admit Handler" podUID="7d43cb41-26f3-440d-a81d-db328d213832" podNamespace="default" podName="test-pod-1" Dec 13 02:00:04.622939 systemd[1]: Created slice kubepods-besteffort-pod7d43cb41_26f3_440d_a81d_db328d213832.slice. 
Dec 13 02:00:04.723787 kubelet[1420]: I1213 02:00:04.723733 1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-de473ae2-2622-4d8e-87d4-89d5046c3807\" (UniqueName: \"kubernetes.io/nfs/7d43cb41-26f3-440d-a81d-db328d213832-pvc-de473ae2-2622-4d8e-87d4-89d5046c3807\") pod \"test-pod-1\" (UID: \"7d43cb41-26f3-440d-a81d-db328d213832\") " pod="default/test-pod-1" Dec 13 02:00:04.723787 kubelet[1420]: I1213 02:00:04.723781 1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hdjr\" (UniqueName: \"kubernetes.io/projected/7d43cb41-26f3-440d-a81d-db328d213832-kube-api-access-7hdjr\") pod \"test-pod-1\" (UID: \"7d43cb41-26f3-440d-a81d-db328d213832\") " pod="default/test-pod-1" Dec 13 02:00:04.795009 kubelet[1420]: E1213 02:00:04.794931 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:00:04.847588 kernel: FS-Cache: Loaded Dec 13 02:00:04.894188 kernel: RPC: Registered named UNIX socket transport module. Dec 13 02:00:04.894335 kernel: RPC: Registered udp transport module. Dec 13 02:00:04.894358 kernel: RPC: Registered tcp transport module. Dec 13 02:00:04.896286 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Dec 13 02:00:04.969581 kernel: FS-Cache: Netfs 'nfs' registered for caching Dec 13 02:00:05.185022 kernel: NFS: Registering the id_resolver key type Dec 13 02:00:05.185218 kernel: Key type id_resolver registered Dec 13 02:00:05.185318 kernel: Key type id_legacy registered Dec 13 02:00:05.210802 nfsidmap[2762]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Dec 13 02:00:05.213677 nfsidmap[2765]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Dec 13 02:00:05.225530 env[1212]: time="2024-12-13T02:00:05.225482825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:7d43cb41-26f3-440d-a81d-db328d213832,Namespace:default,Attempt:0,}" Dec 13 02:00:05.261063 systemd-networkd[1032]: lxcdc3d7f0e05de: Link UP Dec 13 02:00:05.267967 kernel: eth0: renamed from tmpaeda3 Dec 13 02:00:05.277129 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 02:00:05.277176 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcdc3d7f0e05de: link becomes ready Dec 13 02:00:05.277308 systemd-networkd[1032]: lxcdc3d7f0e05de: Gained carrier Dec 13 02:00:05.446057 env[1212]: time="2024-12-13T02:00:05.445991704Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:00:05.446222 env[1212]: time="2024-12-13T02:00:05.446031959Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:00:05.446222 env[1212]: time="2024-12-13T02:00:05.446066714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:00:05.446318 env[1212]: time="2024-12-13T02:00:05.446290777Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/aeda39d1d4f830d85ecdf05384e7c96ffc4c3eedf65df47c6d2017b5d9c6fb71 pid=2800 runtime=io.containerd.runc.v2 Dec 13 02:00:05.456473 systemd[1]: Started cri-containerd-aeda39d1d4f830d85ecdf05384e7c96ffc4c3eedf65df47c6d2017b5d9c6fb71.scope. 
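
The nfsidmap messages above are NFSv4 id mapping at work: the owner string root@nfs-server-provisioner.default.svc.cluster.local is split into a name and a domain, and because the domain does not match the local idmapd domain ("localdomain") it cannot be mapped to a local account and falls back to the anonymous user. A heavily simplified sketch of that check (assumed behaviour, not the real libnfsidmap code):

    def map_nfs4_owner(owner: str, local_domain: str = "localdomain"):
        """Return the local user name, or None when the NFSv4 domain does not match."""
        name, _, domain = owner.partition("@")
        if domain != local_domain:
            return None     # nfsidmap logs the mismatch and the id maps to nobody
        return name

    print(map_nfs4_owner("root@nfs-server-provisioner.default.svc.cluster.local"))  # None
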
Dec 13 02:00:05.468311 systemd-resolved[1149]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 02:00:05.488818 env[1212]: time="2024-12-13T02:00:05.488769569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:7d43cb41-26f3-440d-a81d-db328d213832,Namespace:default,Attempt:0,} returns sandbox id \"aeda39d1d4f830d85ecdf05384e7c96ffc4c3eedf65df47c6d2017b5d9c6fb71\"" Dec 13 02:00:05.490706 env[1212]: time="2024-12-13T02:00:05.490668842Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 02:00:05.795809 kubelet[1420]: E1213 02:00:05.795750 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:00:06.612736 systemd-networkd[1032]: lxcdc3d7f0e05de: Gained IPv6LL Dec 13 02:00:06.796936 kubelet[1420]: E1213 02:00:06.796875 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:00:07.797990 kubelet[1420]: E1213 02:00:07.797911 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:00:08.751569 kubelet[1420]: E1213 02:00:08.751517 1420 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:00:08.798984 kubelet[1420]: E1213 02:00:08.798948 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:00:09.241271 env[1212]: time="2024-12-13T02:00:09.241225667Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:00:09.243295 env[1212]: time="2024-12-13T02:00:09.243246776Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:00:09.245026 env[1212]: time="2024-12-13T02:00:09.244988140Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:00:09.246847 env[1212]: time="2024-12-13T02:00:09.246817819Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:00:09.247532 env[1212]: time="2024-12-13T02:00:09.247497237Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 02:00:09.249005 env[1212]: time="2024-12-13T02:00:09.248981726Z" level=info msg="CreateContainer within sandbox \"aeda39d1d4f830d85ecdf05384e7c96ffc4c3eedf65df47c6d2017b5d9c6fb71\" for container &ContainerMetadata{Name:test,Attempt:0,}" Dec 13 02:00:09.262072 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3458810061.mount: Deactivated successfully. 
Dec 13 02:00:09.267285 env[1212]: time="2024-12-13T02:00:09.267230529Z" level=info msg="CreateContainer within sandbox \"aeda39d1d4f830d85ecdf05384e7c96ffc4c3eedf65df47c6d2017b5d9c6fb71\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"b2b47b354f9ee2350206cf98c051bad0021d78aad5da8905364273359e86a1f6\"" Dec 13 02:00:09.267763 env[1212]: time="2024-12-13T02:00:09.267721652Z" level=info msg="StartContainer for \"b2b47b354f9ee2350206cf98c051bad0021d78aad5da8905364273359e86a1f6\"" Dec 13 02:00:09.284684 systemd[1]: Started cri-containerd-b2b47b354f9ee2350206cf98c051bad0021d78aad5da8905364273359e86a1f6.scope. Dec 13 02:00:09.307885 env[1212]: time="2024-12-13T02:00:09.307830547Z" level=info msg="StartContainer for \"b2b47b354f9ee2350206cf98c051bad0021d78aad5da8905364273359e86a1f6\" returns successfully" Dec 13 02:00:09.412331 kubelet[1420]: I1213 02:00:09.412281 1420 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=16.654750888 podStartE2EDuration="20.412236099s" podCreationTimestamp="2024-12-13 01:59:49 +0000 UTC" firstStartedPulling="2024-12-13 02:00:05.490234945 +0000 UTC m=+57.044744667" lastFinishedPulling="2024-12-13 02:00:09.247720146 +0000 UTC m=+60.802229878" observedRunningTime="2024-12-13 02:00:09.411827581 +0000 UTC m=+60.966337313" watchObservedRunningTime="2024-12-13 02:00:09.412236099 +0000 UTC m=+60.966745831" Dec 13 02:00:09.799674 kubelet[1420]: E1213 02:00:09.799617 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:00:10.799866 kubelet[1420]: E1213 02:00:10.799783 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:00:11.800524 kubelet[1420]: E1213 02:00:11.800471 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:00:11.985228 systemd[1]: run-containerd-runc-k8s.io-b3de7478060731a7c4f723e891700bdb493de96ce344ff651fad3b49e0778613-runc.AW3Ooj.mount: Deactivated successfully. Dec 13 02:00:11.999597 env[1212]: time="2024-12-13T02:00:11.999517424Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 02:00:12.004900 env[1212]: time="2024-12-13T02:00:12.004852930Z" level=info msg="StopContainer for \"b3de7478060731a7c4f723e891700bdb493de96ce344ff651fad3b49e0778613\" with timeout 2 (s)" Dec 13 02:00:12.005172 env[1212]: time="2024-12-13T02:00:12.005122206Z" level=info msg="Stop container \"b3de7478060731a7c4f723e891700bdb493de96ce344ff651fad3b49e0778613\" with signal terminated" Dec 13 02:00:12.010337 systemd-networkd[1032]: lxc_health: Link DOWN Dec 13 02:00:12.010346 systemd-networkd[1032]: lxc_health: Lost carrier Dec 13 02:00:12.052988 systemd[1]: cri-containerd-b3de7478060731a7c4f723e891700bdb493de96ce344ff651fad3b49e0778613.scope: Deactivated successfully. Dec 13 02:00:12.053331 systemd[1]: cri-containerd-b3de7478060731a7c4f723e891700bdb493de96ce344ff651fad3b49e0778613.scope: Consumed 7.858s CPU time. Dec 13 02:00:12.068942 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b3de7478060731a7c4f723e891700bdb493de96ce344ff651fad3b49e0778613-rootfs.mount: Deactivated successfully. 
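
The teardown above begins when the Cilium CNI config is removed and containerd is asked to stop the cilium-agent container "with timeout 2 (s)" and "signal terminated": the runtime sends SIGTERM, waits out the grace period, and only escalates to SIGKILL if the process is still alive. A compact sketch of that stop sequence (a simplification under those assumptions, not containerd's implementation):

    import signal
    import subprocess

    def stop_with_timeout(proc: subprocess.Popen, timeout: float = 2.0) -> None:
        """Send SIGTERM, wait up to `timeout` seconds, then escalate to SIGKILL."""
        proc.send_signal(signal.SIGTERM)
        try:
            proc.wait(timeout=timeout)
        except subprocess.TimeoutExpired:
            proc.kill()          # grace period expired; force-kill as the runtime would
            proc.wait()

Here the agent exits within the grace period: its scope is deactivated and the accumulated 7.858s of CPU time is reported.
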
Dec 13 02:00:12.324652 env[1212]: time="2024-12-13T02:00:12.324461650Z" level=info msg="shim disconnected" id=b3de7478060731a7c4f723e891700bdb493de96ce344ff651fad3b49e0778613 Dec 13 02:00:12.324652 env[1212]: time="2024-12-13T02:00:12.324512726Z" level=warning msg="cleaning up after shim disconnected" id=b3de7478060731a7c4f723e891700bdb493de96ce344ff651fad3b49e0778613 namespace=k8s.io Dec 13 02:00:12.324652 env[1212]: time="2024-12-13T02:00:12.324521111Z" level=info msg="cleaning up dead shim" Dec 13 02:00:12.331724 env[1212]: time="2024-12-13T02:00:12.331649064Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:00:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2932 runtime=io.containerd.runc.v2\n" Dec 13 02:00:12.349767 env[1212]: time="2024-12-13T02:00:12.349684181Z" level=info msg="StopContainer for \"b3de7478060731a7c4f723e891700bdb493de96ce344ff651fad3b49e0778613\" returns successfully" Dec 13 02:00:12.350462 env[1212]: time="2024-12-13T02:00:12.350430584Z" level=info msg="StopPodSandbox for \"faef6b0dbb32e774ac460803a3b5de16031a6daa53fa118c1aeed69b5f8710c3\"" Dec 13 02:00:12.350573 env[1212]: time="2024-12-13T02:00:12.350506466Z" level=info msg="Container to stop \"26d5a094602d557dc6df97bac96c0f108abd29ee7219cebda9a4dea1a56653f0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:00:12.350573 env[1212]: time="2024-12-13T02:00:12.350539829Z" level=info msg="Container to stop \"b3de7478060731a7c4f723e891700bdb493de96ce344ff651fad3b49e0778613\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:00:12.350573 env[1212]: time="2024-12-13T02:00:12.350564775Z" level=info msg="Container to stop \"c9ee831d153e49cb9c482cbf445a05cac7eba9b83e6a3ad8d1a8c52e7113f411\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:00:12.350766 env[1212]: time="2024-12-13T02:00:12.350574434Z" level=info msg="Container to stop \"d5ea6a8c7facc77a0736d5566ad12d5d6efd37cacf0ec9bb473d5633c818d838\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:00:12.350766 env[1212]: time="2024-12-13T02:00:12.350587749Z" level=info msg="Container to stop \"0c5fdd6eb6c93b3fc4e6ab8ce592570e292b4f260149c5b73c686b38d7d2c523\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:00:12.352510 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-faef6b0dbb32e774ac460803a3b5de16031a6daa53fa118c1aeed69b5f8710c3-shm.mount: Deactivated successfully. Dec 13 02:00:12.356257 systemd[1]: cri-containerd-faef6b0dbb32e774ac460803a3b5de16031a6daa53fa118c1aeed69b5f8710c3.scope: Deactivated successfully. 
Dec 13 02:00:12.408242 env[1212]: time="2024-12-13T02:00:12.408178650Z" level=info msg="shim disconnected" id=faef6b0dbb32e774ac460803a3b5de16031a6daa53fa118c1aeed69b5f8710c3 Dec 13 02:00:12.408242 env[1212]: time="2024-12-13T02:00:12.408238722Z" level=warning msg="cleaning up after shim disconnected" id=faef6b0dbb32e774ac460803a3b5de16031a6daa53fa118c1aeed69b5f8710c3 namespace=k8s.io Dec 13 02:00:12.408513 env[1212]: time="2024-12-13T02:00:12.408254121Z" level=info msg="cleaning up dead shim" Dec 13 02:00:12.414793 env[1212]: time="2024-12-13T02:00:12.414737714Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:00:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2962 runtime=io.containerd.runc.v2\n" Dec 13 02:00:12.415106 env[1212]: time="2024-12-13T02:00:12.415073745Z" level=info msg="TearDown network for sandbox \"faef6b0dbb32e774ac460803a3b5de16031a6daa53fa118c1aeed69b5f8710c3\" successfully" Dec 13 02:00:12.415106 env[1212]: time="2024-12-13T02:00:12.415100345Z" level=info msg="StopPodSandbox for \"faef6b0dbb32e774ac460803a3b5de16031a6daa53fa118c1aeed69b5f8710c3\" returns successfully" Dec 13 02:00:12.569075 kubelet[1420]: I1213 02:00:12.569031 1420 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c5c889e4-1374-4414-962b-b45d88f04d9b-xtables-lock\") pod \"c5c889e4-1374-4414-962b-b45d88f04d9b\" (UID: \"c5c889e4-1374-4414-962b-b45d88f04d9b\") " Dec 13 02:00:12.569075 kubelet[1420]: I1213 02:00:12.569070 1420 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c5c889e4-1374-4414-962b-b45d88f04d9b-etc-cni-netd\") pod \"c5c889e4-1374-4414-962b-b45d88f04d9b\" (UID: \"c5c889e4-1374-4414-962b-b45d88f04d9b\") " Dec 13 02:00:12.569075 kubelet[1420]: I1213 02:00:12.569099 1420 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmcmm\" (UniqueName: \"kubernetes.io/projected/c5c889e4-1374-4414-962b-b45d88f04d9b-kube-api-access-kmcmm\") pod \"c5c889e4-1374-4414-962b-b45d88f04d9b\" (UID: \"c5c889e4-1374-4414-962b-b45d88f04d9b\") " Dec 13 02:00:12.569334 kubelet[1420]: I1213 02:00:12.569119 1420 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c5c889e4-1374-4414-962b-b45d88f04d9b-cni-path\") pod \"c5c889e4-1374-4414-962b-b45d88f04d9b\" (UID: \"c5c889e4-1374-4414-962b-b45d88f04d9b\") " Dec 13 02:00:12.569334 kubelet[1420]: I1213 02:00:12.569142 1420 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c5c889e4-1374-4414-962b-b45d88f04d9b-host-proc-sys-net\") pod \"c5c889e4-1374-4414-962b-b45d88f04d9b\" (UID: \"c5c889e4-1374-4414-962b-b45d88f04d9b\") " Dec 13 02:00:12.569334 kubelet[1420]: I1213 02:00:12.569163 1420 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c5c889e4-1374-4414-962b-b45d88f04d9b-cilium-cgroup\") pod \"c5c889e4-1374-4414-962b-b45d88f04d9b\" (UID: \"c5c889e4-1374-4414-962b-b45d88f04d9b\") " Dec 13 02:00:12.569334 kubelet[1420]: I1213 02:00:12.569184 1420 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c5c889e4-1374-4414-962b-b45d88f04d9b-hubble-tls\") pod \"c5c889e4-1374-4414-962b-b45d88f04d9b\" (UID: 
\"c5c889e4-1374-4414-962b-b45d88f04d9b\") " Dec 13 02:00:12.569334 kubelet[1420]: I1213 02:00:12.569205 1420 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c5c889e4-1374-4414-962b-b45d88f04d9b-bpf-maps\") pod \"c5c889e4-1374-4414-962b-b45d88f04d9b\" (UID: \"c5c889e4-1374-4414-962b-b45d88f04d9b\") " Dec 13 02:00:12.569334 kubelet[1420]: I1213 02:00:12.569208 1420 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5c889e4-1374-4414-962b-b45d88f04d9b-cni-path" (OuterVolumeSpecName: "cni-path") pod "c5c889e4-1374-4414-962b-b45d88f04d9b" (UID: "c5c889e4-1374-4414-962b-b45d88f04d9b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:00:12.569467 kubelet[1420]: I1213 02:00:12.569225 1420 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c5c889e4-1374-4414-962b-b45d88f04d9b-lib-modules\") pod \"c5c889e4-1374-4414-962b-b45d88f04d9b\" (UID: \"c5c889e4-1374-4414-962b-b45d88f04d9b\") " Dec 13 02:00:12.569467 kubelet[1420]: I1213 02:00:12.569247 1420 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c5c889e4-1374-4414-962b-b45d88f04d9b-host-proc-sys-kernel\") pod \"c5c889e4-1374-4414-962b-b45d88f04d9b\" (UID: \"c5c889e4-1374-4414-962b-b45d88f04d9b\") " Dec 13 02:00:12.569467 kubelet[1420]: I1213 02:00:12.569271 1420 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c5c889e4-1374-4414-962b-b45d88f04d9b-cilium-config-path\") pod \"c5c889e4-1374-4414-962b-b45d88f04d9b\" (UID: \"c5c889e4-1374-4414-962b-b45d88f04d9b\") " Dec 13 02:00:12.569467 kubelet[1420]: I1213 02:00:12.569250 1420 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5c889e4-1374-4414-962b-b45d88f04d9b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c5c889e4-1374-4414-962b-b45d88f04d9b" (UID: "c5c889e4-1374-4414-962b-b45d88f04d9b"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:00:12.569467 kubelet[1420]: I1213 02:00:12.569294 1420 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c5c889e4-1374-4414-962b-b45d88f04d9b-clustermesh-secrets\") pod \"c5c889e4-1374-4414-962b-b45d88f04d9b\" (UID: \"c5c889e4-1374-4414-962b-b45d88f04d9b\") " Dec 13 02:00:12.569467 kubelet[1420]: I1213 02:00:12.569387 1420 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c5c889e4-1374-4414-962b-b45d88f04d9b-hostproc\") pod \"c5c889e4-1374-4414-962b-b45d88f04d9b\" (UID: \"c5c889e4-1374-4414-962b-b45d88f04d9b\") " Dec 13 02:00:12.569628 kubelet[1420]: I1213 02:00:12.569420 1420 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c5c889e4-1374-4414-962b-b45d88f04d9b-cilium-run\") pod \"c5c889e4-1374-4414-962b-b45d88f04d9b\" (UID: \"c5c889e4-1374-4414-962b-b45d88f04d9b\") " Dec 13 02:00:12.569628 kubelet[1420]: I1213 02:00:12.569472 1420 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c5c889e4-1374-4414-962b-b45d88f04d9b-xtables-lock\") on node \"10.0.0.55\" DevicePath \"\"" Dec 13 02:00:12.569628 kubelet[1420]: I1213 02:00:12.569491 1420 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c5c889e4-1374-4414-962b-b45d88f04d9b-cni-path\") on node \"10.0.0.55\" DevicePath \"\"" Dec 13 02:00:12.569628 kubelet[1420]: I1213 02:00:12.569516 1420 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5c889e4-1374-4414-962b-b45d88f04d9b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c5c889e4-1374-4414-962b-b45d88f04d9b" (UID: "c5c889e4-1374-4414-962b-b45d88f04d9b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:00:12.569628 kubelet[1420]: I1213 02:00:12.569542 1420 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5c889e4-1374-4414-962b-b45d88f04d9b-hostproc" (OuterVolumeSpecName: "hostproc") pod "c5c889e4-1374-4414-962b-b45d88f04d9b" (UID: "c5c889e4-1374-4414-962b-b45d88f04d9b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:00:12.569628 kubelet[1420]: I1213 02:00:12.569600 1420 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5c889e4-1374-4414-962b-b45d88f04d9b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c5c889e4-1374-4414-962b-b45d88f04d9b" (UID: "c5c889e4-1374-4414-962b-b45d88f04d9b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:00:12.569774 kubelet[1420]: I1213 02:00:12.569640 1420 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5c889e4-1374-4414-962b-b45d88f04d9b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c5c889e4-1374-4414-962b-b45d88f04d9b" (UID: "c5c889e4-1374-4414-962b-b45d88f04d9b"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:00:12.569774 kubelet[1420]: I1213 02:00:12.569671 1420 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5c889e4-1374-4414-962b-b45d88f04d9b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c5c889e4-1374-4414-962b-b45d88f04d9b" (UID: "c5c889e4-1374-4414-962b-b45d88f04d9b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:00:12.569774 kubelet[1420]: I1213 02:00:12.569713 1420 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5c889e4-1374-4414-962b-b45d88f04d9b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c5c889e4-1374-4414-962b-b45d88f04d9b" (UID: "c5c889e4-1374-4414-962b-b45d88f04d9b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:00:12.569774 kubelet[1420]: I1213 02:00:12.569742 1420 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5c889e4-1374-4414-962b-b45d88f04d9b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c5c889e4-1374-4414-962b-b45d88f04d9b" (UID: "c5c889e4-1374-4414-962b-b45d88f04d9b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:00:12.569868 kubelet[1420]: I1213 02:00:12.569777 1420 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5c889e4-1374-4414-962b-b45d88f04d9b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c5c889e4-1374-4414-962b-b45d88f04d9b" (UID: "c5c889e4-1374-4414-962b-b45d88f04d9b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:00:12.572066 kubelet[1420]: I1213 02:00:12.572044 1420 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5c889e4-1374-4414-962b-b45d88f04d9b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c5c889e4-1374-4414-962b-b45d88f04d9b" (UID: "c5c889e4-1374-4414-962b-b45d88f04d9b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:00:12.572288 kubelet[1420]: I1213 02:00:12.572249 1420 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5c889e4-1374-4414-962b-b45d88f04d9b-kube-api-access-kmcmm" (OuterVolumeSpecName: "kube-api-access-kmcmm") pod "c5c889e4-1374-4414-962b-b45d88f04d9b" (UID: "c5c889e4-1374-4414-962b-b45d88f04d9b"). InnerVolumeSpecName "kube-api-access-kmcmm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:00:12.572428 kubelet[1420]: I1213 02:00:12.572398 1420 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5c889e4-1374-4414-962b-b45d88f04d9b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c5c889e4-1374-4414-962b-b45d88f04d9b" (UID: "c5c889e4-1374-4414-962b-b45d88f04d9b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 02:00:12.572584 kubelet[1420]: I1213 02:00:12.572523 1420 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5c889e4-1374-4414-962b-b45d88f04d9b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c5c889e4-1374-4414-962b-b45d88f04d9b" (UID: "c5c889e4-1374-4414-962b-b45d88f04d9b"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 02:00:12.669913 kubelet[1420]: I1213 02:00:12.669774 1420 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c5c889e4-1374-4414-962b-b45d88f04d9b-hostproc\") on node \"10.0.0.55\" DevicePath \"\"" Dec 13 02:00:12.669913 kubelet[1420]: I1213 02:00:12.669825 1420 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c5c889e4-1374-4414-962b-b45d88f04d9b-cilium-run\") on node \"10.0.0.55\" DevicePath \"\"" Dec 13 02:00:12.669913 kubelet[1420]: I1213 02:00:12.669844 1420 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c5c889e4-1374-4414-962b-b45d88f04d9b-clustermesh-secrets\") on node \"10.0.0.55\" DevicePath \"\"" Dec 13 02:00:12.669913 kubelet[1420]: I1213 02:00:12.669857 1420 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c5c889e4-1374-4414-962b-b45d88f04d9b-etc-cni-netd\") on node \"10.0.0.55\" DevicePath \"\"" Dec 13 02:00:12.669913 kubelet[1420]: I1213 02:00:12.669874 1420 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-kmcmm\" (UniqueName: \"kubernetes.io/projected/c5c889e4-1374-4414-962b-b45d88f04d9b-kube-api-access-kmcmm\") on node \"10.0.0.55\" DevicePath \"\"" Dec 13 02:00:12.669913 kubelet[1420]: I1213 02:00:12.669885 1420 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c5c889e4-1374-4414-962b-b45d88f04d9b-cilium-cgroup\") on node \"10.0.0.55\" DevicePath \"\"" Dec 13 02:00:12.669913 kubelet[1420]: I1213 02:00:12.669919 1420 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c5c889e4-1374-4414-962b-b45d88f04d9b-hubble-tls\") on node \"10.0.0.55\" DevicePath \"\"" Dec 13 02:00:12.670243 kubelet[1420]: I1213 02:00:12.669934 1420 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c5c889e4-1374-4414-962b-b45d88f04d9b-bpf-maps\") on node \"10.0.0.55\" DevicePath \"\"" Dec 13 02:00:12.670243 kubelet[1420]: I1213 02:00:12.669950 1420 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c5c889e4-1374-4414-962b-b45d88f04d9b-lib-modules\") on node \"10.0.0.55\" DevicePath \"\"" Dec 13 02:00:12.670243 kubelet[1420]: I1213 02:00:12.669964 1420 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c5c889e4-1374-4414-962b-b45d88f04d9b-host-proc-sys-net\") on node \"10.0.0.55\" DevicePath \"\"" Dec 13 02:00:12.670243 kubelet[1420]: I1213 02:00:12.669976 1420 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c5c889e4-1374-4414-962b-b45d88f04d9b-cilium-config-path\") on node \"10.0.0.55\" DevicePath \"\"" Dec 13 02:00:12.670243 kubelet[1420]: I1213 02:00:12.669987 1420 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c5c889e4-1374-4414-962b-b45d88f04d9b-host-proc-sys-kernel\") on node \"10.0.0.55\" DevicePath \"\"" Dec 13 02:00:12.801584 kubelet[1420]: E1213 02:00:12.801476 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:00:12.981290 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-faef6b0dbb32e774ac460803a3b5de16031a6daa53fa118c1aeed69b5f8710c3-rootfs.mount: Deactivated successfully. Dec 13 02:00:12.981372 systemd[1]: var-lib-kubelet-pods-c5c889e4\x2d1374\x2d4414\x2d962b\x2db45d88f04d9b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkmcmm.mount: Deactivated successfully. Dec 13 02:00:12.981427 systemd[1]: var-lib-kubelet-pods-c5c889e4\x2d1374\x2d4414\x2d962b\x2db45d88f04d9b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 02:00:12.981475 systemd[1]: var-lib-kubelet-pods-c5c889e4\x2d1374\x2d4414\x2d962b\x2db45d88f04d9b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 02:00:13.133692 systemd[1]: Removed slice kubepods-burstable-podc5c889e4_1374_4414_962b_b45d88f04d9b.slice. Dec 13 02:00:13.133790 systemd[1]: kubepods-burstable-podc5c889e4_1374_4414_962b_b45d88f04d9b.slice: Consumed 8.097s CPU time. Dec 13 02:00:13.414739 kubelet[1420]: I1213 02:00:13.414243 1420 scope.go:117] "RemoveContainer" containerID="b3de7478060731a7c4f723e891700bdb493de96ce344ff651fad3b49e0778613" Dec 13 02:00:13.416353 env[1212]: time="2024-12-13T02:00:13.415943760Z" level=info msg="RemoveContainer for \"b3de7478060731a7c4f723e891700bdb493de96ce344ff651fad3b49e0778613\"" Dec 13 02:00:13.467919 env[1212]: time="2024-12-13T02:00:13.467838937Z" level=info msg="RemoveContainer for \"b3de7478060731a7c4f723e891700bdb493de96ce344ff651fad3b49e0778613\" returns successfully" Dec 13 02:00:13.468365 kubelet[1420]: I1213 02:00:13.468323 1420 scope.go:117] "RemoveContainer" containerID="0c5fdd6eb6c93b3fc4e6ab8ce592570e292b4f260149c5b73c686b38d7d2c523" Dec 13 02:00:13.472469 env[1212]: time="2024-12-13T02:00:13.472402701Z" level=info msg="RemoveContainer for \"0c5fdd6eb6c93b3fc4e6ab8ce592570e292b4f260149c5b73c686b38d7d2c523\"" Dec 13 02:00:13.480154 env[1212]: time="2024-12-13T02:00:13.480073191Z" level=info msg="RemoveContainer for \"0c5fdd6eb6c93b3fc4e6ab8ce592570e292b4f260149c5b73c686b38d7d2c523\" returns successfully" Dec 13 02:00:13.480460 kubelet[1420]: I1213 02:00:13.480424 1420 scope.go:117] "RemoveContainer" containerID="d5ea6a8c7facc77a0736d5566ad12d5d6efd37cacf0ec9bb473d5633c818d838" Dec 13 02:00:13.481934 env[1212]: time="2024-12-13T02:00:13.481883182Z" level=info msg="RemoveContainer for \"d5ea6a8c7facc77a0736d5566ad12d5d6efd37cacf0ec9bb473d5633c818d838\"" Dec 13 02:00:13.487147 env[1212]: time="2024-12-13T02:00:13.487080206Z" level=info msg="RemoveContainer for \"d5ea6a8c7facc77a0736d5566ad12d5d6efd37cacf0ec9bb473d5633c818d838\" returns successfully" Dec 13 02:00:13.487386 kubelet[1420]: I1213 02:00:13.487346 1420 scope.go:117] "RemoveContainer" containerID="c9ee831d153e49cb9c482cbf445a05cac7eba9b83e6a3ad8d1a8c52e7113f411" Dec 13 02:00:13.488833 env[1212]: time="2024-12-13T02:00:13.488769179Z" level=info msg="RemoveContainer for \"c9ee831d153e49cb9c482cbf445a05cac7eba9b83e6a3ad8d1a8c52e7113f411\"" Dec 13 02:00:13.492540 env[1212]: time="2024-12-13T02:00:13.492481944Z" level=info msg="RemoveContainer for \"c9ee831d153e49cb9c482cbf445a05cac7eba9b83e6a3ad8d1a8c52e7113f411\" returns successfully" Dec 13 02:00:13.492828 kubelet[1420]: I1213 02:00:13.492788 1420 scope.go:117] "RemoveContainer" containerID="26d5a094602d557dc6df97bac96c0f108abd29ee7219cebda9a4dea1a56653f0" Dec 13 02:00:13.494362 env[1212]: time="2024-12-13T02:00:13.494329566Z" level=info msg="RemoveContainer for \"26d5a094602d557dc6df97bac96c0f108abd29ee7219cebda9a4dea1a56653f0\"" Dec 
13 02:00:13.497408 env[1212]: time="2024-12-13T02:00:13.497356974Z" level=info msg="RemoveContainer for \"26d5a094602d557dc6df97bac96c0f108abd29ee7219cebda9a4dea1a56653f0\" returns successfully" Dec 13 02:00:13.802679 kubelet[1420]: E1213 02:00:13.802612 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:00:14.117187 kubelet[1420]: E1213 02:00:14.117016 1420 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 02:00:14.803682 kubelet[1420]: E1213 02:00:14.803604 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:00:15.131071 kubelet[1420]: I1213 02:00:15.130962 1420 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="c5c889e4-1374-4414-962b-b45d88f04d9b" path="/var/lib/kubelet/pods/c5c889e4-1374-4414-962b-b45d88f04d9b/volumes" Dec 13 02:00:15.473705 kubelet[1420]: I1213 02:00:15.473507 1420 topology_manager.go:215] "Topology Admit Handler" podUID="1170452f-d5d8-442b-aff6-c05938980a77" podNamespace="kube-system" podName="cilium-k7hgc" Dec 13 02:00:15.473705 kubelet[1420]: E1213 02:00:15.473596 1420 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c5c889e4-1374-4414-962b-b45d88f04d9b" containerName="apply-sysctl-overwrites" Dec 13 02:00:15.473705 kubelet[1420]: E1213 02:00:15.473605 1420 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c5c889e4-1374-4414-962b-b45d88f04d9b" containerName="mount-bpf-fs" Dec 13 02:00:15.473705 kubelet[1420]: E1213 02:00:15.473611 1420 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c5c889e4-1374-4414-962b-b45d88f04d9b" containerName="clean-cilium-state" Dec 13 02:00:15.473705 kubelet[1420]: E1213 02:00:15.473617 1420 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c5c889e4-1374-4414-962b-b45d88f04d9b" containerName="mount-cgroup" Dec 13 02:00:15.473705 kubelet[1420]: E1213 02:00:15.473622 1420 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c5c889e4-1374-4414-962b-b45d88f04d9b" containerName="cilium-agent" Dec 13 02:00:15.473705 kubelet[1420]: I1213 02:00:15.473651 1420 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5c889e4-1374-4414-962b-b45d88f04d9b" containerName="cilium-agent" Dec 13 02:00:15.477953 kubelet[1420]: I1213 02:00:15.477931 1420 topology_manager.go:215] "Topology Admit Handler" podUID="bf8a2888-a86d-409b-a635-dd84bd419fe6" podNamespace="kube-system" podName="cilium-operator-5cc964979-cjntg" Dec 13 02:00:15.478205 systemd[1]: Created slice kubepods-burstable-pod1170452f_d5d8_442b_aff6_c05938980a77.slice. Dec 13 02:00:15.488172 systemd[1]: Created slice kubepods-besteffort-podbf8a2888_a86d_409b_a635_dd84bd419fe6.slice. 
Dec 13 02:00:15.585270 kubelet[1420]: I1213 02:00:15.585196 1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1170452f-d5d8-442b-aff6-c05938980a77-host-proc-sys-net\") pod \"cilium-k7hgc\" (UID: \"1170452f-d5d8-442b-aff6-c05938980a77\") " pod="kube-system/cilium-k7hgc" Dec 13 02:00:15.585270 kubelet[1420]: I1213 02:00:15.585268 1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m8c8\" (UniqueName: \"kubernetes.io/projected/bf8a2888-a86d-409b-a635-dd84bd419fe6-kube-api-access-5m8c8\") pod \"cilium-operator-5cc964979-cjntg\" (UID: \"bf8a2888-a86d-409b-a635-dd84bd419fe6\") " pod="kube-system/cilium-operator-5cc964979-cjntg" Dec 13 02:00:15.585492 kubelet[1420]: I1213 02:00:15.585303 1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1170452f-d5d8-442b-aff6-c05938980a77-clustermesh-secrets\") pod \"cilium-k7hgc\" (UID: \"1170452f-d5d8-442b-aff6-c05938980a77\") " pod="kube-system/cilium-k7hgc" Dec 13 02:00:15.585492 kubelet[1420]: I1213 02:00:15.585357 1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1170452f-d5d8-442b-aff6-c05938980a77-bpf-maps\") pod \"cilium-k7hgc\" (UID: \"1170452f-d5d8-442b-aff6-c05938980a77\") " pod="kube-system/cilium-k7hgc" Dec 13 02:00:15.585492 kubelet[1420]: I1213 02:00:15.585410 1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1170452f-d5d8-442b-aff6-c05938980a77-hostproc\") pod \"cilium-k7hgc\" (UID: \"1170452f-d5d8-442b-aff6-c05938980a77\") " pod="kube-system/cilium-k7hgc" Dec 13 02:00:15.585672 kubelet[1420]: I1213 02:00:15.585492 1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1170452f-d5d8-442b-aff6-c05938980a77-cilium-cgroup\") pod \"cilium-k7hgc\" (UID: \"1170452f-d5d8-442b-aff6-c05938980a77\") " pod="kube-system/cilium-k7hgc" Dec 13 02:00:15.585672 kubelet[1420]: I1213 02:00:15.585530 1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1170452f-d5d8-442b-aff6-c05938980a77-cni-path\") pod \"cilium-k7hgc\" (UID: \"1170452f-d5d8-442b-aff6-c05938980a77\") " pod="kube-system/cilium-k7hgc" Dec 13 02:00:15.585672 kubelet[1420]: I1213 02:00:15.585574 1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1170452f-d5d8-442b-aff6-c05938980a77-cilium-ipsec-secrets\") pod \"cilium-k7hgc\" (UID: \"1170452f-d5d8-442b-aff6-c05938980a77\") " pod="kube-system/cilium-k7hgc" Dec 13 02:00:15.585672 kubelet[1420]: I1213 02:00:15.585657 1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1170452f-d5d8-442b-aff6-c05938980a77-cilium-run\") pod \"cilium-k7hgc\" (UID: \"1170452f-d5d8-442b-aff6-c05938980a77\") " pod="kube-system/cilium-k7hgc" Dec 13 02:00:15.585799 kubelet[1420]: I1213 02:00:15.585694 1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1170452f-d5d8-442b-aff6-c05938980a77-lib-modules\") pod \"cilium-k7hgc\" (UID: \"1170452f-d5d8-442b-aff6-c05938980a77\") " pod="kube-system/cilium-k7hgc" Dec 13 02:00:15.585799 kubelet[1420]: I1213 02:00:15.585726 1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1170452f-d5d8-442b-aff6-c05938980a77-xtables-lock\") pod \"cilium-k7hgc\" (UID: \"1170452f-d5d8-442b-aff6-c05938980a77\") " pod="kube-system/cilium-k7hgc" Dec 13 02:00:15.585799 kubelet[1420]: I1213 02:00:15.585748 1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bf8a2888-a86d-409b-a635-dd84bd419fe6-cilium-config-path\") pod \"cilium-operator-5cc964979-cjntg\" (UID: \"bf8a2888-a86d-409b-a635-dd84bd419fe6\") " pod="kube-system/cilium-operator-5cc964979-cjntg" Dec 13 02:00:15.585799 kubelet[1420]: I1213 02:00:15.585784 1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1170452f-d5d8-442b-aff6-c05938980a77-etc-cni-netd\") pod \"cilium-k7hgc\" (UID: \"1170452f-d5d8-442b-aff6-c05938980a77\") " pod="kube-system/cilium-k7hgc" Dec 13 02:00:15.585917 kubelet[1420]: I1213 02:00:15.585809 1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1170452f-d5d8-442b-aff6-c05938980a77-host-proc-sys-kernel\") pod \"cilium-k7hgc\" (UID: \"1170452f-d5d8-442b-aff6-c05938980a77\") " pod="kube-system/cilium-k7hgc" Dec 13 02:00:15.585917 kubelet[1420]: I1213 02:00:15.585829 1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1170452f-d5d8-442b-aff6-c05938980a77-hubble-tls\") pod \"cilium-k7hgc\" (UID: \"1170452f-d5d8-442b-aff6-c05938980a77\") " pod="kube-system/cilium-k7hgc" Dec 13 02:00:15.585917 kubelet[1420]: I1213 02:00:15.585852 1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1170452f-d5d8-442b-aff6-c05938980a77-cilium-config-path\") pod \"cilium-k7hgc\" (UID: \"1170452f-d5d8-442b-aff6-c05938980a77\") " pod="kube-system/cilium-k7hgc" Dec 13 02:00:15.585917 kubelet[1420]: I1213 02:00:15.585871 1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjfxw\" (UniqueName: \"kubernetes.io/projected/1170452f-d5d8-442b-aff6-c05938980a77-kube-api-access-rjfxw\") pod \"cilium-k7hgc\" (UID: \"1170452f-d5d8-442b-aff6-c05938980a77\") " pod="kube-system/cilium-k7hgc" Dec 13 02:00:15.606829 kubelet[1420]: E1213 02:00:15.606758 1420 pod_workers.go:1298] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-rjfxw lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-k7hgc" podUID="1170452f-d5d8-442b-aff6-c05938980a77" Dec 13 02:00:15.790205 kubelet[1420]: E1213 02:00:15.790142 1420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:00:15.790851 env[1212]: time="2024-12-13T02:00:15.790797572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-cjntg,Uid:bf8a2888-a86d-409b-a635-dd84bd419fe6,Namespace:kube-system,Attempt:0,}" Dec 13 02:00:15.803758 env[1212]: time="2024-12-13T02:00:15.803703182Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:00:15.803758 env[1212]: time="2024-12-13T02:00:15.803739010Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:00:15.803758 env[1212]: time="2024-12-13T02:00:15.803749098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:00:15.803915 env[1212]: time="2024-12-13T02:00:15.803866470Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5c87bfab76b955de9d6f5aa21e9bee98015cf2ac12b643b7edecd600231d7803 pid=2990 runtime=io.containerd.runc.v2 Dec 13 02:00:15.804495 kubelet[1420]: E1213 02:00:15.804472 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:00:15.813483 systemd[1]: Started cri-containerd-5c87bfab76b955de9d6f5aa21e9bee98015cf2ac12b643b7edecd600231d7803.scope. Dec 13 02:00:15.844566 env[1212]: time="2024-12-13T02:00:15.844508669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-cjntg,Uid:bf8a2888-a86d-409b-a635-dd84bd419fe6,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c87bfab76b955de9d6f5aa21e9bee98015cf2ac12b643b7edecd600231d7803\"" Dec 13 02:00:15.845303 kubelet[1420]: E1213 02:00:15.845280 1420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:00:15.846301 env[1212]: time="2024-12-13T02:00:15.846278443Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 02:00:16.595499 kubelet[1420]: I1213 02:00:16.595411 1420 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1170452f-d5d8-442b-aff6-c05938980a77-etc-cni-netd\") pod \"1170452f-d5d8-442b-aff6-c05938980a77\" (UID: \"1170452f-d5d8-442b-aff6-c05938980a77\") " Dec 13 02:00:16.595499 kubelet[1420]: I1213 02:00:16.595473 1420 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1170452f-d5d8-442b-aff6-c05938980a77-host-proc-sys-net\") pod \"1170452f-d5d8-442b-aff6-c05938980a77\" (UID: \"1170452f-d5d8-442b-aff6-c05938980a77\") " Dec 13 02:00:16.595499 kubelet[1420]: I1213 02:00:16.595514 1420 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjfxw\" (UniqueName: \"kubernetes.io/projected/1170452f-d5d8-442b-aff6-c05938980a77-kube-api-access-rjfxw\") pod \"1170452f-d5d8-442b-aff6-c05938980a77\" (UID: \"1170452f-d5d8-442b-aff6-c05938980a77\") " Dec 13 02:00:16.595866 kubelet[1420]: I1213 02:00:16.595544 1420 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume 
\"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1170452f-d5d8-442b-aff6-c05938980a77-cilium-ipsec-secrets\") pod \"1170452f-d5d8-442b-aff6-c05938980a77\" (UID: \"1170452f-d5d8-442b-aff6-c05938980a77\") " Dec 13 02:00:16.595866 kubelet[1420]: I1213 02:00:16.595593 1420 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1170452f-d5d8-442b-aff6-c05938980a77-xtables-lock\") pod \"1170452f-d5d8-442b-aff6-c05938980a77\" (UID: \"1170452f-d5d8-442b-aff6-c05938980a77\") " Dec 13 02:00:16.595866 kubelet[1420]: I1213 02:00:16.595627 1420 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1170452f-d5d8-442b-aff6-c05938980a77-host-proc-sys-kernel\") pod \"1170452f-d5d8-442b-aff6-c05938980a77\" (UID: \"1170452f-d5d8-442b-aff6-c05938980a77\") " Dec 13 02:00:16.595866 kubelet[1420]: I1213 02:00:16.595651 1420 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1170452f-d5d8-442b-aff6-c05938980a77-hubble-tls\") pod \"1170452f-d5d8-442b-aff6-c05938980a77\" (UID: \"1170452f-d5d8-442b-aff6-c05938980a77\") " Dec 13 02:00:16.595866 kubelet[1420]: I1213 02:00:16.595673 1420 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1170452f-d5d8-442b-aff6-c05938980a77-cni-path\") pod \"1170452f-d5d8-442b-aff6-c05938980a77\" (UID: \"1170452f-d5d8-442b-aff6-c05938980a77\") " Dec 13 02:00:16.595866 kubelet[1420]: I1213 02:00:16.595698 1420 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1170452f-d5d8-442b-aff6-c05938980a77-cilium-config-path\") pod \"1170452f-d5d8-442b-aff6-c05938980a77\" (UID: \"1170452f-d5d8-442b-aff6-c05938980a77\") " Dec 13 02:00:16.596070 kubelet[1420]: I1213 02:00:16.595689 1420 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1170452f-d5d8-442b-aff6-c05938980a77-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "1170452f-d5d8-442b-aff6-c05938980a77" (UID: "1170452f-d5d8-442b-aff6-c05938980a77"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:00:16.596070 kubelet[1420]: I1213 02:00:16.595724 1420 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1170452f-d5d8-442b-aff6-c05938980a77-clustermesh-secrets\") pod \"1170452f-d5d8-442b-aff6-c05938980a77\" (UID: \"1170452f-d5d8-442b-aff6-c05938980a77\") " Dec 13 02:00:16.596070 kubelet[1420]: I1213 02:00:16.595805 1420 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1170452f-d5d8-442b-aff6-c05938980a77-lib-modules\") pod \"1170452f-d5d8-442b-aff6-c05938980a77\" (UID: \"1170452f-d5d8-442b-aff6-c05938980a77\") " Dec 13 02:00:16.596070 kubelet[1420]: I1213 02:00:16.595841 1420 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1170452f-d5d8-442b-aff6-c05938980a77-cilium-cgroup\") pod \"1170452f-d5d8-442b-aff6-c05938980a77\" (UID: \"1170452f-d5d8-442b-aff6-c05938980a77\") " Dec 13 02:00:16.596070 kubelet[1420]: I1213 02:00:16.595869 1420 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1170452f-d5d8-442b-aff6-c05938980a77-cilium-run\") pod \"1170452f-d5d8-442b-aff6-c05938980a77\" (UID: \"1170452f-d5d8-442b-aff6-c05938980a77\") " Dec 13 02:00:16.596070 kubelet[1420]: I1213 02:00:16.595895 1420 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1170452f-d5d8-442b-aff6-c05938980a77-hostproc\") pod \"1170452f-d5d8-442b-aff6-c05938980a77\" (UID: \"1170452f-d5d8-442b-aff6-c05938980a77\") " Dec 13 02:00:16.596299 kubelet[1420]: I1213 02:00:16.595921 1420 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1170452f-d5d8-442b-aff6-c05938980a77-bpf-maps\") pod \"1170452f-d5d8-442b-aff6-c05938980a77\" (UID: \"1170452f-d5d8-442b-aff6-c05938980a77\") " Dec 13 02:00:16.596299 kubelet[1420]: I1213 02:00:16.595973 1420 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1170452f-d5d8-442b-aff6-c05938980a77-host-proc-sys-net\") on node \"10.0.0.55\" DevicePath \"\"" Dec 13 02:00:16.596299 kubelet[1420]: I1213 02:00:16.596001 1420 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1170452f-d5d8-442b-aff6-c05938980a77-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "1170452f-d5d8-442b-aff6-c05938980a77" (UID: "1170452f-d5d8-442b-aff6-c05938980a77"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:00:16.596299 kubelet[1420]: I1213 02:00:16.596025 1420 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1170452f-d5d8-442b-aff6-c05938980a77-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "1170452f-d5d8-442b-aff6-c05938980a77" (UID: "1170452f-d5d8-442b-aff6-c05938980a77"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:00:16.596299 kubelet[1420]: I1213 02:00:16.596044 1420 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1170452f-d5d8-442b-aff6-c05938980a77-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1170452f-d5d8-442b-aff6-c05938980a77" (UID: "1170452f-d5d8-442b-aff6-c05938980a77"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:00:16.596471 kubelet[1420]: I1213 02:00:16.596061 1420 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1170452f-d5d8-442b-aff6-c05938980a77-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "1170452f-d5d8-442b-aff6-c05938980a77" (UID: "1170452f-d5d8-442b-aff6-c05938980a77"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:00:16.596471 kubelet[1420]: I1213 02:00:16.596079 1420 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1170452f-d5d8-442b-aff6-c05938980a77-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "1170452f-d5d8-442b-aff6-c05938980a77" (UID: "1170452f-d5d8-442b-aff6-c05938980a77"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:00:16.596471 kubelet[1420]: I1213 02:00:16.596100 1420 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1170452f-d5d8-442b-aff6-c05938980a77-hostproc" (OuterVolumeSpecName: "hostproc") pod "1170452f-d5d8-442b-aff6-c05938980a77" (UID: "1170452f-d5d8-442b-aff6-c05938980a77"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:00:16.596471 kubelet[1420]: I1213 02:00:16.596130 1420 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1170452f-d5d8-442b-aff6-c05938980a77-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "1170452f-d5d8-442b-aff6-c05938980a77" (UID: "1170452f-d5d8-442b-aff6-c05938980a77"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:00:16.599327 kubelet[1420]: I1213 02:00:16.597777 1420 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1170452f-d5d8-442b-aff6-c05938980a77-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1170452f-d5d8-442b-aff6-c05938980a77" (UID: "1170452f-d5d8-442b-aff6-c05938980a77"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:00:16.599327 kubelet[1420]: I1213 02:00:16.597863 1420 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1170452f-d5d8-442b-aff6-c05938980a77-cni-path" (OuterVolumeSpecName: "cni-path") pod "1170452f-d5d8-442b-aff6-c05938980a77" (UID: "1170452f-d5d8-442b-aff6-c05938980a77"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:00:16.600303 kubelet[1420]: I1213 02:00:16.600263 1420 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1170452f-d5d8-442b-aff6-c05938980a77-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1170452f-d5d8-442b-aff6-c05938980a77" (UID: "1170452f-d5d8-442b-aff6-c05938980a77"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 02:00:16.604795 kubelet[1420]: I1213 02:00:16.604665 1420 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1170452f-d5d8-442b-aff6-c05938980a77-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "1170452f-d5d8-442b-aff6-c05938980a77" (UID: "1170452f-d5d8-442b-aff6-c05938980a77"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 02:00:16.606540 kubelet[1420]: I1213 02:00:16.606463 1420 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1170452f-d5d8-442b-aff6-c05938980a77-kube-api-access-rjfxw" (OuterVolumeSpecName: "kube-api-access-rjfxw") pod "1170452f-d5d8-442b-aff6-c05938980a77" (UID: "1170452f-d5d8-442b-aff6-c05938980a77"). InnerVolumeSpecName "kube-api-access-rjfxw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:00:16.611914 kubelet[1420]: I1213 02:00:16.611832 1420 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1170452f-d5d8-442b-aff6-c05938980a77-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "1170452f-d5d8-442b-aff6-c05938980a77" (UID: "1170452f-d5d8-442b-aff6-c05938980a77"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 02:00:16.611914 kubelet[1420]: I1213 02:00:16.611882 1420 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1170452f-d5d8-442b-aff6-c05938980a77-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "1170452f-d5d8-442b-aff6-c05938980a77" (UID: "1170452f-d5d8-442b-aff6-c05938980a77"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:00:16.696911 kubelet[1420]: I1213 02:00:16.696865 1420 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1170452f-d5d8-442b-aff6-c05938980a77-bpf-maps\") on node \"10.0.0.55\" DevicePath \"\"" Dec 13 02:00:16.697142 kubelet[1420]: I1213 02:00:16.697124 1420 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1170452f-d5d8-442b-aff6-c05938980a77-etc-cni-netd\") on node \"10.0.0.55\" DevicePath \"\"" Dec 13 02:00:16.697248 kubelet[1420]: I1213 02:00:16.697228 1420 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-rjfxw\" (UniqueName: \"kubernetes.io/projected/1170452f-d5d8-442b-aff6-c05938980a77-kube-api-access-rjfxw\") on node \"10.0.0.55\" DevicePath \"\"" Dec 13 02:00:16.697362 kubelet[1420]: I1213 02:00:16.697345 1420 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1170452f-d5d8-442b-aff6-c05938980a77-cilium-ipsec-secrets\") on node \"10.0.0.55\" DevicePath \"\"" Dec 13 02:00:16.697470 kubelet[1420]: I1213 02:00:16.697452 1420 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1170452f-d5d8-442b-aff6-c05938980a77-xtables-lock\") on node \"10.0.0.55\" DevicePath \"\"" Dec 13 02:00:16.697607 kubelet[1420]: I1213 02:00:16.697577 1420 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1170452f-d5d8-442b-aff6-c05938980a77-host-proc-sys-kernel\") on node \"10.0.0.55\" DevicePath \"\"" Dec 13 02:00:16.697712 kubelet[1420]: I1213 02:00:16.697694 1420 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1170452f-d5d8-442b-aff6-c05938980a77-hubble-tls\") on node \"10.0.0.55\" DevicePath \"\"" Dec 13 02:00:16.697816 kubelet[1420]: I1213 02:00:16.697799 1420 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1170452f-d5d8-442b-aff6-c05938980a77-cni-path\") on node \"10.0.0.55\" DevicePath \"\"" Dec 13 02:00:16.697938 kubelet[1420]: I1213 02:00:16.697919 1420 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1170452f-d5d8-442b-aff6-c05938980a77-cilium-config-path\") on node \"10.0.0.55\" DevicePath \"\"" Dec 13 02:00:16.698059 kubelet[1420]: I1213 02:00:16.698042 1420 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1170452f-d5d8-442b-aff6-c05938980a77-clustermesh-secrets\") on node \"10.0.0.55\" DevicePath \"\"" Dec 13 02:00:16.698156 kubelet[1420]: I1213 02:00:16.698140 1420 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1170452f-d5d8-442b-aff6-c05938980a77-lib-modules\") on node \"10.0.0.55\" DevicePath \"\"" Dec 13 02:00:16.698254 kubelet[1420]: I1213 02:00:16.698237 1420 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1170452f-d5d8-442b-aff6-c05938980a77-cilium-cgroup\") on node \"10.0.0.55\" DevicePath \"\"" Dec 13 02:00:16.698350 kubelet[1420]: I1213 02:00:16.698333 1420 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1170452f-d5d8-442b-aff6-c05938980a77-cilium-run\") on node 
\"10.0.0.55\" DevicePath \"\"" Dec 13 02:00:16.698444 kubelet[1420]: I1213 02:00:16.698428 1420 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1170452f-d5d8-442b-aff6-c05938980a77-hostproc\") on node \"10.0.0.55\" DevicePath \"\"" Dec 13 02:00:16.702074 systemd[1]: var-lib-kubelet-pods-1170452f\x2dd5d8\x2d442b\x2daff6\x2dc05938980a77-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 02:00:16.702181 systemd[1]: var-lib-kubelet-pods-1170452f\x2dd5d8\x2d442b\x2daff6\x2dc05938980a77-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drjfxw.mount: Deactivated successfully. Dec 13 02:00:16.702253 systemd[1]: var-lib-kubelet-pods-1170452f\x2dd5d8\x2d442b\x2daff6\x2dc05938980a77-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 02:00:16.702318 systemd[1]: var-lib-kubelet-pods-1170452f\x2dd5d8\x2d442b\x2daff6\x2dc05938980a77-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Dec 13 02:00:16.805799 kubelet[1420]: E1213 02:00:16.805728 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:00:17.143661 systemd[1]: Removed slice kubepods-burstable-pod1170452f_d5d8_442b_aff6_c05938980a77.slice. Dec 13 02:00:17.523611 kubelet[1420]: I1213 02:00:17.522736 1420 topology_manager.go:215] "Topology Admit Handler" podUID="fa3b26c8-48d7-4248-b4a7-43b5f10393f1" podNamespace="kube-system" podName="cilium-b5bjn" Dec 13 02:00:17.534914 systemd[1]: Created slice kubepods-burstable-podfa3b26c8_48d7_4248_b4a7_43b5f10393f1.slice. Dec 13 02:00:17.707202 kubelet[1420]: I1213 02:00:17.707120 1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fa3b26c8-48d7-4248-b4a7-43b5f10393f1-lib-modules\") pod \"cilium-b5bjn\" (UID: \"fa3b26c8-48d7-4248-b4a7-43b5f10393f1\") " pod="kube-system/cilium-b5bjn" Dec 13 02:00:17.707202 kubelet[1420]: I1213 02:00:17.707188 1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fa3b26c8-48d7-4248-b4a7-43b5f10393f1-cilium-run\") pod \"cilium-b5bjn\" (UID: \"fa3b26c8-48d7-4248-b4a7-43b5f10393f1\") " pod="kube-system/cilium-b5bjn" Dec 13 02:00:17.707202 kubelet[1420]: I1213 02:00:17.707212 1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fa3b26c8-48d7-4248-b4a7-43b5f10393f1-etc-cni-netd\") pod \"cilium-b5bjn\" (UID: \"fa3b26c8-48d7-4248-b4a7-43b5f10393f1\") " pod="kube-system/cilium-b5bjn" Dec 13 02:00:17.707523 kubelet[1420]: I1213 02:00:17.707242 1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fa3b26c8-48d7-4248-b4a7-43b5f10393f1-hubble-tls\") pod \"cilium-b5bjn\" (UID: \"fa3b26c8-48d7-4248-b4a7-43b5f10393f1\") " pod="kube-system/cilium-b5bjn" Dec 13 02:00:17.707523 kubelet[1420]: I1213 02:00:17.707271 1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fa3b26c8-48d7-4248-b4a7-43b5f10393f1-cilium-cgroup\") pod \"cilium-b5bjn\" (UID: \"fa3b26c8-48d7-4248-b4a7-43b5f10393f1\") " pod="kube-system/cilium-b5bjn" Dec 13 
02:00:17.707523 kubelet[1420]: I1213 02:00:17.707298 1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fa3b26c8-48d7-4248-b4a7-43b5f10393f1-clustermesh-secrets\") pod \"cilium-b5bjn\" (UID: \"fa3b26c8-48d7-4248-b4a7-43b5f10393f1\") " pod="kube-system/cilium-b5bjn" Dec 13 02:00:17.707523 kubelet[1420]: I1213 02:00:17.707326 1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/fa3b26c8-48d7-4248-b4a7-43b5f10393f1-cilium-ipsec-secrets\") pod \"cilium-b5bjn\" (UID: \"fa3b26c8-48d7-4248-b4a7-43b5f10393f1\") " pod="kube-system/cilium-b5bjn" Dec 13 02:00:17.707523 kubelet[1420]: I1213 02:00:17.707354 1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fa3b26c8-48d7-4248-b4a7-43b5f10393f1-host-proc-sys-kernel\") pod \"cilium-b5bjn\" (UID: \"fa3b26c8-48d7-4248-b4a7-43b5f10393f1\") " pod="kube-system/cilium-b5bjn" Dec 13 02:00:17.707523 kubelet[1420]: I1213 02:00:17.707379 1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fa3b26c8-48d7-4248-b4a7-43b5f10393f1-hostproc\") pod \"cilium-b5bjn\" (UID: \"fa3b26c8-48d7-4248-b4a7-43b5f10393f1\") " pod="kube-system/cilium-b5bjn" Dec 13 02:00:17.707793 kubelet[1420]: I1213 02:00:17.707401 1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fa3b26c8-48d7-4248-b4a7-43b5f10393f1-cni-path\") pod \"cilium-b5bjn\" (UID: \"fa3b26c8-48d7-4248-b4a7-43b5f10393f1\") " pod="kube-system/cilium-b5bjn" Dec 13 02:00:17.707793 kubelet[1420]: I1213 02:00:17.707423 1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fa3b26c8-48d7-4248-b4a7-43b5f10393f1-xtables-lock\") pod \"cilium-b5bjn\" (UID: \"fa3b26c8-48d7-4248-b4a7-43b5f10393f1\") " pod="kube-system/cilium-b5bjn" Dec 13 02:00:17.707793 kubelet[1420]: I1213 02:00:17.707446 1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fa3b26c8-48d7-4248-b4a7-43b5f10393f1-cilium-config-path\") pod \"cilium-b5bjn\" (UID: \"fa3b26c8-48d7-4248-b4a7-43b5f10393f1\") " pod="kube-system/cilium-b5bjn" Dec 13 02:00:17.707793 kubelet[1420]: I1213 02:00:17.707471 1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fhqs\" (UniqueName: \"kubernetes.io/projected/fa3b26c8-48d7-4248-b4a7-43b5f10393f1-kube-api-access-2fhqs\") pod \"cilium-b5bjn\" (UID: \"fa3b26c8-48d7-4248-b4a7-43b5f10393f1\") " pod="kube-system/cilium-b5bjn" Dec 13 02:00:17.707793 kubelet[1420]: I1213 02:00:17.707509 1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fa3b26c8-48d7-4248-b4a7-43b5f10393f1-bpf-maps\") pod \"cilium-b5bjn\" (UID: \"fa3b26c8-48d7-4248-b4a7-43b5f10393f1\") " pod="kube-system/cilium-b5bjn" Dec 13 02:00:17.707793 kubelet[1420]: I1213 02:00:17.707533 1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fa3b26c8-48d7-4248-b4a7-43b5f10393f1-host-proc-sys-net\") pod \"cilium-b5bjn\" (UID: \"fa3b26c8-48d7-4248-b4a7-43b5f10393f1\") " pod="kube-system/cilium-b5bjn" Dec 13 02:00:17.806453 kubelet[1420]: E1213 02:00:17.806289 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:00:18.157921 kubelet[1420]: E1213 02:00:18.157765 1420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:00:18.159218 env[1212]: time="2024-12-13T02:00:18.158682465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b5bjn,Uid:fa3b26c8-48d7-4248-b4a7-43b5f10393f1,Namespace:kube-system,Attempt:0,}" Dec 13 02:00:18.267811 env[1212]: time="2024-12-13T02:00:18.262307583Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:00:18.267811 env[1212]: time="2024-12-13T02:00:18.262360632Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:00:18.267811 env[1212]: time="2024-12-13T02:00:18.262375951Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:00:18.267811 env[1212]: time="2024-12-13T02:00:18.262662319Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/924ec13c49b277721667d12b50d616e1bfd81aaca2c75be786741ee7c8763218 pid=3039 runtime=io.containerd.runc.v2 Dec 13 02:00:18.291574 systemd[1]: Started cri-containerd-924ec13c49b277721667d12b50d616e1bfd81aaca2c75be786741ee7c8763218.scope. Dec 13 02:00:18.330870 env[1212]: time="2024-12-13T02:00:18.330778156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b5bjn,Uid:fa3b26c8-48d7-4248-b4a7-43b5f10393f1,Namespace:kube-system,Attempt:0,} returns sandbox id \"924ec13c49b277721667d12b50d616e1bfd81aaca2c75be786741ee7c8763218\"" Dec 13 02:00:18.331798 kubelet[1420]: E1213 02:00:18.331765 1420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:00:18.335217 env[1212]: time="2024-12-13T02:00:18.335140358Z" level=info msg="CreateContainer within sandbox \"924ec13c49b277721667d12b50d616e1bfd81aaca2c75be786741ee7c8763218\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 02:00:18.679504 env[1212]: time="2024-12-13T02:00:18.679360393Z" level=info msg="CreateContainer within sandbox \"924ec13c49b277721667d12b50d616e1bfd81aaca2c75be786741ee7c8763218\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b9c7da6b17cc2c663c32d684922996ad48b359c63b59c986dccc091bae2f3e17\"" Dec 13 02:00:18.682060 env[1212]: time="2024-12-13T02:00:18.681972467Z" level=info msg="StartContainer for \"b9c7da6b17cc2c663c32d684922996ad48b359c63b59c986dccc091bae2f3e17\"" Dec 13 02:00:18.729929 systemd[1]: Started cri-containerd-b9c7da6b17cc2c663c32d684922996ad48b359c63b59c986dccc091bae2f3e17.scope. 
Dec 13 02:00:18.790250 env[1212]: time="2024-12-13T02:00:18.790151167Z" level=info msg="StartContainer for \"b9c7da6b17cc2c663c32d684922996ad48b359c63b59c986dccc091bae2f3e17\" returns successfully" Dec 13 02:00:18.815187 kubelet[1420]: E1213 02:00:18.815111 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:00:18.819303 systemd[1]: cri-containerd-b9c7da6b17cc2c663c32d684922996ad48b359c63b59c986dccc091bae2f3e17.scope: Deactivated successfully. Dec 13 02:00:18.880182 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b9c7da6b17cc2c663c32d684922996ad48b359c63b59c986dccc091bae2f3e17-rootfs.mount: Deactivated successfully. Dec 13 02:00:18.907164 env[1212]: time="2024-12-13T02:00:18.907069345Z" level=info msg="shim disconnected" id=b9c7da6b17cc2c663c32d684922996ad48b359c63b59c986dccc091bae2f3e17 Dec 13 02:00:18.907164 env[1212]: time="2024-12-13T02:00:18.907145909Z" level=warning msg="cleaning up after shim disconnected" id=b9c7da6b17cc2c663c32d684922996ad48b359c63b59c986dccc091bae2f3e17 namespace=k8s.io Dec 13 02:00:18.907164 env[1212]: time="2024-12-13T02:00:18.907160767Z" level=info msg="cleaning up dead shim" Dec 13 02:00:18.926028 env[1212]: time="2024-12-13T02:00:18.925944809Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:00:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3122 runtime=io.containerd.runc.v2\n" Dec 13 02:00:19.121607 kubelet[1420]: E1213 02:00:19.118532 1420 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 02:00:19.131735 kubelet[1420]: I1213 02:00:19.131642 1420 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="1170452f-d5d8-442b-aff6-c05938980a77" path="/var/lib/kubelet/pods/1170452f-d5d8-442b-aff6-c05938980a77/volumes" Dec 13 02:00:19.464921 kubelet[1420]: E1213 02:00:19.464758 1420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:00:19.468987 env[1212]: time="2024-12-13T02:00:19.468903085Z" level=info msg="CreateContainer within sandbox \"924ec13c49b277721667d12b50d616e1bfd81aaca2c75be786741ee7c8763218\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 02:00:19.496183 env[1212]: time="2024-12-13T02:00:19.495929641Z" level=info msg="CreateContainer within sandbox \"924ec13c49b277721667d12b50d616e1bfd81aaca2c75be786741ee7c8763218\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"28cbd1c65967cfdd5df27bbd3bd35ee6ced790fec84cd465c7cc03cca46e3173\"" Dec 13 02:00:19.497112 env[1212]: time="2024-12-13T02:00:19.497066515Z" level=info msg="StartContainer for \"28cbd1c65967cfdd5df27bbd3bd35ee6ced790fec84cd465c7cc03cca46e3173\"" Dec 13 02:00:19.536880 systemd[1]: Started cri-containerd-28cbd1c65967cfdd5df27bbd3bd35ee6ced790fec84cd465c7cc03cca46e3173.scope. Dec 13 02:00:19.580155 env[1212]: time="2024-12-13T02:00:19.580076208Z" level=info msg="StartContainer for \"28cbd1c65967cfdd5df27bbd3bd35ee6ced790fec84cd465c7cc03cca46e3173\" returns successfully" Dec 13 02:00:19.588091 systemd[1]: cri-containerd-28cbd1c65967cfdd5df27bbd3bd35ee6ced790fec84cd465c7cc03cca46e3173.scope: Deactivated successfully. 
Dec 13 02:00:19.623385 env[1212]: time="2024-12-13T02:00:19.623311850Z" level=info msg="shim disconnected" id=28cbd1c65967cfdd5df27bbd3bd35ee6ced790fec84cd465c7cc03cca46e3173 Dec 13 02:00:19.623385 env[1212]: time="2024-12-13T02:00:19.623380528Z" level=warning msg="cleaning up after shim disconnected" id=28cbd1c65967cfdd5df27bbd3bd35ee6ced790fec84cd465c7cc03cca46e3173 namespace=k8s.io Dec 13 02:00:19.623385 env[1212]: time="2024-12-13T02:00:19.623393052Z" level=info msg="cleaning up dead shim" Dec 13 02:00:19.633323 env[1212]: time="2024-12-13T02:00:19.633251262Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:00:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3184 runtime=io.containerd.runc.v2\n" Dec 13 02:00:19.815689 kubelet[1420]: E1213 02:00:19.815608 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:00:19.831723 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-28cbd1c65967cfdd5df27bbd3bd35ee6ced790fec84cd465c7cc03cca46e3173-rootfs.mount: Deactivated successfully. Dec 13 02:00:20.102611 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3661883220.mount: Deactivated successfully. Dec 13 02:00:20.467475 kubelet[1420]: E1213 02:00:20.467257 1420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:00:20.469131 env[1212]: time="2024-12-13T02:00:20.469077776Z" level=info msg="CreateContainer within sandbox \"924ec13c49b277721667d12b50d616e1bfd81aaca2c75be786741ee7c8763218\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 02:00:20.736569 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3930935618.mount: Deactivated successfully. Dec 13 02:00:20.741579 env[1212]: time="2024-12-13T02:00:20.741512142Z" level=info msg="CreateContainer within sandbox \"924ec13c49b277721667d12b50d616e1bfd81aaca2c75be786741ee7c8763218\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ce6c005747313f566a0dd18770699ccfaab1c7bc00313bf5bd66df3f82fc4fbe\"" Dec 13 02:00:20.742056 env[1212]: time="2024-12-13T02:00:20.742032088Z" level=info msg="StartContainer for \"ce6c005747313f566a0dd18770699ccfaab1c7bc00313bf5bd66df3f82fc4fbe\"" Dec 13 02:00:20.756652 systemd[1]: Started cri-containerd-ce6c005747313f566a0dd18770699ccfaab1c7bc00313bf5bd66df3f82fc4fbe.scope. Dec 13 02:00:20.779669 env[1212]: time="2024-12-13T02:00:20.779540150Z" level=info msg="StartContainer for \"ce6c005747313f566a0dd18770699ccfaab1c7bc00313bf5bd66df3f82fc4fbe\" returns successfully" Dec 13 02:00:20.781703 systemd[1]: cri-containerd-ce6c005747313f566a0dd18770699ccfaab1c7bc00313bf5bd66df3f82fc4fbe.scope: Deactivated successfully. 
Dec 13 02:00:20.809183 env[1212]: time="2024-12-13T02:00:20.809122711Z" level=info msg="shim disconnected" id=ce6c005747313f566a0dd18770699ccfaab1c7bc00313bf5bd66df3f82fc4fbe Dec 13 02:00:20.809183 env[1212]: time="2024-12-13T02:00:20.809185559Z" level=warning msg="cleaning up after shim disconnected" id=ce6c005747313f566a0dd18770699ccfaab1c7bc00313bf5bd66df3f82fc4fbe namespace=k8s.io Dec 13 02:00:20.809394 env[1212]: time="2024-12-13T02:00:20.809196009Z" level=info msg="cleaning up dead shim" Dec 13 02:00:20.815841 kubelet[1420]: E1213 02:00:20.815792 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:00:20.816191 env[1212]: time="2024-12-13T02:00:20.815969646Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:00:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3241 runtime=io.containerd.runc.v2\n" Dec 13 02:00:20.913587 kubelet[1420]: I1213 02:00:20.913522 1420 setters.go:568] "Node became not ready" node="10.0.0.55" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T02:00:20Z","lastTransitionTime":"2024-12-13T02:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 13 02:00:21.470822 kubelet[1420]: E1213 02:00:21.470792 1420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:00:21.472395 env[1212]: time="2024-12-13T02:00:21.472360437Z" level=info msg="CreateContainer within sandbox \"924ec13c49b277721667d12b50d616e1bfd81aaca2c75be786741ee7c8763218\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 02:00:21.816013 kubelet[1420]: E1213 02:00:21.815963 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:00:22.301178 env[1212]: time="2024-12-13T02:00:22.301099961Z" level=info msg="CreateContainer within sandbox \"924ec13c49b277721667d12b50d616e1bfd81aaca2c75be786741ee7c8763218\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"884f3eb34c182fb07c6acaaedda8a9fd4fde7d4e3bb58f59cb192306ed28b8fa\"" Dec 13 02:00:22.301823 env[1212]: time="2024-12-13T02:00:22.301772754Z" level=info msg="StartContainer for \"884f3eb34c182fb07c6acaaedda8a9fd4fde7d4e3bb58f59cb192306ed28b8fa\"" Dec 13 02:00:22.319300 systemd[1]: Started cri-containerd-884f3eb34c182fb07c6acaaedda8a9fd4fde7d4e3bb58f59cb192306ed28b8fa.scope. Dec 13 02:00:22.338781 systemd[1]: cri-containerd-884f3eb34c182fb07c6acaaedda8a9fd4fde7d4e3bb58f59cb192306ed28b8fa.scope: Deactivated successfully. 
Dec 13 02:00:22.367178 env[1212]: time="2024-12-13T02:00:22.367114546Z" level=info msg="StartContainer for \"884f3eb34c182fb07c6acaaedda8a9fd4fde7d4e3bb58f59cb192306ed28b8fa\" returns successfully"
Dec 13 02:00:22.463854 env[1212]: time="2024-12-13T02:00:22.463768269Z" level=info msg="shim disconnected" id=884f3eb34c182fb07c6acaaedda8a9fd4fde7d4e3bb58f59cb192306ed28b8fa
Dec 13 02:00:22.463854 env[1212]: time="2024-12-13T02:00:22.463827951Z" level=warning msg="cleaning up after shim disconnected" id=884f3eb34c182fb07c6acaaedda8a9fd4fde7d4e3bb58f59cb192306ed28b8fa namespace=k8s.io
Dec 13 02:00:22.463854 env[1212]: time="2024-12-13T02:00:22.463837619Z" level=info msg="cleaning up dead shim"
Dec 13 02:00:22.470469 env[1212]: time="2024-12-13T02:00:22.470408123Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:00:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3294 runtime=io.containerd.runc.v2\n"
Dec 13 02:00:22.476972 kubelet[1420]: E1213 02:00:22.476937 1420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 02:00:22.478768 env[1212]: time="2024-12-13T02:00:22.478731809Z" level=info msg="CreateContainer within sandbox \"924ec13c49b277721667d12b50d616e1bfd81aaca2c75be786741ee7c8763218\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 02:00:22.502774 env[1212]: time="2024-12-13T02:00:22.502713308Z" level=info msg="CreateContainer within sandbox \"924ec13c49b277721667d12b50d616e1bfd81aaca2c75be786741ee7c8763218\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"794f69e4e1909d194b4f46bb380db91bd1732bf078eb8bff3d4916836b6d8cf5\""
Dec 13 02:00:22.503131 env[1212]: time="2024-12-13T02:00:22.503109512Z" level=info msg="StartContainer for \"794f69e4e1909d194b4f46bb380db91bd1732bf078eb8bff3d4916836b6d8cf5\""
Dec 13 02:00:22.516443 systemd[1]: Started cri-containerd-794f69e4e1909d194b4f46bb380db91bd1732bf078eb8bff3d4916836b6d8cf5.scope.
Dec 13 02:00:22.545683 env[1212]: time="2024-12-13T02:00:22.544483551Z" level=info msg="StartContainer for \"794f69e4e1909d194b4f46bb380db91bd1732bf078eb8bff3d4916836b6d8cf5\" returns successfully"
Dec 13 02:00:22.816431 kubelet[1420]: E1213 02:00:22.816384 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:00:22.874600 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Dec 13 02:00:22.901927 env[1212]: time="2024-12-13T02:00:22.901889790Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:00:22.903827 env[1212]: time="2024-12-13T02:00:22.903801167Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:00:22.905650 env[1212]: time="2024-12-13T02:00:22.905598412Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:00:22.906376 env[1212]: time="2024-12-13T02:00:22.906324114Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Dec 13 02:00:22.908479 env[1212]: time="2024-12-13T02:00:22.908422012Z" level=info msg="CreateContainer within sandbox \"5c87bfab76b955de9d6f5aa21e9bee98015cf2ac12b643b7edecd600231d7803\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Dec 13 02:00:22.923747 env[1212]: time="2024-12-13T02:00:22.923680967Z" level=info msg="CreateContainer within sandbox \"5c87bfab76b955de9d6f5aa21e9bee98015cf2ac12b643b7edecd600231d7803\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"665be261b083b6c105095d2950b36ac969bbb2a86c91cd546ebc4e7e0ddb0ce9\""
Dec 13 02:00:22.924223 env[1212]: time="2024-12-13T02:00:22.924193219Z" level=info msg="StartContainer for \"665be261b083b6c105095d2950b36ac969bbb2a86c91cd546ebc4e7e0ddb0ce9\""
Dec 13 02:00:22.937906 systemd[1]: Started cri-containerd-665be261b083b6c105095d2950b36ac969bbb2a86c91cd546ebc4e7e0ddb0ce9.scope.
Dec 13 02:00:23.004867 env[1212]: time="2024-12-13T02:00:23.004809956Z" level=info msg="StartContainer for \"665be261b083b6c105095d2950b36ac969bbb2a86c91cd546ebc4e7e0ddb0ce9\" returns successfully"
Dec 13 02:00:23.143007 systemd[1]: run-containerd-runc-k8s.io-884f3eb34c182fb07c6acaaedda8a9fd4fde7d4e3bb58f59cb192306ed28b8fa-runc.jhPLGs.mount: Deactivated successfully.
Dec 13 02:00:23.143092 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-884f3eb34c182fb07c6acaaedda8a9fd4fde7d4e3bb58f59cb192306ed28b8fa-rootfs.mount: Deactivated successfully.
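The entries above show containerd pulling the cilium-operator image and starting the container inside the existing sandbox. A short sketch, separate from the log's own tooling, of how the resulting task state could be inspected with the containerd Go client; the socket path and the "k8s.io" namespace are the containerd defaults for CRI-managed containers, and the container ID is copied from the log:

```go
// Query the task state of the cilium-operator container via the containerd client.
package main

import (
	"context"
	"fmt"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// CRI-managed containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	const id = "665be261b083b6c105095d2950b36ac969bbb2a86c91cd546ebc4e7e0ddb0ce9"
	ctr, err := client.LoadContainer(ctx, id)
	if err != nil {
		panic(err)
	}
	task, err := ctr.Task(ctx, nil)
	if err != nil {
		panic(err)
	}
	status, err := task.Status(ctx)
	if err != nil {
		panic(err)
	}
	// After "StartContainer ... returns successfully" this should report "running".
	fmt.Printf("container %s task status: %s (pid %d)\n", id[:12], status.Status, task.Pid())
}
```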
Dec 13 02:00:23.480138 kubelet[1420]: E1213 02:00:23.480100 1420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 02:00:23.483455 kubelet[1420]: E1213 02:00:23.483408 1420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 02:00:23.488204 kubelet[1420]: I1213 02:00:23.488172 1420 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-cjntg" podStartSLOduration=1.427519665 podStartE2EDuration="8.48812123s" podCreationTimestamp="2024-12-13 02:00:15 +0000 UTC" firstStartedPulling="2024-12-13 02:00:15.846030868 +0000 UTC m=+67.400540600" lastFinishedPulling="2024-12-13 02:00:22.906632433 +0000 UTC m=+74.461142165" observedRunningTime="2024-12-13 02:00:23.488045177 +0000 UTC m=+75.042554910" watchObservedRunningTime="2024-12-13 02:00:23.48812123 +0000 UTC m=+75.042630972"
Dec 13 02:00:23.555149 kubelet[1420]: I1213 02:00:23.555086 1420 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-b5bjn" podStartSLOduration=6.555035396 podStartE2EDuration="6.555035396s" podCreationTimestamp="2024-12-13 02:00:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:00:23.554827696 +0000 UTC m=+75.109337418" watchObservedRunningTime="2024-12-13 02:00:23.555035396 +0000 UTC m=+75.109545128"
Dec 13 02:00:23.816973 kubelet[1420]: E1213 02:00:23.816803 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:00:24.485867 kubelet[1420]: E1213 02:00:24.485816 1420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 02:00:24.486142 kubelet[1420]: E1213 02:00:24.486105 1420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 02:00:24.817768 kubelet[1420]: E1213 02:00:24.817544 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:00:25.488823 kubelet[1420]: E1213 02:00:25.488771 1420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 02:00:25.705449 systemd-networkd[1032]: lxc_health: Link UP
Dec 13 02:00:25.719143 systemd-networkd[1032]: lxc_health: Gained carrier
Dec 13 02:00:25.719754 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 02:00:25.818837 kubelet[1420]: E1213 02:00:25.818696 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:00:26.175696 systemd[1]: run-containerd-runc-k8s.io-794f69e4e1909d194b4f46bb380db91bd1732bf078eb8bff3d4916836b6d8cf5-runc.Ko9EFH.mount: Deactivated successfully.
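The pod_startup_latency_tracker entries above report two numbers: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally subtracts the image pull window (firstStartedPulling to lastFinishedPulling). For cilium-operator that gives roughly 8.488 s end-to-end and 1.428 s excluding the pull, matching the logged values. A worked check in Go, using the timestamps copied from the cilium-operator entry (an illustration of the arithmetic, not kubelet code):

```go
// Recompute podStartE2EDuration and podStartSLOduration from logged timestamps.
package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST" // Go's default time.String() format

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2024-12-13 02:00:15 +0000 UTC")
	firstPull := mustParse("2024-12-13 02:00:15.846030868 +0000 UTC")
	lastPull := mustParse("2024-12-13 02:00:22.906632433 +0000 UTC")
	running := mustParse("2024-12-13 02:00:23.488045177 +0000 UTC")

	e2e := running.Sub(created)          // podStartE2EDuration, ~8.488s
	slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration, ~1.428s (pull time excluded)
	fmt.Printf("E2E=%v SLO=%v\n", e2e, slo)
}
```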
Dec 13 02:00:26.491043 kubelet[1420]: E1213 02:00:26.490820 1420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 02:00:26.819634 kubelet[1420]: E1213 02:00:26.819447 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:00:27.106425 systemd-networkd[1032]: lxc_health: Gained IPv6LL
Dec 13 02:00:27.492222 kubelet[1420]: E1213 02:00:27.492166 1420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 02:00:27.820947 kubelet[1420]: E1213 02:00:27.820744 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:00:28.310279 systemd[1]: run-containerd-runc-k8s.io-794f69e4e1909d194b4f46bb380db91bd1732bf078eb8bff3d4916836b6d8cf5-runc.V1HsrG.mount: Deactivated successfully.
Dec 13 02:00:28.495118 kubelet[1420]: E1213 02:00:28.495053 1420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 02:00:28.751700 kubelet[1420]: E1213 02:00:28.751624 1420 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:00:28.821392 kubelet[1420]: E1213 02:00:28.821305 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:00:29.821838 kubelet[1420]: E1213 02:00:29.821785 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:00:30.823595 kubelet[1420]: E1213 02:00:30.823514 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:00:31.824476 kubelet[1420]: E1213 02:00:31.824415 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:00:32.825464 kubelet[1420]: E1213 02:00:32.825404 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:00:33.826328 kubelet[1420]: E1213 02:00:33.826263 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
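The repeating "Unable to read config path" entries are the kubelet polling its static-pod manifest directory, /etc/kubernetes/manifests, which does not exist on this node; the message recurs roughly once per second in the log above. A minimal admin-side sketch (not kubelet code) that checks for the directory and creates it so the polling stops logging the error; the 0755 mode is an assumption matching common manifest-directory setups:

```go
// Check for the kubelet static-pod manifest directory and create it if missing.
package main

import (
	"fmt"
	"os"
)

func main() {
	const manifests = "/etc/kubernetes/manifests"
	if _, err := os.Stat(manifests); os.IsNotExist(err) {
		if err := os.MkdirAll(manifests, 0o755); err != nil {
			fmt.Fprintln(os.Stderr, "create manifest dir:", err)
			os.Exit(1)
		}
		fmt.Println("created", manifests)
		return
	}
	fmt.Println(manifests, "already exists")
}
```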