Dec 13 02:07:26.845373 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Dec 12 23:50:37 -00 2024 Dec 13 02:07:26.845391 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c Dec 13 02:07:26.845400 kernel: BIOS-provided physical RAM map: Dec 13 02:07:26.845405 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Dec 13 02:07:26.845411 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Dec 13 02:07:26.845416 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Dec 13 02:07:26.845422 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Dec 13 02:07:26.845428 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Dec 13 02:07:26.845436 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Dec 13 02:07:26.845441 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Dec 13 02:07:26.845447 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Dec 13 02:07:26.845452 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Dec 13 02:07:26.845457 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Dec 13 02:07:26.845463 kernel: NX (Execute Disable) protection: active Dec 13 02:07:26.845471 kernel: SMBIOS 2.8 present. Dec 13 02:07:26.845477 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Dec 13 02:07:26.845483 kernel: Hypervisor detected: KVM Dec 13 02:07:26.845489 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 13 02:07:26.845495 kernel: kvm-clock: cpu 0, msr 6a19b001, primary cpu clock Dec 13 02:07:26.845500 kernel: kvm-clock: using sched offset of 2390953732 cycles Dec 13 02:07:26.845507 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 13 02:07:26.845513 kernel: tsc: Detected 2794.748 MHz processor Dec 13 02:07:26.845519 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 13 02:07:26.845527 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 13 02:07:26.845555 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Dec 13 02:07:26.845569 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 13 02:07:26.845576 kernel: Using GB pages for direct mapping Dec 13 02:07:26.845582 kernel: ACPI: Early table checksum verification disabled Dec 13 02:07:26.845588 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Dec 13 02:07:26.845594 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 02:07:26.845600 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 02:07:26.845606 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 02:07:26.845614 kernel: ACPI: FACS 0x000000009CFE0000 000040 Dec 13 02:07:26.845620 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 02:07:26.845626 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 02:07:26.845632 kernel: ACPI: MCFG 
0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 02:07:26.845638 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 02:07:26.845644 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db] Dec 13 02:07:26.845650 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7] Dec 13 02:07:26.845656 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Dec 13 02:07:26.845666 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b] Dec 13 02:07:26.845672 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3] Dec 13 02:07:26.845679 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df] Dec 13 02:07:26.845685 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407] Dec 13 02:07:26.845691 kernel: No NUMA configuration found Dec 13 02:07:26.845698 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Dec 13 02:07:26.845705 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Dec 13 02:07:26.845712 kernel: Zone ranges: Dec 13 02:07:26.845718 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 02:07:26.845724 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Dec 13 02:07:26.845731 kernel: Normal empty Dec 13 02:07:26.845737 kernel: Movable zone start for each node Dec 13 02:07:26.845743 kernel: Early memory node ranges Dec 13 02:07:26.845756 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Dec 13 02:07:26.845763 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Dec 13 02:07:26.845770 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Dec 13 02:07:26.845779 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 02:07:26.845787 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Dec 13 02:07:26.845796 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Dec 13 02:07:26.845806 kernel: ACPI: PM-Timer IO Port: 0x608 Dec 13 02:07:26.845814 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 13 02:07:26.845822 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 13 02:07:26.845830 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Dec 13 02:07:26.845838 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 13 02:07:26.845847 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 13 02:07:26.845857 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 13 02:07:26.845866 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 13 02:07:26.845874 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 02:07:26.845891 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Dec 13 02:07:26.845900 kernel: TSC deadline timer available Dec 13 02:07:26.845907 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Dec 13 02:07:26.845915 kernel: kvm-guest: KVM setup pv remote TLB flush Dec 13 02:07:26.845923 kernel: kvm-guest: setup PV sched yield Dec 13 02:07:26.845931 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Dec 13 02:07:26.845939 kernel: Booting paravirtualized kernel on KVM Dec 13 02:07:26.845946 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 02:07:26.845953 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 Dec 13 02:07:26.845959 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 
d32488 u524288 Dec 13 02:07:26.845966 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152 Dec 13 02:07:26.845972 kernel: pcpu-alloc: [0] 0 1 2 3 Dec 13 02:07:26.845978 kernel: kvm-guest: setup async PF for cpu 0 Dec 13 02:07:26.845984 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0 Dec 13 02:07:26.845991 kernel: kvm-guest: PV spinlocks enabled Dec 13 02:07:26.845999 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 13 02:07:26.846005 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732 Dec 13 02:07:26.846011 kernel: Policy zone: DMA32 Dec 13 02:07:26.846029 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c Dec 13 02:07:26.846036 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 02:07:26.846056 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 02:07:26.846062 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 02:07:26.846069 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 02:07:26.846078 kernel: Memory: 2436696K/2571752K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47476K init, 4108K bss, 134796K reserved, 0K cma-reserved) Dec 13 02:07:26.846084 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Dec 13 02:07:26.846091 kernel: ftrace: allocating 34549 entries in 135 pages Dec 13 02:07:26.846097 kernel: ftrace: allocated 135 pages with 4 groups Dec 13 02:07:26.846104 kernel: rcu: Hierarchical RCU implementation. Dec 13 02:07:26.846111 kernel: rcu: RCU event tracing is enabled. Dec 13 02:07:26.846117 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Dec 13 02:07:26.846124 kernel: Rude variant of Tasks RCU enabled. Dec 13 02:07:26.846130 kernel: Tracing variant of Tasks RCU enabled. Dec 13 02:07:26.846138 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 13 02:07:26.846145 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Dec 13 02:07:26.846151 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Dec 13 02:07:26.846158 kernel: random: crng init done Dec 13 02:07:26.846164 kernel: Console: colour VGA+ 80x25 Dec 13 02:07:26.846171 kernel: printk: console [ttyS0] enabled Dec 13 02:07:26.846177 kernel: ACPI: Core revision 20210730 Dec 13 02:07:26.846184 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Dec 13 02:07:26.846190 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 02:07:26.846198 kernel: x2apic enabled Dec 13 02:07:26.846205 kernel: Switched APIC routing to physical x2apic. Dec 13 02:07:26.846211 kernel: kvm-guest: setup PV IPIs Dec 13 02:07:26.846218 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Dec 13 02:07:26.846224 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Dec 13 02:07:26.846231 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Dec 13 02:07:26.846237 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Dec 13 02:07:26.846244 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Dec 13 02:07:26.846251 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Dec 13 02:07:26.846263 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 02:07:26.846269 kernel: Spectre V2 : Mitigation: Retpolines Dec 13 02:07:26.846276 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 02:07:26.846284 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Dec 13 02:07:26.846291 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Dec 13 02:07:26.846298 kernel: RETBleed: Mitigation: untrained return thunk Dec 13 02:07:26.846305 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 13 02:07:26.846311 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Dec 13 02:07:26.846319 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 13 02:07:26.846327 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 13 02:07:26.846334 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 13 02:07:26.846341 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 13 02:07:26.846348 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Dec 13 02:07:26.846354 kernel: Freeing SMP alternatives memory: 32K Dec 13 02:07:26.846361 kernel: pid_max: default: 32768 minimum: 301 Dec 13 02:07:26.846368 kernel: LSM: Security Framework initializing Dec 13 02:07:26.846375 kernel: SELinux: Initializing. Dec 13 02:07:26.846383 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 02:07:26.846390 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 02:07:26.846397 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Dec 13 02:07:26.846404 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Dec 13 02:07:26.846411 kernel: ... version: 0 Dec 13 02:07:26.846418 kernel: ... bit width: 48 Dec 13 02:07:26.846424 kernel: ... generic registers: 6 Dec 13 02:07:26.846431 kernel: ... value mask: 0000ffffffffffff Dec 13 02:07:26.846438 kernel: ... max period: 00007fffffffffff Dec 13 02:07:26.846446 kernel: ... fixed-purpose events: 0 Dec 13 02:07:26.846453 kernel: ... event mask: 000000000000003f Dec 13 02:07:26.846460 kernel: signal: max sigframe size: 1776 Dec 13 02:07:26.846467 kernel: rcu: Hierarchical SRCU implementation. Dec 13 02:07:26.846483 kernel: smp: Bringing up secondary CPUs ... Dec 13 02:07:26.846490 kernel: x86: Booting SMP configuration: Dec 13 02:07:26.846497 kernel: .... 
node #0, CPUs: #1 Dec 13 02:07:26.846504 kernel: kvm-clock: cpu 1, msr 6a19b041, secondary cpu clock Dec 13 02:07:26.846511 kernel: kvm-guest: setup async PF for cpu 1 Dec 13 02:07:26.846519 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0 Dec 13 02:07:26.846525 kernel: #2 Dec 13 02:07:26.846532 kernel: kvm-clock: cpu 2, msr 6a19b081, secondary cpu clock Dec 13 02:07:26.846539 kernel: kvm-guest: setup async PF for cpu 2 Dec 13 02:07:26.846546 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0 Dec 13 02:07:26.846552 kernel: #3 Dec 13 02:07:26.846559 kernel: kvm-clock: cpu 3, msr 6a19b0c1, secondary cpu clock Dec 13 02:07:26.846566 kernel: kvm-guest: setup async PF for cpu 3 Dec 13 02:07:26.846572 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0 Dec 13 02:07:26.846581 kernel: smp: Brought up 1 node, 4 CPUs Dec 13 02:07:26.846587 kernel: smpboot: Max logical packages: 1 Dec 13 02:07:26.846594 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Dec 13 02:07:26.846601 kernel: devtmpfs: initialized Dec 13 02:07:26.846608 kernel: x86/mm: Memory block size: 128MB Dec 13 02:07:26.846615 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 02:07:26.846622 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Dec 13 02:07:26.846628 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 02:07:26.846635 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 02:07:26.846643 kernel: audit: initializing netlink subsys (disabled) Dec 13 02:07:26.846650 kernel: audit: type=2000 audit(1734055646.718:1): state=initialized audit_enabled=0 res=1 Dec 13 02:07:26.846657 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 02:07:26.846664 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 02:07:26.846672 kernel: cpuidle: using governor menu Dec 13 02:07:26.846680 kernel: ACPI: bus type PCI registered Dec 13 02:07:26.846686 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 02:07:26.846693 kernel: dca service started, version 1.12.1 Dec 13 02:07:26.846700 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Dec 13 02:07:26.846707 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820 Dec 13 02:07:26.846715 kernel: PCI: Using configuration type 1 for base access Dec 13 02:07:26.846722 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Dec 13 02:07:26.846729 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 02:07:26.846736 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 02:07:26.846742 kernel: ACPI: Added _OSI(Module Device) Dec 13 02:07:26.846749 kernel: ACPI: Added _OSI(Processor Device) Dec 13 02:07:26.846756 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 02:07:26.846763 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 02:07:26.846769 kernel: ACPI: Added _OSI(Linux-Dell-Video) Dec 13 02:07:26.846777 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Dec 13 02:07:26.846784 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Dec 13 02:07:26.846791 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 02:07:26.846797 kernel: ACPI: Interpreter enabled Dec 13 02:07:26.846804 kernel: ACPI: PM: (supports S0 S3 S5) Dec 13 02:07:26.846811 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 02:07:26.846818 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 02:07:26.846825 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Dec 13 02:07:26.846831 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 02:07:26.846957 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 02:07:26.847041 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Dec 13 02:07:26.847110 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Dec 13 02:07:26.847120 kernel: PCI host bridge to bus 0000:00 Dec 13 02:07:26.847191 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 13 02:07:26.847255 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 13 02:07:26.847320 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 13 02:07:26.847382 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Dec 13 02:07:26.847441 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Dec 13 02:07:26.847503 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Dec 13 02:07:26.847565 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 02:07:26.847646 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Dec 13 02:07:26.847723 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Dec 13 02:07:26.847824 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Dec 13 02:07:26.847910 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Dec 13 02:07:26.847993 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Dec 13 02:07:26.848113 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 13 02:07:26.848198 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Dec 13 02:07:26.848272 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Dec 13 02:07:26.848351 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Dec 13 02:07:26.848430 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Dec 13 02:07:26.848536 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Dec 13 02:07:26.848618 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Dec 13 02:07:26.848697 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Dec 13 02:07:26.848776 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Dec 13 02:07:26.848878 kernel: pci 
0000:00:04.0: [1af4:1000] type 00 class 0x020000 Dec 13 02:07:26.848969 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Dec 13 02:07:26.849055 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Dec 13 02:07:26.849129 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Dec 13 02:07:26.849201 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Dec 13 02:07:26.849283 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Dec 13 02:07:26.849354 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Dec 13 02:07:26.849433 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Dec 13 02:07:26.849510 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Dec 13 02:07:26.849582 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Dec 13 02:07:26.849663 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Dec 13 02:07:26.849735 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Dec 13 02:07:26.849745 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 13 02:07:26.849753 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 13 02:07:26.849760 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 13 02:07:26.849769 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 13 02:07:26.849776 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Dec 13 02:07:26.849782 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Dec 13 02:07:26.849790 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Dec 13 02:07:26.849796 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Dec 13 02:07:26.849803 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Dec 13 02:07:26.849810 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Dec 13 02:07:26.849817 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Dec 13 02:07:26.849823 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Dec 13 02:07:26.849832 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Dec 13 02:07:26.849839 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Dec 13 02:07:26.849846 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Dec 13 02:07:26.849852 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Dec 13 02:07:26.849859 kernel: iommu: Default domain type: Translated Dec 13 02:07:26.849866 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 02:07:26.849951 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Dec 13 02:07:26.850039 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 13 02:07:26.851088 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Dec 13 02:07:26.851133 kernel: vgaarb: loaded Dec 13 02:07:26.851145 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 13 02:07:26.851156 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 13 02:07:26.851167 kernel: PTP clock support registered Dec 13 02:07:26.851177 kernel: PCI: Using ACPI for IRQ routing Dec 13 02:07:26.851187 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 13 02:07:26.851198 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Dec 13 02:07:26.851209 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Dec 13 02:07:26.851219 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Dec 13 02:07:26.851232 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Dec 13 02:07:26.851242 kernel: clocksource: Switched to clocksource kvm-clock Dec 13 02:07:26.851252 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 02:07:26.851262 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 02:07:26.851272 kernel: pnp: PnP ACPI init Dec 13 02:07:26.851381 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Dec 13 02:07:26.851400 kernel: pnp: PnP ACPI: found 6 devices Dec 13 02:07:26.851410 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 02:07:26.851423 kernel: NET: Registered PF_INET protocol family Dec 13 02:07:26.851434 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 02:07:26.851445 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 13 02:07:26.851455 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 02:07:26.851466 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 02:07:26.851476 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Dec 13 02:07:26.851486 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 13 02:07:26.851497 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 02:07:26.851506 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 02:07:26.851518 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 02:07:26.851528 kernel: NET: Registered PF_XDP protocol family Dec 13 02:07:26.851627 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 13 02:07:26.851718 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 13 02:07:26.851821 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 13 02:07:26.851921 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Dec 13 02:07:26.852008 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Dec 13 02:07:26.852122 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Dec 13 02:07:26.852141 kernel: PCI: CLS 0 bytes, default 64 Dec 13 02:07:26.852151 kernel: Initialise system trusted keyrings Dec 13 02:07:26.852161 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 13 02:07:26.852171 kernel: Key type asymmetric registered Dec 13 02:07:26.852181 kernel: Asymmetric key parser 'x509' registered Dec 13 02:07:26.852191 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 13 02:07:26.852201 kernel: io scheduler mq-deadline registered Dec 13 02:07:26.852212 kernel: io scheduler kyber registered Dec 13 02:07:26.852222 kernel: io scheduler bfq registered Dec 13 02:07:26.852234 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 02:07:26.852245 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Dec 13 02:07:26.852256 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Dec 13 
02:07:26.852265 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Dec 13 02:07:26.852276 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 02:07:26.852286 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 02:07:26.852297 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 02:07:26.852307 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 02:07:26.852317 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 02:07:26.852414 kernel: rtc_cmos 00:04: RTC can wake from S4 Dec 13 02:07:26.852429 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 02:07:26.852507 kernel: rtc_cmos 00:04: registered as rtc0 Dec 13 02:07:26.852586 kernel: rtc_cmos 00:04: setting system clock to 2024-12-13T02:07:26 UTC (1734055646) Dec 13 02:07:26.852672 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Dec 13 02:07:26.852687 kernel: NET: Registered PF_INET6 protocol family Dec 13 02:07:26.852697 kernel: Segment Routing with IPv6 Dec 13 02:07:26.852707 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 02:07:26.852721 kernel: NET: Registered PF_PACKET protocol family Dec 13 02:07:26.852731 kernel: Key type dns_resolver registered Dec 13 02:07:26.852740 kernel: IPI shorthand broadcast: enabled Dec 13 02:07:26.852751 kernel: sched_clock: Marking stable (396220324, 101296859)->(544691561, -47174378) Dec 13 02:07:26.852761 kernel: registered taskstats version 1 Dec 13 02:07:26.852771 kernel: Loading compiled-in X.509 certificates Dec 13 02:07:26.852782 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: d9defb0205602bee9bb670636cbe5c74194fdb5e' Dec 13 02:07:26.852792 kernel: Key type .fscrypt registered Dec 13 02:07:26.852802 kernel: Key type fscrypt-provisioning registered Dec 13 02:07:26.852814 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 13 02:07:26.852824 kernel: ima: Allocated hash algorithm: sha1 Dec 13 02:07:26.852834 kernel: ima: No architecture policies found Dec 13 02:07:26.852843 kernel: clk: Disabling unused clocks Dec 13 02:07:26.852853 kernel: Freeing unused kernel image (initmem) memory: 47476K Dec 13 02:07:26.852863 kernel: Write protecting the kernel read-only data: 28672k Dec 13 02:07:26.852872 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Dec 13 02:07:26.852890 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K Dec 13 02:07:26.852899 kernel: Run /init as init process Dec 13 02:07:26.852911 kernel: with arguments: Dec 13 02:07:26.852920 kernel: /init Dec 13 02:07:26.852929 kernel: with environment: Dec 13 02:07:26.852938 kernel: HOME=/ Dec 13 02:07:26.852947 kernel: TERM=linux Dec 13 02:07:26.852957 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 02:07:26.852971 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 02:07:26.852984 systemd[1]: Detected virtualization kvm. Dec 13 02:07:26.852996 systemd[1]: Detected architecture x86-64. Dec 13 02:07:26.853006 systemd[1]: Running in initrd. Dec 13 02:07:26.853029 systemd[1]: No hostname configured, using default hostname. Dec 13 02:07:26.853040 systemd[1]: Hostname set to . 
Dec 13 02:07:26.853051 systemd[1]: Initializing machine ID from VM UUID. Dec 13 02:07:26.853062 systemd[1]: Queued start job for default target initrd.target. Dec 13 02:07:26.853073 systemd[1]: Started systemd-ask-password-console.path. Dec 13 02:07:26.853084 systemd[1]: Reached target cryptsetup.target. Dec 13 02:07:26.853097 systemd[1]: Reached target paths.target. Dec 13 02:07:26.853124 systemd[1]: Reached target slices.target. Dec 13 02:07:26.853137 systemd[1]: Reached target swap.target. Dec 13 02:07:26.853148 systemd[1]: Reached target timers.target. Dec 13 02:07:26.853160 systemd[1]: Listening on iscsid.socket. Dec 13 02:07:26.853173 systemd[1]: Listening on iscsiuio.socket. Dec 13 02:07:26.853184 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 02:07:26.853195 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 02:07:26.853206 systemd[1]: Listening on systemd-journald.socket. Dec 13 02:07:26.853216 systemd[1]: Listening on systemd-networkd.socket. Dec 13 02:07:26.853227 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 02:07:26.853237 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 02:07:26.853248 systemd[1]: Reached target sockets.target. Dec 13 02:07:26.853259 systemd[1]: Starting kmod-static-nodes.service... Dec 13 02:07:26.853279 systemd[1]: Finished network-cleanup.service. Dec 13 02:07:26.853291 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 02:07:26.853302 systemd[1]: Starting systemd-journald.service... Dec 13 02:07:26.853313 systemd[1]: Starting systemd-modules-load.service... Dec 13 02:07:26.853324 systemd[1]: Starting systemd-resolved.service... Dec 13 02:07:26.853335 systemd[1]: Starting systemd-vconsole-setup.service... Dec 13 02:07:26.853346 systemd[1]: Finished kmod-static-nodes.service. Dec 13 02:07:26.853357 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 02:07:26.853369 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 02:07:26.853386 systemd-journald[199]: Journal started Dec 13 02:07:26.853444 systemd-journald[199]: Runtime Journal (/run/log/journal/f6f828bb351046109c1b18185d028ec8) is 6.0M, max 48.5M, 42.5M free. Dec 13 02:07:26.843389 systemd-modules-load[200]: Inserted module 'overlay' Dec 13 02:07:26.877499 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 02:07:26.868538 systemd-resolved[201]: Positive Trust Anchors: Dec 13 02:07:26.868544 systemd-resolved[201]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 02:07:26.885379 systemd[1]: Started systemd-journald.service. Dec 13 02:07:26.885399 kernel: audit: type=1130 audit(1734055646.880:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:26.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:07:26.868571 systemd-resolved[201]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 02:07:26.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:26.870672 systemd-resolved[201]: Defaulting to hostname 'linux'. Dec 13 02:07:26.898424 kernel: audit: type=1130 audit(1734055646.885:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:26.898441 kernel: audit: type=1130 audit(1734055646.888:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:26.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:26.881068 systemd[1]: Started systemd-resolved.service. Dec 13 02:07:26.902896 kernel: audit: type=1130 audit(1734055646.898:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:26.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:26.886006 systemd[1]: Finished systemd-vconsole-setup.service. Dec 13 02:07:26.889143 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 02:07:26.898973 systemd[1]: Reached target nss-lookup.target. Dec 13 02:07:26.903864 systemd[1]: Starting dracut-cmdline-ask.service... Dec 13 02:07:26.908058 kernel: Bridge firewalling registered Dec 13 02:07:26.908083 systemd-modules-load[200]: Inserted module 'br_netfilter' Dec 13 02:07:26.913912 systemd[1]: Finished dracut-cmdline-ask.service. Dec 13 02:07:26.918630 kernel: audit: type=1130 audit(1734055646.914:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:26.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:26.914851 systemd[1]: Starting dracut-cmdline.service... 
Dec 13 02:07:26.922874 dracut-cmdline[218]: dracut-dracut-053 Dec 13 02:07:26.924439 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c Dec 13 02:07:26.937043 kernel: SCSI subsystem initialized Dec 13 02:07:26.947044 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 02:07:26.947071 kernel: device-mapper: uevent: version 1.0.3 Dec 13 02:07:26.949010 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Dec 13 02:07:26.951656 systemd-modules-load[200]: Inserted module 'dm_multipath' Dec 13 02:07:26.953312 systemd[1]: Finished systemd-modules-load.service. Dec 13 02:07:26.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:26.954769 systemd[1]: Starting systemd-sysctl.service... Dec 13 02:07:26.959265 kernel: audit: type=1130 audit(1734055646.954:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:26.964636 systemd[1]: Finished systemd-sysctl.service. Dec 13 02:07:26.969333 kernel: audit: type=1130 audit(1734055646.964:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:26.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:26.973034 kernel: Loading iSCSI transport class v2.0-870. Dec 13 02:07:26.989040 kernel: iscsi: registered transport (tcp) Dec 13 02:07:27.009282 kernel: iscsi: registered transport (qla4xxx) Dec 13 02:07:27.009309 kernel: QLogic iSCSI HBA Driver Dec 13 02:07:27.030666 systemd[1]: Finished dracut-cmdline.service. Dec 13 02:07:27.035705 kernel: audit: type=1130 audit(1734055647.031:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:27.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:27.032285 systemd[1]: Starting dracut-pre-udev.service... 
Dec 13 02:07:27.076040 kernel: raid6: avx2x4 gen() 29666 MB/s Dec 13 02:07:27.093034 kernel: raid6: avx2x4 xor() 7490 MB/s Dec 13 02:07:27.110032 kernel: raid6: avx2x2 gen() 32009 MB/s Dec 13 02:07:27.127032 kernel: raid6: avx2x2 xor() 18528 MB/s Dec 13 02:07:27.144035 kernel: raid6: avx2x1 gen() 26484 MB/s Dec 13 02:07:27.161041 kernel: raid6: avx2x1 xor() 15161 MB/s Dec 13 02:07:27.178038 kernel: raid6: sse2x4 gen() 14592 MB/s Dec 13 02:07:27.195042 kernel: raid6: sse2x4 xor() 7057 MB/s Dec 13 02:07:27.212037 kernel: raid6: sse2x2 gen() 16290 MB/s Dec 13 02:07:27.229036 kernel: raid6: sse2x2 xor() 9774 MB/s Dec 13 02:07:27.246036 kernel: raid6: sse2x1 gen() 12032 MB/s Dec 13 02:07:27.263443 kernel: raid6: sse2x1 xor() 7674 MB/s Dec 13 02:07:27.263466 kernel: raid6: using algorithm avx2x2 gen() 32009 MB/s Dec 13 02:07:27.263475 kernel: raid6: .... xor() 18528 MB/s, rmw enabled Dec 13 02:07:27.264180 kernel: raid6: using avx2x2 recovery algorithm Dec 13 02:07:27.276037 kernel: xor: automatically using best checksumming function avx Dec 13 02:07:27.365043 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Dec 13 02:07:27.372496 systemd[1]: Finished dracut-pre-udev.service. Dec 13 02:07:27.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:27.377048 kernel: audit: type=1130 audit(1734055647.373:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:27.376000 audit: BPF prog-id=7 op=LOAD Dec 13 02:07:27.376000 audit: BPF prog-id=8 op=LOAD Dec 13 02:07:27.377285 systemd[1]: Starting systemd-udevd.service... Dec 13 02:07:27.388424 systemd-udevd[401]: Using default interface naming scheme 'v252'. Dec 13 02:07:27.391788 systemd[1]: Started systemd-udevd.service. Dec 13 02:07:27.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:27.393621 systemd[1]: Starting dracut-pre-trigger.service... Dec 13 02:07:27.402517 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation Dec 13 02:07:27.424862 systemd[1]: Finished dracut-pre-trigger.service. Dec 13 02:07:27.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:27.427203 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 02:07:27.459633 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 02:07:27.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:27.484438 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Dec 13 02:07:27.512252 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 02:07:27.512267 kernel: libata version 3.00 loaded. Dec 13 02:07:27.512276 kernel: AVX2 version of gcm_enc/dec engaged. Dec 13 02:07:27.512285 kernel: AES CTR mode by8 optimization enabled Dec 13 02:07:27.512294 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. 
Dec 13 02:07:27.512303 kernel: GPT:9289727 != 19775487 Dec 13 02:07:27.512311 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 02:07:27.512324 kernel: GPT:9289727 != 19775487 Dec 13 02:07:27.512332 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 02:07:27.512341 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 02:07:27.519035 kernel: ahci 0000:00:1f.2: version 3.0 Dec 13 02:07:27.536649 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 13 02:07:27.536662 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Dec 13 02:07:27.536747 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 13 02:07:27.536821 kernel: scsi host0: ahci Dec 13 02:07:27.536930 kernel: scsi host1: ahci Dec 13 02:07:27.537035 kernel: scsi host2: ahci Dec 13 02:07:27.537121 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (449) Dec 13 02:07:27.537130 kernel: scsi host3: ahci Dec 13 02:07:27.537213 kernel: scsi host4: ahci Dec 13 02:07:27.537296 kernel: scsi host5: ahci Dec 13 02:07:27.537384 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Dec 13 02:07:27.537394 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Dec 13 02:07:27.537402 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Dec 13 02:07:27.537411 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Dec 13 02:07:27.537420 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Dec 13 02:07:27.537428 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Dec 13 02:07:27.530215 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Dec 13 02:07:27.568872 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Dec 13 02:07:27.571351 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Dec 13 02:07:27.575894 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Dec 13 02:07:27.580813 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 02:07:27.582702 systemd[1]: Starting disk-uuid.service... Dec 13 02:07:27.661324 disk-uuid[543]: Primary Header is updated. Dec 13 02:07:27.661324 disk-uuid[543]: Secondary Entries is updated. Dec 13 02:07:27.661324 disk-uuid[543]: Secondary Header is updated. 
Dec 13 02:07:27.665040 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 02:07:27.846584 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Dec 13 02:07:27.846637 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 13 02:07:27.846648 kernel: ata1: SATA link down (SStatus 0 SControl 300) Dec 13 02:07:27.846659 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 13 02:07:27.848042 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 13 02:07:27.849048 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Dec 13 02:07:27.850042 kernel: ata3.00: applying bridge limits Dec 13 02:07:27.851040 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 13 02:07:27.851050 kernel: ata3.00: configured for UDMA/100 Dec 13 02:07:27.852040 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Dec 13 02:07:27.881082 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Dec 13 02:07:27.898705 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 13 02:07:27.898718 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Dec 13 02:07:28.704047 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 02:07:28.704108 disk-uuid[544]: The operation has completed successfully. Dec 13 02:07:28.728362 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 02:07:28.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:28.728000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:28.728445 systemd[1]: Finished disk-uuid.service. Dec 13 02:07:28.732407 systemd[1]: Starting verity-setup.service... Dec 13 02:07:28.746037 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Dec 13 02:07:28.763869 systemd[1]: Found device dev-mapper-usr.device. Dec 13 02:07:28.766719 systemd[1]: Mounting sysusr-usr.mount... Dec 13 02:07:28.769932 systemd[1]: Finished verity-setup.service. Dec 13 02:07:28.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:28.825663 systemd[1]: Mounted sysusr-usr.mount. Dec 13 02:07:28.827036 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 02:07:28.826204 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 02:07:28.827081 systemd[1]: Starting ignition-setup.service... Dec 13 02:07:28.829672 systemd[1]: Starting parse-ip-for-networkd.service... Dec 13 02:07:28.840322 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 02:07:28.840343 kernel: BTRFS info (device vda6): using free space tree Dec 13 02:07:28.840352 kernel: BTRFS info (device vda6): has skinny extents Dec 13 02:07:28.847248 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 02:07:28.891100 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 02:07:28.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:07:28.895000 audit: BPF prog-id=9 op=LOAD Dec 13 02:07:28.896176 systemd[1]: Starting systemd-networkd.service... Dec 13 02:07:28.914866 systemd-networkd[720]: lo: Link UP Dec 13 02:07:28.914874 systemd-networkd[720]: lo: Gained carrier Dec 13 02:07:28.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:28.915262 systemd-networkd[720]: Enumeration completed Dec 13 02:07:28.915320 systemd[1]: Started systemd-networkd.service. Dec 13 02:07:28.915445 systemd-networkd[720]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 02:07:28.915685 systemd[1]: Reached target network.target. Dec 13 02:07:28.916336 systemd-networkd[720]: eth0: Link UP Dec 13 02:07:28.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:28.916339 systemd-networkd[720]: eth0: Gained carrier Dec 13 02:07:28.918474 systemd[1]: Starting iscsiuio.service... Dec 13 02:07:28.922407 systemd[1]: Started iscsiuio.service. Dec 13 02:07:28.923281 systemd[1]: Starting iscsid.service... Dec 13 02:07:28.927512 iscsid[725]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 02:07:28.927512 iscsid[725]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Dec 13 02:07:28.927512 iscsid[725]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Dec 13 02:07:28.927512 iscsid[725]: If using hardware iscsi like qla4xxx this message can be ignored. Dec 13 02:07:28.927512 iscsid[725]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 02:07:28.927512 iscsid[725]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 02:07:28.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:28.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:28.927484 systemd[1]: Started iscsid.service. Dec 13 02:07:28.928857 systemd[1]: Starting dracut-initqueue.service... Dec 13 02:07:28.960082 systemd[1]: Finished dracut-initqueue.service. Dec 13 02:07:28.965905 systemd[1]: Reached target remote-fs-pre.target. Dec 13 02:07:28.967587 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 02:07:28.968083 systemd-networkd[720]: eth0: DHCPv4 address 10.0.0.138/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 02:07:28.969638 systemd[1]: Reached target remote-fs.target. Dec 13 02:07:28.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:07:28.972345 systemd[1]: Starting dracut-pre-mount.service... Dec 13 02:07:28.979259 systemd[1]: Finished dracut-pre-mount.service. Dec 13 02:07:29.092757 systemd[1]: Finished ignition-setup.service. Dec 13 02:07:29.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:29.094296 systemd[1]: Starting ignition-fetch-offline.service... Dec 13 02:07:29.129726 ignition[740]: Ignition 2.14.0 Dec 13 02:07:29.129734 ignition[740]: Stage: fetch-offline Dec 13 02:07:29.129778 ignition[740]: no configs at "/usr/lib/ignition/base.d" Dec 13 02:07:29.129786 ignition[740]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 02:07:29.129900 ignition[740]: parsed url from cmdline: "" Dec 13 02:07:29.129903 ignition[740]: no config URL provided Dec 13 02:07:29.129908 ignition[740]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 02:07:29.129915 ignition[740]: no config at "/usr/lib/ignition/user.ign" Dec 13 02:07:29.129931 ignition[740]: op(1): [started] loading QEMU firmware config module Dec 13 02:07:29.129935 ignition[740]: op(1): executing: "modprobe" "qemu_fw_cfg" Dec 13 02:07:29.134263 ignition[740]: op(1): [finished] loading QEMU firmware config module Dec 13 02:07:29.136388 ignition[740]: parsing config with SHA512: 947e89b0d984caba77e57f03350e4e49dfba9ab19540eb1e6b6a45d479c2332b7d3bc9f6051a27a675eb57d5650285435c6f135d444fb2d7e19c424d9cd64b07 Dec 13 02:07:29.141716 unknown[740]: fetched base config from "system" Dec 13 02:07:29.141727 unknown[740]: fetched user config from "qemu" Dec 13 02:07:29.142057 ignition[740]: fetch-offline: fetch-offline passed Dec 13 02:07:29.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:29.142976 systemd[1]: Finished ignition-fetch-offline.service. Dec 13 02:07:29.142121 ignition[740]: Ignition finished successfully Dec 13 02:07:29.144490 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 13 02:07:29.145202 systemd[1]: Starting ignition-kargs.service... Dec 13 02:07:29.153457 ignition[748]: Ignition 2.14.0 Dec 13 02:07:29.153465 ignition[748]: Stage: kargs Dec 13 02:07:29.153546 ignition[748]: no configs at "/usr/lib/ignition/base.d" Dec 13 02:07:29.153556 ignition[748]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 02:07:29.155813 systemd[1]: Finished ignition-kargs.service. Dec 13 02:07:29.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:29.154259 ignition[748]: kargs: kargs passed Dec 13 02:07:29.158263 systemd[1]: Starting ignition-disks.service... Dec 13 02:07:29.154305 ignition[748]: Ignition finished successfully Dec 13 02:07:29.164388 ignition[754]: Ignition 2.14.0 Dec 13 02:07:29.164398 ignition[754]: Stage: disks Dec 13 02:07:29.164479 ignition[754]: no configs at "/usr/lib/ignition/base.d" Dec 13 02:07:29.165929 systemd[1]: Finished ignition-disks.service. 
Dec 13 02:07:29.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:29.164488 ignition[754]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 02:07:29.167558 systemd[1]: Reached target initrd-root-device.target. Dec 13 02:07:29.165118 ignition[754]: disks: disks passed Dec 13 02:07:29.169075 systemd[1]: Reached target local-fs-pre.target. Dec 13 02:07:29.165146 ignition[754]: Ignition finished successfully Dec 13 02:07:29.169480 systemd[1]: Reached target local-fs.target. Dec 13 02:07:29.169648 systemd[1]: Reached target sysinit.target. Dec 13 02:07:29.169839 systemd[1]: Reached target basic.target. Dec 13 02:07:29.170808 systemd[1]: Starting systemd-fsck-root.service... Dec 13 02:07:29.182509 systemd-fsck[762]: ROOT: clean, 621/553520 files, 56021/553472 blocks Dec 13 02:07:29.187832 systemd[1]: Finished systemd-fsck-root.service. Dec 13 02:07:29.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:29.188875 systemd[1]: Mounting sysroot.mount... Dec 13 02:07:29.195825 systemd[1]: Mounted sysroot.mount. Dec 13 02:07:29.197216 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 02:07:29.197231 systemd[1]: Reached target initrd-root-fs.target. Dec 13 02:07:29.199662 systemd[1]: Mounting sysroot-usr.mount... Dec 13 02:07:29.201297 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Dec 13 02:07:29.201333 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 02:07:29.201352 systemd[1]: Reached target ignition-diskful.target. Dec 13 02:07:29.206639 systemd[1]: Mounted sysroot-usr.mount. Dec 13 02:07:29.208635 systemd[1]: Starting initrd-setup-root.service... Dec 13 02:07:29.212643 initrd-setup-root[772]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 02:07:29.215890 initrd-setup-root[780]: cut: /sysroot/etc/group: No such file or directory Dec 13 02:07:29.219613 initrd-setup-root[788]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 02:07:29.222562 initrd-setup-root[796]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 02:07:29.247103 systemd[1]: Finished initrd-setup-root.service. Dec 13 02:07:29.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:29.248041 systemd[1]: Starting ignition-mount.service... Dec 13 02:07:29.249412 systemd[1]: Starting sysroot-boot.service... Dec 13 02:07:29.253083 bash[813]: umount: /sysroot/usr/share/oem: not mounted. 
Dec 13 02:07:29.260342 ignition[814]: INFO : Ignition 2.14.0 Dec 13 02:07:29.260342 ignition[814]: INFO : Stage: mount Dec 13 02:07:29.262923 ignition[814]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 02:07:29.262923 ignition[814]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 02:07:29.262923 ignition[814]: INFO : mount: mount passed Dec 13 02:07:29.262923 ignition[814]: INFO : Ignition finished successfully Dec 13 02:07:29.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:29.261763 systemd[1]: Finished ignition-mount.service. Dec 13 02:07:29.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:29.267422 systemd[1]: Finished sysroot-boot.service. Dec 13 02:07:29.776050 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 02:07:29.782040 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (825) Dec 13 02:07:29.784328 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 02:07:29.784340 kernel: BTRFS info (device vda6): using free space tree Dec 13 02:07:29.784349 kernel: BTRFS info (device vda6): has skinny extents Dec 13 02:07:29.788273 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 02:07:29.790660 systemd[1]: Starting ignition-files.service... Dec 13 02:07:29.805173 ignition[845]: INFO : Ignition 2.14.0 Dec 13 02:07:29.805173 ignition[845]: INFO : Stage: files Dec 13 02:07:29.806998 ignition[845]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 02:07:29.806998 ignition[845]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 02:07:29.806998 ignition[845]: DEBUG : files: compiled without relabeling support, skipping Dec 13 02:07:29.810689 ignition[845]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 02:07:29.810689 ignition[845]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 02:07:29.810689 ignition[845]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 02:07:29.810689 ignition[845]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 02:07:29.810689 ignition[845]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 02:07:29.810689 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Dec 13 02:07:29.810689 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 02:07:29.810689 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 02:07:29.810689 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 02:07:29.810689 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 02:07:29.809659 unknown[845]: wrote ssh authorized keys file for user: core Dec 13 02:07:29.828362 ignition[845]: INFO : files: 
createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 02:07:29.828362 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 02:07:29.828362 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Dec 13 02:07:30.154288 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Dec 13 02:07:30.462194 systemd-networkd[720]: eth0: Gained IPv6LL Dec 13 02:07:30.505051 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 02:07:30.505051 ignition[845]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" Dec 13 02:07:30.509033 ignition[845]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 02:07:30.509033 ignition[845]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 02:07:30.509033 ignition[845]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" Dec 13 02:07:30.509033 ignition[845]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service" Dec 13 02:07:30.509033 ignition[845]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 02:07:30.529105 ignition[845]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 02:07:30.531589 ignition[845]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" Dec 13 02:07:30.531589 ignition[845]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 02:07:30.531589 ignition[845]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 02:07:30.531589 ignition[845]: INFO : files: files passed Dec 13 02:07:30.531589 ignition[845]: INFO : Ignition finished successfully Dec 13 02:07:30.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:30.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:30.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:30.540000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:30.530534 systemd[1]: Finished ignition-files.service. Dec 13 02:07:30.532390 systemd[1]: Starting initrd-setup-root-after-ignition.service... 
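[editor's note] The files stage above writes /home/core/install.sh and /etc/flatcar/update.conf, downloads the kubernetes sysext image from the sysext-bakery release URL, links it under /etc/extensions, writes coreos-metadata.service and presets it to disabled. Below is a sketch of the kind of Ignition (spec 3.x) config that would request roughly these operations; the real config is known here only by its digest, and every value (file contents, modes, unit body) is an illustrative placeholder:

# Sketch of an Ignition spec-3 config matching the logged file operations.
# All contents/modes below are placeholders, not recovered from the log.
import json

config = {
    "ignition": {"version": "3.3.0"},
    "storage": {
        "files": [
            {"path": "/home/core/install.sh", "mode": 0o755,
             "contents": {"source": "data:,%23%21%2Fbin%2Fbash%0A"}},
            {"path": "/etc/flatcar/update.conf", "mode": 0o644,
             "contents": {"source": "data:,"}},
            {"path": "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw",
             "contents": {"source": "https://github.com/flatcar/sysext-bakery/"
                                    "releases/download/latest/kubernetes-v1.30.1-x86-64.raw"}},
        ],
        "links": [
            {"path": "/etc/extensions/kubernetes.raw", "hard": False,
             "target": "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"},
        ],
    },
    "systemd": {
        "units": [
            # written to /etc/systemd/system and preset to disabled, as in the log
            {"name": "coreos-metadata.service", "enabled": False,
             "contents": "# unit body elided\n"},
        ],
    },
}

print(json.dumps(config, indent=2))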
Dec 13 02:07:30.534061 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Dec 13 02:07:30.547198 initrd-setup-root-after-ignition[869]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Dec 13 02:07:30.534729 systemd[1]: Starting ignition-quench.service... Dec 13 02:07:30.549564 initrd-setup-root-after-ignition[872]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 02:07:30.537539 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 02:07:30.537607 systemd[1]: Finished ignition-quench.service. Dec 13 02:07:30.539222 systemd[1]: Finished initrd-setup-root-after-ignition.service. Dec 13 02:07:30.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:30.554000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:30.540808 systemd[1]: Reached target ignition-complete.target. Dec 13 02:07:30.543033 systemd[1]: Starting initrd-parse-etc.service... Dec 13 02:07:30.552791 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 02:07:30.552857 systemd[1]: Finished initrd-parse-etc.service. Dec 13 02:07:30.554102 systemd[1]: Reached target initrd-fs.target. Dec 13 02:07:30.555752 systemd[1]: Reached target initrd.target. Dec 13 02:07:30.556562 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 02:07:30.557106 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 02:07:30.565789 systemd[1]: Finished dracut-pre-pivot.service. Dec 13 02:07:30.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:30.567166 systemd[1]: Starting initrd-cleanup.service... Dec 13 02:07:30.589657 systemd[1]: Stopped target nss-lookup.target. Dec 13 02:07:30.590609 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 02:07:30.592270 systemd[1]: Stopped target timers.target. Dec 13 02:07:30.593875 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 02:07:30.595000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:30.593964 systemd[1]: Stopped dracut-pre-pivot.service. Dec 13 02:07:30.595521 systemd[1]: Stopped target initrd.target. Dec 13 02:07:30.597100 systemd[1]: Stopped target basic.target. Dec 13 02:07:30.598646 systemd[1]: Stopped target ignition-complete.target. Dec 13 02:07:30.600221 systemd[1]: Stopped target ignition-diskful.target. Dec 13 02:07:30.601819 systemd[1]: Stopped target initrd-root-device.target. Dec 13 02:07:30.603521 systemd[1]: Stopped target remote-fs.target. Dec 13 02:07:30.605139 systemd[1]: Stopped target remote-fs-pre.target. Dec 13 02:07:30.606810 systemd[1]: Stopped target sysinit.target. Dec 13 02:07:30.608325 systemd[1]: Stopped target local-fs.target. Dec 13 02:07:30.609937 systemd[1]: Stopped target local-fs-pre.target. Dec 13 02:07:30.611462 systemd[1]: Stopped target swap.target. 
Dec 13 02:07:30.614000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:30.612906 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 02:07:30.612991 systemd[1]: Stopped dracut-pre-mount.service. Dec 13 02:07:30.617000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:30.614551 systemd[1]: Stopped target cryptsetup.target. Dec 13 02:07:30.619000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:30.615984 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 02:07:30.616076 systemd[1]: Stopped dracut-initqueue.service. Dec 13 02:07:30.617807 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 02:07:30.617889 systemd[1]: Stopped ignition-fetch-offline.service. Dec 13 02:07:30.619424 systemd[1]: Stopped target paths.target. Dec 13 02:07:30.620875 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 02:07:30.624069 systemd[1]: Stopped systemd-ask-password-console.path. Dec 13 02:07:30.625505 systemd[1]: Stopped target slices.target. Dec 13 02:07:30.626954 systemd[1]: Stopped target sockets.target. Dec 13 02:07:30.631000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:30.628782 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 02:07:30.633000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:30.628846 systemd[1]: Closed iscsid.socket. Dec 13 02:07:30.630310 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 02:07:30.630395 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Dec 13 02:07:30.638000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:30.639921 ignition[885]: INFO : Ignition 2.14.0 Dec 13 02:07:30.639921 ignition[885]: INFO : Stage: umount Dec 13 02:07:30.640000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:30.632008 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 02:07:30.642000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:07:30.645165 ignition[885]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 02:07:30.645165 ignition[885]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 02:07:30.645165 ignition[885]: INFO : umount: umount passed Dec 13 02:07:30.645165 ignition[885]: INFO : Ignition finished successfully Dec 13 02:07:30.645000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:30.647000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:30.653000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:30.632100 systemd[1]: Stopped ignition-files.service. Dec 13 02:07:30.655000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:30.634138 systemd[1]: Stopping ignition-mount.service... Dec 13 02:07:30.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:30.635143 systemd[1]: Stopping iscsiuio.service... Dec 13 02:07:30.636253 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 02:07:30.636375 systemd[1]: Stopped kmod-static-nodes.service. Dec 13 02:07:30.638941 systemd[1]: Stopping sysroot-boot.service... Dec 13 02:07:30.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:30.662000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:30.639870 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 02:07:30.639987 systemd[1]: Stopped systemd-udev-trigger.service. Dec 13 02:07:30.640989 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 02:07:30.641109 systemd[1]: Stopped dracut-pre-trigger.service. Dec 13 02:07:30.644282 systemd[1]: iscsiuio.service: Deactivated successfully. Dec 13 02:07:30.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:30.644396 systemd[1]: Stopped iscsiuio.service. Dec 13 02:07:30.645520 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 02:07:30.645595 systemd[1]: Stopped ignition-mount.service. Dec 13 02:07:30.673000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:30.647441 systemd[1]: Stopped target network.target. 
Dec 13 02:07:30.675000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:30.650626 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 02:07:30.675000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:30.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:30.650655 systemd[1]: Closed iscsiuio.socket. Dec 13 02:07:30.652091 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 02:07:30.652125 systemd[1]: Stopped ignition-disks.service. Dec 13 02:07:30.654027 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 02:07:30.654068 systemd[1]: Stopped ignition-kargs.service. Dec 13 02:07:30.655624 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 02:07:30.684000 audit: BPF prog-id=6 op=UNLOAD Dec 13 02:07:30.684000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:30.655653 systemd[1]: Stopped ignition-setup.service. Dec 13 02:07:30.656566 systemd[1]: Stopping systemd-networkd.service... Dec 13 02:07:30.658138 systemd[1]: Stopping systemd-resolved.service... Dec 13 02:07:30.660328 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 02:07:30.687000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:30.660734 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 02:07:30.660808 systemd[1]: Finished initrd-cleanup.service. Dec 13 02:07:30.665046 systemd-networkd[720]: eth0: DHCPv6 lease lost Dec 13 02:07:30.666277 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 02:07:30.691000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:30.692000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:30.692000 audit: BPF prog-id=9 op=UNLOAD Dec 13 02:07:30.666380 systemd[1]: Stopped systemd-networkd.service. Dec 13 02:07:30.695000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:30.669217 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 02:07:30.669243 systemd[1]: Closed systemd-networkd.socket. Dec 13 02:07:30.699000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:30.671046 systemd[1]: Stopping network-cleanup.service... 
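[editor's note] The teardown above emits a SERVICE_STOP audit record for each initrd unit as it is taken down. A small sketch that pairs SERVICE_START and SERVICE_STOP records by unit name, to check which units started earlier have not yet been stopped; "boot.log" is a hypothetical capture of this console log:

# Pair SERVICE_START / SERVICE_STOP audit records per unit.
import re
from collections import Counter

def unstopped_units(journal_text: str) -> list:
    started, stopped = Counter(), Counter()
    for kind, unit in re.findall(r"(SERVICE_START|SERVICE_STOP).*?unit=(\S+)", journal_text):
        (started if kind == "SERVICE_START" else stopped)[unit] += 1
    return sorted(u for u in started if started[u] > stopped[u])

# print(unstopped_units(open("boot.log").read()))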
Dec 13 02:07:30.672228 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 02:07:30.672269 systemd[1]: Stopped parse-ip-for-networkd.service. Dec 13 02:07:30.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:30.702000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:30.674043 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 02:07:30.674074 systemd[1]: Stopped systemd-sysctl.service. Dec 13 02:07:30.675662 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 02:07:30.675691 systemd[1]: Stopped systemd-modules-load.service. Dec 13 02:07:30.676399 systemd[1]: Stopping systemd-udevd.service... Dec 13 02:07:30.677286 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 02:07:30.677625 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 02:07:30.677698 systemd[1]: Stopped systemd-resolved.service. Dec 13 02:07:30.682687 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 02:07:30.682749 systemd[1]: Stopped network-cleanup.service. Dec 13 02:07:30.687062 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 02:07:30.687168 systemd[1]: Stopped systemd-udevd.service. Dec 13 02:07:30.689133 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 02:07:30.689166 systemd[1]: Closed systemd-udevd-control.socket. Dec 13 02:07:30.689910 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 02:07:30.689940 systemd[1]: Closed systemd-udevd-kernel.socket. Dec 13 02:07:30.691380 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 02:07:30.691411 systemd[1]: Stopped dracut-pre-udev.service. Dec 13 02:07:30.691728 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 02:07:30.691752 systemd[1]: Stopped dracut-cmdline.service. Dec 13 02:07:30.692234 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 02:07:30.692258 systemd[1]: Stopped dracut-cmdline-ask.service. Dec 13 02:07:30.696365 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Dec 13 02:07:30.697822 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 02:07:30.697861 systemd[1]: Stopped systemd-vconsole-setup.service. Dec 13 02:07:30.701524 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 02:07:30.701589 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Dec 13 02:07:30.842876 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 02:07:30.843000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:30.842982 systemd[1]: Stopped sysroot-boot.service. Dec 13 02:07:30.844062 systemd[1]: Reached target initrd-switch-root.target. Dec 13 02:07:30.847000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:30.845848 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
Dec 13 02:07:30.845937 systemd[1]: Stopped initrd-setup-root.service. Dec 13 02:07:30.848358 systemd[1]: Starting initrd-switch-root.service... Dec 13 02:07:30.862458 systemd[1]: Switching root. Dec 13 02:07:30.882893 iscsid[725]: iscsid shutting down. Dec 13 02:07:30.883643 systemd-journald[199]: Received SIGTERM from PID 1 (systemd). Dec 13 02:07:30.883685 systemd-journald[199]: Journal stopped Dec 13 02:07:33.317575 kernel: SELinux: Class mctp_socket not defined in policy. Dec 13 02:07:33.317616 kernel: SELinux: Class anon_inode not defined in policy. Dec 13 02:07:33.317627 kernel: SELinux: the above unknown classes and permissions will be allowed Dec 13 02:07:33.317636 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 02:07:33.317646 kernel: SELinux: policy capability open_perms=1 Dec 13 02:07:33.317656 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 02:07:33.317666 kernel: SELinux: policy capability always_check_network=0 Dec 13 02:07:33.317675 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 02:07:33.317684 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 02:07:33.317701 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 02:07:33.317712 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 02:07:33.317722 systemd[1]: Successfully loaded SELinux policy in 38.260ms. Dec 13 02:07:33.317740 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.293ms. Dec 13 02:07:33.317751 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 02:07:33.317763 systemd[1]: Detected virtualization kvm. Dec 13 02:07:33.317773 systemd[1]: Detected architecture x86-64. Dec 13 02:07:33.317782 systemd[1]: Detected first boot. Dec 13 02:07:33.317792 systemd[1]: Initializing machine ID from VM UUID. Dec 13 02:07:33.317802 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Dec 13 02:07:33.317812 systemd[1]: Populated /etc with preset unit settings. Dec 13 02:07:33.317826 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 02:07:33.317839 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 02:07:33.317850 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Dec 13 02:07:33.317860 kernel: kauditd_printk_skb: 79 callbacks suppressed Dec 13 02:07:33.317870 kernel: audit: type=1334 audit(1734055653.178:83): prog-id=12 op=LOAD Dec 13 02:07:33.317881 kernel: audit: type=1334 audit(1734055653.178:84): prog-id=3 op=UNLOAD Dec 13 02:07:33.317890 kernel: audit: type=1334 audit(1734055653.180:85): prog-id=13 op=LOAD Dec 13 02:07:33.317899 kernel: audit: type=1334 audit(1734055653.182:86): prog-id=14 op=LOAD Dec 13 02:07:33.317908 kernel: audit: type=1334 audit(1734055653.182:87): prog-id=4 op=UNLOAD Dec 13 02:07:33.317918 kernel: audit: type=1334 audit(1734055653.182:88): prog-id=5 op=UNLOAD Dec 13 02:07:33.317927 kernel: audit: type=1334 audit(1734055653.185:89): prog-id=15 op=LOAD Dec 13 02:07:33.317937 kernel: audit: type=1334 audit(1734055653.185:90): prog-id=12 op=UNLOAD Dec 13 02:07:33.317946 kernel: audit: type=1334 audit(1734055653.187:91): prog-id=16 op=LOAD Dec 13 02:07:33.317956 kernel: audit: type=1334 audit(1734055653.188:92): prog-id=17 op=LOAD Dec 13 02:07:33.317965 systemd[1]: iscsid.service: Deactivated successfully. Dec 13 02:07:33.317975 systemd[1]: Stopped iscsid.service. Dec 13 02:07:33.317985 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 02:07:33.317996 systemd[1]: Stopped initrd-switch-root.service. Dec 13 02:07:33.318006 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 02:07:33.318028 systemd[1]: Created slice system-addon\x2dconfig.slice. Dec 13 02:07:33.318039 systemd[1]: Created slice system-addon\x2drun.slice. Dec 13 02:07:33.318049 systemd[1]: Created slice system-getty.slice. Dec 13 02:07:33.318059 systemd[1]: Created slice system-modprobe.slice. Dec 13 02:07:33.318069 systemd[1]: Created slice system-serial\x2dgetty.slice. Dec 13 02:07:33.318081 systemd[1]: Created slice system-system\x2dcloudinit.slice. Dec 13 02:07:33.318091 systemd[1]: Created slice system-systemd\x2dfsck.slice. Dec 13 02:07:33.318101 systemd[1]: Created slice user.slice. Dec 13 02:07:33.318110 systemd[1]: Started systemd-ask-password-console.path. Dec 13 02:07:33.318120 systemd[1]: Started systemd-ask-password-wall.path. Dec 13 02:07:33.318130 systemd[1]: Set up automount boot.automount. Dec 13 02:07:33.318140 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Dec 13 02:07:33.318150 systemd[1]: Stopped target initrd-switch-root.target. Dec 13 02:07:33.318162 systemd[1]: Stopped target initrd-fs.target. Dec 13 02:07:33.318174 systemd[1]: Stopped target initrd-root-fs.target. Dec 13 02:07:33.318185 systemd[1]: Reached target integritysetup.target. Dec 13 02:07:33.318195 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 02:07:33.318205 systemd[1]: Reached target remote-fs.target. Dec 13 02:07:33.318215 systemd[1]: Reached target slices.target. Dec 13 02:07:33.318225 systemd[1]: Reached target swap.target. Dec 13 02:07:33.318235 systemd[1]: Reached target torcx.target. Dec 13 02:07:33.318246 systemd[1]: Reached target veritysetup.target. Dec 13 02:07:33.318256 systemd[1]: Listening on systemd-coredump.socket. Dec 13 02:07:33.318266 systemd[1]: Listening on systemd-initctl.socket. Dec 13 02:07:33.318278 systemd[1]: Listening on systemd-networkd.socket. Dec 13 02:07:33.318288 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 02:07:33.318298 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 02:07:33.318308 systemd[1]: Listening on systemd-userdbd.socket. Dec 13 02:07:33.318318 systemd[1]: Mounting dev-hugepages.mount... 
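[editor's note] The audit record headers above carry an epoch timestamp and serial number, e.g. "audit(1734055653.178:83)". Converting the epoch value reproduces the wall-clock time journald printed for the matching BPF LOAD event (02:07:33.178 on Dec 13 2024, UTC):

# Convert the numeric timestamp in an audit record header into wall-clock time.
import datetime
import re

def audit_time(header: str) -> datetime.datetime:
    secs = float(re.search(r"audit\((\d+\.\d+):\d+\)", header).group(1))
    return datetime.datetime.fromtimestamp(secs, tz=datetime.timezone.utc)

print(audit_time("audit(1734055653.178:83)"))
# -> 2024-12-13 02:07:33.178000+00:00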
Dec 13 02:07:33.318328 systemd[1]: Mounting dev-mqueue.mount... Dec 13 02:07:33.318339 systemd[1]: Mounting media.mount... Dec 13 02:07:33.318349 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:07:33.318359 systemd[1]: Mounting sys-kernel-debug.mount... Dec 13 02:07:33.318369 systemd[1]: Mounting sys-kernel-tracing.mount... Dec 13 02:07:33.318380 systemd[1]: Mounting tmp.mount... Dec 13 02:07:33.318390 systemd[1]: Starting flatcar-tmpfiles.service... Dec 13 02:07:33.318400 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 02:07:33.318410 systemd[1]: Starting kmod-static-nodes.service... Dec 13 02:07:33.318420 systemd[1]: Starting modprobe@configfs.service... Dec 13 02:07:33.318431 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 02:07:33.318441 systemd[1]: Starting modprobe@drm.service... Dec 13 02:07:33.318451 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 02:07:33.318461 systemd[1]: Starting modprobe@fuse.service... Dec 13 02:07:33.318470 systemd[1]: Starting modprobe@loop.service... Dec 13 02:07:33.318481 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 02:07:33.318491 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 02:07:33.318501 systemd[1]: Stopped systemd-fsck-root.service. Dec 13 02:07:33.318511 kernel: fuse: init (API version 7.34) Dec 13 02:07:33.318522 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 02:07:33.318533 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 02:07:33.318543 systemd[1]: Stopped systemd-journald.service. Dec 13 02:07:33.318553 kernel: loop: module loaded Dec 13 02:07:33.318562 systemd[1]: Starting systemd-journald.service... Dec 13 02:07:33.318572 systemd[1]: Starting systemd-modules-load.service... Dec 13 02:07:33.318582 systemd[1]: Starting systemd-network-generator.service... Dec 13 02:07:33.318593 systemd[1]: Starting systemd-remount-fs.service... Dec 13 02:07:33.318603 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 02:07:33.318614 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 02:07:33.318626 systemd-journald[1003]: Journal started Dec 13 02:07:33.318661 systemd-journald[1003]: Runtime Journal (/run/log/journal/f6f828bb351046109c1b18185d028ec8) is 6.0M, max 48.5M, 42.5M free. 
Dec 13 02:07:30.941000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 02:07:31.131000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 02:07:31.131000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 02:07:31.131000 audit: BPF prog-id=10 op=LOAD Dec 13 02:07:31.131000 audit: BPF prog-id=10 op=UNLOAD Dec 13 02:07:31.131000 audit: BPF prog-id=11 op=LOAD Dec 13 02:07:31.131000 audit: BPF prog-id=11 op=UNLOAD Dec 13 02:07:31.162000 audit[918]: AVC avc: denied { associate } for pid=918 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Dec 13 02:07:31.162000 audit[918]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001558b2 a1=c0000d8de0 a2=c0000e10c0 a3=32 items=0 ppid=901 pid=918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:07:31.162000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 02:07:31.164000 audit[918]: AVC avc: denied { associate } for pid=918 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Dec 13 02:07:31.164000 audit[918]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c000155989 a2=1ed a3=0 items=2 ppid=901 pid=918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:07:31.164000 audit: CWD cwd="/" Dec 13 02:07:31.164000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:31.164000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:31.164000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 02:07:33.178000 audit: BPF prog-id=12 op=LOAD Dec 13 02:07:33.178000 audit: BPF prog-id=3 op=UNLOAD Dec 13 02:07:33.180000 audit: BPF prog-id=13 op=LOAD Dec 13 02:07:33.182000 audit: BPF prog-id=14 op=LOAD Dec 13 02:07:33.182000 audit: BPF prog-id=4 op=UNLOAD Dec 13 02:07:33.182000 audit: BPF prog-id=5 op=UNLOAD Dec 13 02:07:33.185000 audit: BPF prog-id=15 op=LOAD Dec 13 02:07:33.185000 audit: BPF prog-id=12 op=UNLOAD Dec 13 
02:07:33.187000 audit: BPF prog-id=16 op=LOAD Dec 13 02:07:33.188000 audit: BPF prog-id=17 op=LOAD Dec 13 02:07:33.188000 audit: BPF prog-id=13 op=UNLOAD Dec 13 02:07:33.188000 audit: BPF prog-id=14 op=UNLOAD Dec 13 02:07:33.189000 audit: BPF prog-id=18 op=LOAD Dec 13 02:07:33.189000 audit: BPF prog-id=15 op=UNLOAD Dec 13 02:07:33.189000 audit: BPF prog-id=19 op=LOAD Dec 13 02:07:33.189000 audit: BPF prog-id=20 op=LOAD Dec 13 02:07:33.189000 audit: BPF prog-id=16 op=UNLOAD Dec 13 02:07:33.189000 audit: BPF prog-id=17 op=UNLOAD Dec 13 02:07:33.190000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:33.192000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:33.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:33.195000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:33.206000 audit: BPF prog-id=18 op=UNLOAD Dec 13 02:07:33.294000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:33.298000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:33.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:33.300000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:33.301000 audit: BPF prog-id=21 op=LOAD Dec 13 02:07:33.301000 audit: BPF prog-id=22 op=LOAD Dec 13 02:07:33.301000 audit: BPF prog-id=23 op=LOAD Dec 13 02:07:33.301000 audit: BPF prog-id=19 op=UNLOAD Dec 13 02:07:33.301000 audit: BPF prog-id=20 op=UNLOAD Dec 13 02:07:33.316000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 02:07:33.316000 audit[1003]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffe947aeb40 a2=4000 a3=7ffe947aebdc items=0 ppid=1 pid=1003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:07:33.316000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 02:07:33.177193 systemd[1]: Queued start job for default target multi-user.target. 
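[editor's note] The burst of "BPF prog-id=N op=LOAD/UNLOAD" audit events above records systemd replacing its BPF programs across the switch-root. A sketch that tallies those events to list the program IDs still loaded at a given point; "boot.log" is a hypothetical capture of this console log:

# Track BPF prog-id LOAD/UNLOAD audit events and report what remains loaded.
import re

def loaded_bpf_progs(journal_text: str) -> set:
    loaded = set()
    for prog_id, op in re.findall(r"BPF prog-id=(\d+) op=(LOAD|UNLOAD)", journal_text):
        (loaded.add if op == "LOAD" else loaded.discard)(int(prog_id))
    return loaded

# print(sorted(loaded_bpf_progs(open("boot.log").read())))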
Dec 13 02:07:31.160623 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2024-12-13T02:07:31Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 02:07:33.177203 systemd[1]: Unnecessary job was removed for dev-vda6.device. Dec 13 02:07:31.160884 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2024-12-13T02:07:31Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 02:07:33.189810 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 02:07:31.160907 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2024-12-13T02:07:31Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 02:07:31.160940 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2024-12-13T02:07:31Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Dec 13 02:07:31.160954 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2024-12-13T02:07:31Z" level=debug msg="skipped missing lower profile" missing profile=oem Dec 13 02:07:31.160987 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2024-12-13T02:07:31Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Dec 13 02:07:31.161003 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2024-12-13T02:07:31Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Dec 13 02:07:33.320470 systemd[1]: Stopped verity-setup.service. 
Dec 13 02:07:31.161232 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2024-12-13T02:07:31Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Dec 13 02:07:31.161272 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2024-12-13T02:07:31Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 02:07:31.161288 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2024-12-13T02:07:31Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 02:07:31.161990 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2024-12-13T02:07:31Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Dec 13 02:07:31.162045 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2024-12-13T02:07:31Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Dec 13 02:07:31.162068 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2024-12-13T02:07:31Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6 Dec 13 02:07:31.162087 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2024-12-13T02:07:31Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Dec 13 02:07:31.162107 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2024-12-13T02:07:31Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6 Dec 13 02:07:31.162125 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2024-12-13T02:07:31Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Dec 13 02:07:32.930204 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2024-12-13T02:07:32Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 02:07:32.930521 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2024-12-13T02:07:32Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 02:07:32.930610 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2024-12-13T02:07:32Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 02:07:32.930789 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2024-12-13T02:07:32Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 02:07:32.930833 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2024-12-13T02:07:32Z" level=debug msg="profile 
applied" sealed profile=/run/torcx/profile.json upper profile= Dec 13 02:07:32.930883 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2024-12-13T02:07:32Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Dec 13 02:07:33.321000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:33.324049 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:07:33.327632 systemd[1]: Started systemd-journald.service. Dec 13 02:07:33.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:33.328383 systemd[1]: Mounted dev-hugepages.mount. Dec 13 02:07:33.329488 systemd[1]: Mounted dev-mqueue.mount. Dec 13 02:07:33.330553 systemd[1]: Mounted media.mount. Dec 13 02:07:33.331553 systemd[1]: Mounted sys-kernel-debug.mount. Dec 13 02:07:33.332683 systemd[1]: Mounted sys-kernel-tracing.mount. Dec 13 02:07:33.333861 systemd[1]: Mounted tmp.mount. Dec 13 02:07:33.335029 systemd[1]: Finished flatcar-tmpfiles.service. Dec 13 02:07:33.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:33.336449 systemd[1]: Finished kmod-static-nodes.service. Dec 13 02:07:33.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:33.337805 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 02:07:33.337960 systemd[1]: Finished modprobe@configfs.service. Dec 13 02:07:33.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:33.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:33.339315 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:07:33.339477 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 02:07:33.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:33.340000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:33.340908 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 02:07:33.341084 systemd[1]: Finished modprobe@drm.service. 
Dec 13 02:07:33.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:33.342000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:33.342365 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:07:33.342525 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 02:07:33.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:33.343000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:33.344062 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 02:07:33.344204 systemd[1]: Finished modprobe@fuse.service. Dec 13 02:07:33.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:33.345000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:33.345487 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:07:33.345637 systemd[1]: Finished modprobe@loop.service. Dec 13 02:07:33.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:33.346000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:33.346973 systemd[1]: Finished systemd-modules-load.service. Dec 13 02:07:33.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:33.348444 systemd[1]: Finished systemd-network-generator.service. Dec 13 02:07:33.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:33.349916 systemd[1]: Finished systemd-remount-fs.service. Dec 13 02:07:33.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:33.351510 systemd[1]: Reached target network-pre.target. Dec 13 02:07:33.353394 systemd[1]: Mounting sys-fs-fuse-connections.mount... Dec 13 02:07:33.355029 systemd[1]: Mounting sys-kernel-config.mount... 
Dec 13 02:07:33.355784 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 02:07:33.356981 systemd[1]: Starting systemd-hwdb-update.service... Dec 13 02:07:33.358665 systemd[1]: Starting systemd-journal-flush.service... Dec 13 02:07:33.359966 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:07:33.360641 systemd[1]: Starting systemd-random-seed.service... Dec 13 02:07:33.361672 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 02:07:33.362432 systemd-journald[1003]: Time spent on flushing to /var/log/journal/f6f828bb351046109c1b18185d028ec8 is 19.447ms for 1083 entries. Dec 13 02:07:33.362432 systemd-journald[1003]: System Journal (/var/log/journal/f6f828bb351046109c1b18185d028ec8) is 8.0M, max 195.6M, 187.6M free. Dec 13 02:07:33.399751 systemd-journald[1003]: Received client request to flush runtime journal. Dec 13 02:07:33.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:33.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:33.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:33.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:33.362363 systemd[1]: Starting systemd-sysctl.service... Dec 13 02:07:33.365770 systemd[1]: Starting systemd-sysusers.service... Dec 13 02:07:33.369215 systemd[1]: Mounted sys-fs-fuse-connections.mount. Dec 13 02:07:33.370747 systemd[1]: Mounted sys-kernel-config.mount. Dec 13 02:07:33.400524 udevadm[1022]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 02:07:33.372072 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 02:07:33.373310 systemd[1]: Finished systemd-random-seed.service. Dec 13 02:07:33.374522 systemd[1]: Reached target first-boot-complete.target. Dec 13 02:07:33.376864 systemd[1]: Starting systemd-udev-settle.service... Dec 13 02:07:33.384351 systemd[1]: Finished systemd-sysctl.service. Dec 13 02:07:33.385455 systemd[1]: Finished systemd-sysusers.service. Dec 13 02:07:33.400470 systemd[1]: Finished systemd-journal-flush.service. Dec 13 02:07:33.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:33.814093 systemd[1]: Finished systemd-hwdb-update.service. Dec 13 02:07:33.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:07:33.815000 audit: BPF prog-id=24 op=LOAD Dec 13 02:07:33.815000 audit: BPF prog-id=25 op=LOAD Dec 13 02:07:33.815000 audit: BPF prog-id=7 op=UNLOAD Dec 13 02:07:33.815000 audit: BPF prog-id=8 op=UNLOAD Dec 13 02:07:33.816172 systemd[1]: Starting systemd-udevd.service... Dec 13 02:07:33.830976 systemd-udevd[1024]: Using default interface naming scheme 'v252'. Dec 13 02:07:33.842646 systemd[1]: Started systemd-udevd.service. Dec 13 02:07:33.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:33.845000 audit: BPF prog-id=26 op=LOAD Dec 13 02:07:33.846081 systemd[1]: Starting systemd-networkd.service... Dec 13 02:07:33.852000 audit: BPF prog-id=27 op=LOAD Dec 13 02:07:33.853000 audit: BPF prog-id=28 op=LOAD Dec 13 02:07:33.853000 audit: BPF prog-id=29 op=LOAD Dec 13 02:07:33.853885 systemd[1]: Starting systemd-userdbd.service... Dec 13 02:07:33.858160 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Dec 13 02:07:33.888023 systemd[1]: Started systemd-userdbd.service. Dec 13 02:07:33.889000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:33.894038 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Dec 13 02:07:33.895905 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 02:07:33.902034 kernel: ACPI: button: Power Button [PWRF] Dec 13 02:07:33.907000 audit[1036]: AVC avc: denied { confidentiality } for pid=1036 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 02:07:33.942083 systemd-networkd[1039]: lo: Link UP Dec 13 02:07:33.942094 systemd-networkd[1039]: lo: Gained carrier Dec 13 02:07:33.907000 audit[1036]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=555a6b1a2d60 a1=337fc a2=7f5016ec0bc5 a3=5 items=110 ppid=1024 pid=1036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:07:33.907000 audit: CWD cwd="/" Dec 13 02:07:33.907000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=1 name=(null) inode=15383 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=2 name=(null) inode=15383 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=3 name=(null) inode=15384 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=4 name=(null) inode=15383 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 
02:07:33.907000 audit: PATH item=5 name=(null) inode=15385 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=6 name=(null) inode=15383 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=7 name=(null) inode=15386 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=8 name=(null) inode=15386 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=9 name=(null) inode=15387 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=10 name=(null) inode=15386 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=11 name=(null) inode=15388 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=12 name=(null) inode=15386 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=13 name=(null) inode=15389 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=14 name=(null) inode=15386 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=15 name=(null) inode=15390 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=16 name=(null) inode=15386 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=17 name=(null) inode=15391 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=18 name=(null) inode=15383 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=19 name=(null) inode=15392 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=20 name=(null) inode=15392 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=21 name=(null) inode=15393 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=22 name=(null) inode=15392 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=23 name=(null) inode=15394 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=24 name=(null) inode=15392 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=25 name=(null) inode=15395 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=26 name=(null) inode=15392 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=27 name=(null) inode=15396 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=28 name=(null) inode=15392 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=29 name=(null) inode=15397 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=30 name=(null) inode=15383 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=31 name=(null) inode=15398 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=32 name=(null) inode=15398 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=33 name=(null) inode=15399 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=34 name=(null) inode=15398 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=35 name=(null) inode=15400 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=36 name=(null) inode=15398 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=37 name=(null) inode=15401 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=38 name=(null) inode=15398 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=39 name=(null) inode=15402 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=40 name=(null) inode=15398 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=41 name=(null) inode=15403 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=42 name=(null) inode=15383 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=43 name=(null) inode=15404 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=44 name=(null) inode=15404 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=45 name=(null) inode=15405 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=46 name=(null) inode=15404 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=47 name=(null) inode=15406 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=48 name=(null) inode=15404 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=49 name=(null) inode=15407 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=50 name=(null) inode=15404 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=51 name=(null) inode=15408 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=52 name=(null) inode=15404 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=53 name=(null) inode=15409 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 
02:07:33.907000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=55 name=(null) inode=15410 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=56 name=(null) inode=15410 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=57 name=(null) inode=15411 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=58 name=(null) inode=15410 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=59 name=(null) inode=15412 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=60 name=(null) inode=15410 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=61 name=(null) inode=15413 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=62 name=(null) inode=15413 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=63 name=(null) inode=15414 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=64 name=(null) inode=15413 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=65 name=(null) inode=15415 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=66 name=(null) inode=15413 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=67 name=(null) inode=15416 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=68 name=(null) inode=15413 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=69 name=(null) inode=15417 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=70 name=(null) inode=15413 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=71 name=(null) inode=15418 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=72 name=(null) inode=15410 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=73 name=(null) inode=15419 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=74 name=(null) inode=15419 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=75 name=(null) inode=15420 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=76 name=(null) inode=15419 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=77 name=(null) inode=15421 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=78 name=(null) inode=15419 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=79 name=(null) inode=15422 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=80 name=(null) inode=15419 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=81 name=(null) inode=15423 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=82 name=(null) inode=15419 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=83 name=(null) inode=15424 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=84 name=(null) inode=15410 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=85 name=(null) inode=15425 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=86 name=(null) inode=15425 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=87 name=(null) inode=15426 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=88 name=(null) inode=15425 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=89 name=(null) inode=15427 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=90 name=(null) inode=15425 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=91 name=(null) inode=15428 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=92 name=(null) inode=15425 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=93 name=(null) inode=15429 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=94 name=(null) inode=15425 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=95 name=(null) inode=15430 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=96 name=(null) inode=15410 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=97 name=(null) inode=15431 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=98 name=(null) inode=15431 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=99 name=(null) inode=15432 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=100 name=(null) inode=15431 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=101 name=(null) inode=15433 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=102 name=(null) inode=15431 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 
13 02:07:33.907000 audit: PATH item=103 name=(null) inode=15434 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=104 name=(null) inode=15431 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=105 name=(null) inode=15435 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=106 name=(null) inode=15431 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=107 name=(null) inode=15436 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PATH item=109 name=(null) inode=14853 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:33.907000 audit: PROCTITLE proctitle="(udev-worker)" Dec 13 02:07:33.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:33.942465 systemd-networkd[1039]: Enumeration completed Dec 13 02:07:33.942575 systemd[1]: Started systemd-networkd.service. Dec 13 02:07:33.943244 systemd-networkd[1039]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 02:07:33.944361 systemd-networkd[1039]: eth0: Link UP Dec 13 02:07:33.944365 systemd-networkd[1039]: eth0: Gained carrier Dec 13 02:07:33.951059 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Dec 13 02:07:33.955039 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 02:07:33.956039 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 13 02:07:33.956211 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Dec 13 02:07:33.956315 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 13 02:07:33.967220 systemd-networkd[1039]: eth0: DHCPv4 address 10.0.0.138/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 02:07:33.991073 kernel: kvm: Nested Virtualization enabled Dec 13 02:07:33.991212 kernel: SVM: kvm: Nested Paging enabled Dec 13 02:07:33.991244 kernel: SVM: Virtual VMLOAD VMSAVE supported Dec 13 02:07:33.991272 kernel: SVM: Virtual GIF supported Dec 13 02:07:34.011047 kernel: EDAC MC: Ver: 3.0.0 Dec 13 02:07:34.035393 systemd[1]: Finished systemd-udev-settle.service. Dec 13 02:07:34.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:34.037491 systemd[1]: Starting lvm2-activation-early.service... 
Dec 13 02:07:34.044024 lvm[1059]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 02:07:34.068767 systemd[1]: Finished lvm2-activation-early.service. Dec 13 02:07:34.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:34.069890 systemd[1]: Reached target cryptsetup.target. Dec 13 02:07:34.071754 systemd[1]: Starting lvm2-activation.service... Dec 13 02:07:34.074769 lvm[1060]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 02:07:34.098595 systemd[1]: Finished lvm2-activation.service. Dec 13 02:07:34.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:34.099553 systemd[1]: Reached target local-fs-pre.target. Dec 13 02:07:34.100439 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 02:07:34.100462 systemd[1]: Reached target local-fs.target. Dec 13 02:07:34.101292 systemd[1]: Reached target machines.target. Dec 13 02:07:34.103038 systemd[1]: Starting ldconfig.service... Dec 13 02:07:34.104164 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 02:07:34.104201 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:07:34.105073 systemd[1]: Starting systemd-boot-update.service... Dec 13 02:07:34.106975 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 02:07:34.109286 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 02:07:34.111763 systemd[1]: Starting systemd-sysext.service... Dec 13 02:07:34.112987 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1062 (bootctl) Dec 13 02:07:34.114114 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 02:07:34.117240 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 02:07:34.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:34.126319 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 02:07:34.131254 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 02:07:34.131449 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 02:07:34.142038 kernel: loop0: detected capacity change from 0 to 210664 Dec 13 02:07:34.153507 systemd-fsck[1069]: fsck.fat 4.2 (2021-01-31) Dec 13 02:07:34.153507 systemd-fsck[1069]: /dev/vda1: 789 files, 119291/258078 clusters Dec 13 02:07:34.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:34.155277 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 02:07:34.158060 systemd[1]: Mounting boot.mount... 
Dec 13 02:07:34.423314 systemd[1]: Mounted boot.mount. Dec 13 02:07:34.430060 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 02:07:34.435863 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 02:07:34.436454 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 02:07:34.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:34.437720 systemd[1]: Finished systemd-boot-update.service. Dec 13 02:07:34.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:34.446029 kernel: loop1: detected capacity change from 0 to 210664 Dec 13 02:07:34.449218 (sd-sysext)[1075]: Using extensions 'kubernetes'. Dec 13 02:07:34.449545 (sd-sysext)[1075]: Merged extensions into '/usr'. Dec 13 02:07:34.464129 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:07:34.465418 systemd[1]: Mounting usr-share-oem.mount... Dec 13 02:07:34.466387 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 02:07:34.467407 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 02:07:34.469217 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 02:07:34.470989 systemd[1]: Starting modprobe@loop.service... Dec 13 02:07:34.471759 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 02:07:34.471858 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:07:34.471949 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:07:34.474250 systemd[1]: Mounted usr-share-oem.mount. Dec 13 02:07:34.475293 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:07:34.475436 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 02:07:34.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:34.476000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:34.476709 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:07:34.476805 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 02:07:34.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:34.477000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:34.478137 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Dec 13 02:07:34.478231 systemd[1]: Finished modprobe@loop.service. Dec 13 02:07:34.479000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:34.479000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:34.479725 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:07:34.479821 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 02:07:34.481116 systemd[1]: Finished systemd-sysext.service. Dec 13 02:07:34.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:34.483072 systemd[1]: Starting ensure-sysext.service... Dec 13 02:07:34.484545 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 02:07:34.489862 systemd[1]: Reloading. Dec 13 02:07:34.490080 ldconfig[1061]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 02:07:34.496479 systemd-tmpfiles[1082]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 02:07:34.498191 systemd-tmpfiles[1082]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 02:07:34.500488 systemd-tmpfiles[1082]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 02:07:34.538162 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2024-12-13T02:07:34Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 02:07:34.538190 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2024-12-13T02:07:34Z" level=info msg="torcx already run" Dec 13 02:07:34.596541 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 02:07:34.596558 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 02:07:34.612992 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Dec 13 02:07:34.664000 audit: BPF prog-id=30 op=LOAD Dec 13 02:07:34.664000 audit: BPF prog-id=27 op=UNLOAD Dec 13 02:07:34.664000 audit: BPF prog-id=31 op=LOAD Dec 13 02:07:34.664000 audit: BPF prog-id=32 op=LOAD Dec 13 02:07:34.664000 audit: BPF prog-id=28 op=UNLOAD Dec 13 02:07:34.664000 audit: BPF prog-id=29 op=UNLOAD Dec 13 02:07:34.665000 audit: BPF prog-id=33 op=LOAD Dec 13 02:07:34.665000 audit: BPF prog-id=26 op=UNLOAD Dec 13 02:07:34.667000 audit: BPF prog-id=34 op=LOAD Dec 13 02:07:34.667000 audit: BPF prog-id=21 op=UNLOAD Dec 13 02:07:34.668000 audit: BPF prog-id=35 op=LOAD Dec 13 02:07:34.668000 audit: BPF prog-id=36 op=LOAD Dec 13 02:07:34.668000 audit: BPF prog-id=22 op=UNLOAD Dec 13 02:07:34.668000 audit: BPF prog-id=23 op=UNLOAD Dec 13 02:07:34.668000 audit: BPF prog-id=37 op=LOAD Dec 13 02:07:34.668000 audit: BPF prog-id=38 op=LOAD Dec 13 02:07:34.668000 audit: BPF prog-id=24 op=UNLOAD Dec 13 02:07:34.668000 audit: BPF prog-id=25 op=UNLOAD Dec 13 02:07:34.670243 systemd[1]: Finished ldconfig.service. Dec 13 02:07:34.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:34.672109 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 02:07:34.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:34.675549 systemd[1]: Starting audit-rules.service... Dec 13 02:07:34.677156 systemd[1]: Starting clean-ca-certificates.service... Dec 13 02:07:34.679089 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 02:07:34.680000 audit: BPF prog-id=39 op=LOAD Dec 13 02:07:34.681549 systemd[1]: Starting systemd-resolved.service... Dec 13 02:07:34.683000 audit: BPF prog-id=40 op=LOAD Dec 13 02:07:34.684056 systemd[1]: Starting systemd-timesyncd.service... Dec 13 02:07:34.685698 systemd[1]: Starting systemd-update-utmp.service... Dec 13 02:07:34.687033 systemd[1]: Finished clean-ca-certificates.service. Dec 13 02:07:34.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:34.690000 audit[1154]: SYSTEM_BOOT pid=1154 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 02:07:34.689944 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 02:07:34.692972 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:07:34.693317 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 02:07:34.694542 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 02:07:34.696306 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 02:07:34.698180 systemd[1]: Starting modprobe@loop.service... Dec 13 02:07:34.699009 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Dec 13 02:07:34.699160 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:07:34.699286 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 02:07:34.699392 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:07:34.700568 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:07:34.700688 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 02:07:34.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:34.701000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:34.702120 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:07:34.702220 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 02:07:34.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:34.703000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:34.703615 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:07:34.703722 systemd[1]: Finished modprobe@loop.service. Dec 13 02:07:34.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:34.704000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:34.705681 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:07:34.705803 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 02:07:34.706250 systemd[1]: Finished systemd-update-utmp.service. Dec 13 02:07:34.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:34.708854 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:07:34.709111 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 02:07:34.710115 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 02:07:34.711831 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 02:07:34.713506 systemd[1]: Starting modprobe@loop.service... 
Dec 13 02:07:34.715000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 02:07:34.715000 audit[1169]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe46804d10 a2=420 a3=0 items=0 ppid=1143 pid=1169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:07:34.715000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 02:07:34.716093 augenrules[1169]: No rules Dec 13 02:07:34.717161 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 02:07:34.717305 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:07:34.717404 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 02:07:34.717474 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:07:34.718390 systemd[1]: Finished audit-rules.service. Dec 13 02:07:34.719532 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 02:07:34.720840 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:07:34.720961 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 02:07:34.722166 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:07:34.722270 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 02:07:34.723412 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:07:34.723512 systemd[1]: Finished modprobe@loop.service. Dec 13 02:07:34.724626 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:07:34.724720 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 02:07:34.725962 systemd[1]: Starting systemd-update-done.service... Dec 13 02:07:34.729005 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:07:34.729645 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 02:07:34.731062 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 02:07:34.732931 systemd[1]: Starting modprobe@drm.service... Dec 13 02:07:34.734603 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 02:07:34.736289 systemd[1]: Starting modprobe@loop.service... Dec 13 02:07:34.737130 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 02:07:34.737237 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:07:34.738277 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 02:07:34.739486 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Dec 13 02:07:34.739592 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:07:34.740571 systemd[1]: Finished systemd-update-done.service. Dec 13 02:07:34.741838 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:07:34.742091 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 02:07:34.743223 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 02:07:34.743421 systemd[1]: Finished modprobe@drm.service. Dec 13 02:07:34.744518 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:07:34.744725 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 02:07:34.745789 systemd-resolved[1149]: Positive Trust Anchors: Dec 13 02:07:34.745803 systemd-resolved[1149]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 02:07:34.745831 systemd-resolved[1149]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 02:07:34.746282 systemd[1]: Started systemd-timesyncd.service. Dec 13 02:07:35.410956 systemd-timesyncd[1153]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 13 02:07:35.410996 systemd-timesyncd[1153]: Initial clock synchronization to Fri 2024-12-13 02:07:35.410899 UTC. Dec 13 02:07:35.412032 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:07:35.412261 systemd[1]: Finished modprobe@loop.service. Dec 13 02:07:35.414606 systemd[1]: Finished ensure-sysext.service. Dec 13 02:07:35.416365 systemd[1]: Reached target time-set.target. Dec 13 02:07:35.417245 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:07:35.417270 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 02:07:35.417664 systemd-resolved[1149]: Defaulting to hostname 'linux'. Dec 13 02:07:35.419027 systemd[1]: Started systemd-resolved.service. Dec 13 02:07:35.419994 systemd[1]: Reached target network.target. Dec 13 02:07:35.420775 systemd[1]: Reached target nss-lookup.target. Dec 13 02:07:35.421590 systemd[1]: Reached target sysinit.target. Dec 13 02:07:35.422437 systemd[1]: Started motdgen.path. Dec 13 02:07:35.423146 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 02:07:35.424368 systemd[1]: Started logrotate.timer. Dec 13 02:07:35.425158 systemd[1]: Started mdadm.timer. Dec 13 02:07:35.425816 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 02:07:35.426670 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 02:07:35.426694 systemd[1]: Reached target paths.target. Dec 13 02:07:35.427433 systemd[1]: Reached target timers.target. Dec 13 02:07:35.428467 systemd[1]: Listening on dbus.socket. Dec 13 02:07:35.430263 systemd[1]: Starting docker.socket... Dec 13 02:07:35.433003 systemd[1]: Listening on sshd.socket. 
Dec 13 02:07:35.433834 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:07:35.434164 systemd[1]: Listening on docker.socket. Dec 13 02:07:35.434964 systemd[1]: Reached target sockets.target. Dec 13 02:07:35.435754 systemd[1]: Reached target basic.target. Dec 13 02:07:35.436545 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 02:07:35.436568 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 02:07:35.437345 systemd[1]: Starting containerd.service... Dec 13 02:07:35.438906 systemd[1]: Starting dbus.service... Dec 13 02:07:35.440675 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 02:07:35.442448 systemd[1]: Starting extend-filesystems.service... Dec 13 02:07:35.443361 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 02:07:35.444284 systemd[1]: Starting motdgen.service... Dec 13 02:07:35.445928 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 02:07:35.447682 systemd[1]: Starting sshd-keygen.service... Dec 13 02:07:35.450378 systemd[1]: Starting systemd-logind.service... Dec 13 02:07:35.452353 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:07:35.454240 jq[1185]: false Dec 13 02:07:35.452427 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 02:07:35.452904 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 02:07:35.453693 systemd[1]: Starting update-engine.service... Dec 13 02:07:35.457981 jq[1202]: true Dec 13 02:07:35.455412 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 02:07:35.457668 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 02:07:35.457828 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Dec 13 02:07:35.458119 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 02:07:35.458270 systemd[1]: Finished ssh-key-proc-cmdline.service. Dec 13 02:07:35.463520 jq[1204]: true Dec 13 02:07:35.464032 extend-filesystems[1186]: Found loop1 Dec 13 02:07:35.465162 extend-filesystems[1186]: Found sr0 Dec 13 02:07:35.465162 extend-filesystems[1186]: Found vda Dec 13 02:07:35.465162 extend-filesystems[1186]: Found vda1 Dec 13 02:07:35.465162 extend-filesystems[1186]: Found vda2 Dec 13 02:07:35.465162 extend-filesystems[1186]: Found vda3 Dec 13 02:07:35.465162 extend-filesystems[1186]: Found usr Dec 13 02:07:35.465162 extend-filesystems[1186]: Found vda4 Dec 13 02:07:35.465162 extend-filesystems[1186]: Found vda6 Dec 13 02:07:35.465162 extend-filesystems[1186]: Found vda7 Dec 13 02:07:35.465162 extend-filesystems[1186]: Found vda9 Dec 13 02:07:35.465162 extend-filesystems[1186]: Checking size of /dev/vda9 Dec 13 02:07:35.471336 systemd[1]: Started dbus.service. 
Dec 13 02:07:35.471159 dbus-daemon[1184]: [system] SELinux support is enabled Dec 13 02:07:35.473976 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 02:07:35.474003 systemd[1]: Reached target system-config.target. Dec 13 02:07:35.474544 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 02:07:35.474563 systemd[1]: Reached target user-config.target. Dec 13 02:07:35.484503 env[1205]: time="2024-12-13T02:07:35.484437133Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 02:07:35.487504 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 02:07:35.487653 systemd[1]: Finished motdgen.service. Dec 13 02:07:35.494673 systemd-logind[1193]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 02:07:35.494689 systemd-logind[1193]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 02:07:35.503104 systemd-logind[1193]: New seat seat0. Dec 13 02:07:35.507317 extend-filesystems[1186]: Resized partition /dev/vda9 Dec 13 02:07:35.509498 extend-filesystems[1235]: resize2fs 1.46.5 (30-Dec-2021) Dec 13 02:07:35.517900 update_engine[1198]: I1213 02:07:35.506356 1198 main.cc:92] Flatcar Update Engine starting Dec 13 02:07:35.517900 update_engine[1198]: I1213 02:07:35.511065 1198 update_check_scheduler.cc:74] Next update check in 4m28s Dec 13 02:07:35.518079 env[1205]: time="2024-12-13T02:07:35.512902556Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 02:07:35.518079 env[1205]: time="2024-12-13T02:07:35.513028382Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:07:35.518079 env[1205]: time="2024-12-13T02:07:35.513974246Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:07:35.518079 env[1205]: time="2024-12-13T02:07:35.513994093Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:07:35.518079 env[1205]: time="2024-12-13T02:07:35.514160776Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:07:35.518079 env[1205]: time="2024-12-13T02:07:35.514174411Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 02:07:35.518079 env[1205]: time="2024-12-13T02:07:35.514185382Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 02:07:35.518079 env[1205]: time="2024-12-13T02:07:35.514194178Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 02:07:35.518079 env[1205]: time="2024-12-13T02:07:35.514264681Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Dec 13 02:07:35.518079 env[1205]: time="2024-12-13T02:07:35.517315101Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:07:35.511028 systemd[1]: Started update-engine.service. Dec 13 02:07:35.518361 env[1205]: time="2024-12-13T02:07:35.517427101Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:07:35.518361 env[1205]: time="2024-12-13T02:07:35.517451447Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 02:07:35.518601 env[1205]: time="2024-12-13T02:07:35.517937759Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 02:07:35.518660 env[1205]: time="2024-12-13T02:07:35.518602125Z" level=info msg="metadata content store policy set" policy=shared Dec 13 02:07:35.519245 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Dec 13 02:07:35.522976 systemd[1]: Started systemd-logind.service. Dec 13 02:07:35.527046 systemd[1]: Started locksmithd.service. Dec 13 02:07:35.539588 bash[1234]: Updated "/home/core/.ssh/authorized_keys" Dec 13 02:07:35.559922 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Dec 13 02:07:35.559979 extend-filesystems[1235]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 02:07:35.559979 extend-filesystems[1235]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 02:07:35.559979 extend-filesystems[1235]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Dec 13 02:07:35.539900 systemd[1]: Finished update-ssh-keys-after-ignition.service. Dec 13 02:07:35.564572 extend-filesystems[1186]: Resized filesystem in /dev/vda9 Dec 13 02:07:35.560665 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 02:07:35.560799 systemd[1]: Finished extend-filesystems.service. Dec 13 02:07:35.565901 env[1205]: time="2024-12-13T02:07:35.565846154Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 02:07:35.565943 env[1205]: time="2024-12-13T02:07:35.565899735Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 02:07:35.565943 env[1205]: time="2024-12-13T02:07:35.565912940Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 02:07:35.566028 env[1205]: time="2024-12-13T02:07:35.565989754Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 02:07:35.566061 env[1205]: time="2024-12-13T02:07:35.566028526Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 02:07:35.566061 env[1205]: time="2024-12-13T02:07:35.566041411Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 02:07:35.566061 env[1205]: time="2024-12-13T02:07:35.566052561Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 02:07:35.566153 env[1205]: time="2024-12-13T02:07:35.566065195Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Dec 13 02:07:35.566153 env[1205]: time="2024-12-13T02:07:35.566087928Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Dec 13 02:07:35.566153 env[1205]: time="2024-12-13T02:07:35.566100672Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 02:07:35.566153 env[1205]: time="2024-12-13T02:07:35.566113185Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 02:07:35.566153 env[1205]: time="2024-12-13T02:07:35.566123755Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 02:07:35.566296 env[1205]: time="2024-12-13T02:07:35.566241987Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 02:07:35.566333 env[1205]: time="2024-12-13T02:07:35.566314593Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 02:07:35.566627 env[1205]: time="2024-12-13T02:07:35.566583748Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 02:07:35.566627 env[1205]: time="2024-12-13T02:07:35.566626989Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 02:07:35.566683 env[1205]: time="2024-12-13T02:07:35.566639693Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 02:07:35.566714 env[1205]: time="2024-12-13T02:07:35.566698994Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 02:07:35.566736 env[1205]: time="2024-12-13T02:07:35.566714523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 02:07:35.566736 env[1205]: time="2024-12-13T02:07:35.566726956Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 02:07:35.566811 env[1205]: time="2024-12-13T02:07:35.566737446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 02:07:35.566811 env[1205]: time="2024-12-13T02:07:35.566807768Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 02:07:35.566861 env[1205]: time="2024-12-13T02:07:35.566818929Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 02:07:35.566861 env[1205]: time="2024-12-13T02:07:35.566829809Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 02:07:35.566861 env[1205]: time="2024-12-13T02:07:35.566839447Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 02:07:35.566924 env[1205]: time="2024-12-13T02:07:35.566866488Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 02:07:35.566982 env[1205]: time="2024-12-13T02:07:35.566961516Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 02:07:35.566982 env[1205]: time="2024-12-13T02:07:35.566979309Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Dec 13 02:07:35.567026 env[1205]: time="2024-12-13T02:07:35.566991352Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 02:07:35.567026 env[1205]: time="2024-12-13T02:07:35.567002112Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 02:07:35.567026 env[1205]: time="2024-12-13T02:07:35.567016048Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 02:07:35.567026 env[1205]: time="2024-12-13T02:07:35.567025466Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 02:07:35.567104 env[1205]: time="2024-12-13T02:07:35.567044902Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 02:07:35.567104 env[1205]: time="2024-12-13T02:07:35.567079648Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 02:07:35.567304 env[1205]: time="2024-12-13T02:07:35.567255447Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 02:07:35.567304 env[1205]: time="2024-12-13T02:07:35.567305261Z" level=info msg="Connect containerd service" Dec 13 02:07:35.567888 env[1205]: time="2024-12-13T02:07:35.567333474Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 
13 02:07:35.567888 env[1205]: time="2024-12-13T02:07:35.567819385Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 02:07:35.568059 env[1205]: time="2024-12-13T02:07:35.568042864Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 02:07:35.568082 env[1205]: time="2024-12-13T02:07:35.568032104Z" level=info msg="Start subscribing containerd event" Dec 13 02:07:35.568082 env[1205]: time="2024-12-13T02:07:35.568079032Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 02:07:35.568124 env[1205]: time="2024-12-13T02:07:35.568087979Z" level=info msg="Start recovering state" Dec 13 02:07:35.568167 env[1205]: time="2024-12-13T02:07:35.568152980Z" level=info msg="Start event monitor" Dec 13 02:07:35.568190 env[1205]: time="2024-12-13T02:07:35.568168970Z" level=info msg="Start snapshots syncer" Dec 13 02:07:35.568190 env[1205]: time="2024-12-13T02:07:35.568178749Z" level=info msg="Start cni network conf syncer for default" Dec 13 02:07:35.568190 env[1205]: time="2024-12-13T02:07:35.568186864Z" level=info msg="Start streaming server" Dec 13 02:07:35.568176 systemd[1]: Started containerd.service. Dec 13 02:07:35.570102 env[1205]: time="2024-12-13T02:07:35.569508172Z" level=info msg="containerd successfully booted in 0.085608s" Dec 13 02:07:35.577946 locksmithd[1237]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 02:07:36.182422 systemd-networkd[1039]: eth0: Gained IPv6LL Dec 13 02:07:36.184098 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 02:07:36.185433 systemd[1]: Reached target network-online.target. Dec 13 02:07:36.187596 systemd[1]: Starting kubelet.service... Dec 13 02:07:36.716418 systemd[1]: Started kubelet.service. Dec 13 02:07:37.065143 sshd_keygen[1222]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 02:07:37.084090 systemd[1]: Finished sshd-keygen.service. Dec 13 02:07:37.086868 systemd[1]: Starting issuegen.service... Dec 13 02:07:37.092012 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 02:07:37.092170 systemd[1]: Finished issuegen.service. Dec 13 02:07:37.094500 systemd[1]: Starting systemd-user-sessions.service... Dec 13 02:07:37.099997 systemd[1]: Finished systemd-user-sessions.service. Dec 13 02:07:37.102354 systemd[1]: Started getty@tty1.service. Dec 13 02:07:37.104384 systemd[1]: Started serial-getty@ttyS0.service. Dec 13 02:07:37.105537 systemd[1]: Reached target getty.target. Dec 13 02:07:37.106522 systemd[1]: Reached target multi-user.target. Dec 13 02:07:37.108878 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 02:07:37.117174 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 02:07:37.117393 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 02:07:37.118595 systemd[1]: Startup finished in 599ms (kernel) + 4.191s (initrd) + 5.553s (userspace) = 10.344s. 
Dec 13 02:07:37.161242 kubelet[1250]: E1213 02:07:37.161189 1250 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:07:37.162819 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:07:37.162936 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:07:45.221187 systemd[1]: Created slice system-sshd.slice. Dec 13 02:07:45.222117 systemd[1]: Started sshd@0-10.0.0.138:22-10.0.0.1:39832.service. Dec 13 02:07:45.265691 sshd[1274]: Accepted publickey for core from 10.0.0.1 port 39832 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 02:07:45.267014 sshd[1274]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:07:45.274423 systemd-logind[1193]: New session 1 of user core. Dec 13 02:07:45.275320 systemd[1]: Created slice user-500.slice. Dec 13 02:07:45.276248 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 02:07:45.282742 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 02:07:45.283714 systemd[1]: Starting user@500.service... Dec 13 02:07:45.286090 (systemd)[1277]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:07:45.348873 systemd[1277]: Queued start job for default target default.target. Dec 13 02:07:45.349256 systemd[1277]: Reached target paths.target. Dec 13 02:07:45.349280 systemd[1277]: Reached target sockets.target. Dec 13 02:07:45.349295 systemd[1277]: Reached target timers.target. Dec 13 02:07:45.349309 systemd[1277]: Reached target basic.target. Dec 13 02:07:45.349351 systemd[1277]: Reached target default.target. Dec 13 02:07:45.349379 systemd[1277]: Startup finished in 58ms. Dec 13 02:07:45.349519 systemd[1]: Started user@500.service. Dec 13 02:07:45.350553 systemd[1]: Started session-1.scope. Dec 13 02:07:45.400829 systemd[1]: Started sshd@1-10.0.0.138:22-10.0.0.1:39840.service. Dec 13 02:07:45.441978 sshd[1286]: Accepted publickey for core from 10.0.0.1 port 39840 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 02:07:45.443195 sshd[1286]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:07:45.446518 systemd-logind[1193]: New session 2 of user core. Dec 13 02:07:45.447474 systemd[1]: Started session-2.scope. Dec 13 02:07:45.499994 sshd[1286]: pam_unix(sshd:session): session closed for user core Dec 13 02:07:45.502709 systemd[1]: sshd@1-10.0.0.138:22-10.0.0.1:39840.service: Deactivated successfully. Dec 13 02:07:45.503191 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 02:07:45.503669 systemd-logind[1193]: Session 2 logged out. Waiting for processes to exit. Dec 13 02:07:45.504528 systemd[1]: Started sshd@2-10.0.0.138:22-10.0.0.1:39850.service. Dec 13 02:07:45.505632 systemd-logind[1193]: Removed session 2. Dec 13 02:07:45.543239 sshd[1292]: Accepted publickey for core from 10.0.0.1 port 39850 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 02:07:45.544390 sshd[1292]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:07:45.547467 systemd-logind[1193]: New session 3 of user core. Dec 13 02:07:45.548254 systemd[1]: Started session-3.scope. 
Dec 13 02:07:45.595928 sshd[1292]: pam_unix(sshd:session): session closed for user core Dec 13 02:07:45.598546 systemd[1]: sshd@2-10.0.0.138:22-10.0.0.1:39850.service: Deactivated successfully. Dec 13 02:07:45.599066 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 02:07:45.599552 systemd-logind[1193]: Session 3 logged out. Waiting for processes to exit. Dec 13 02:07:45.600441 systemd[1]: Started sshd@3-10.0.0.138:22-10.0.0.1:39854.service. Dec 13 02:07:45.601094 systemd-logind[1193]: Removed session 3. Dec 13 02:07:45.639371 sshd[1298]: Accepted publickey for core from 10.0.0.1 port 39854 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 02:07:45.640181 sshd[1298]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:07:45.643373 systemd-logind[1193]: New session 4 of user core. Dec 13 02:07:45.643991 systemd[1]: Started session-4.scope. Dec 13 02:07:45.695577 sshd[1298]: pam_unix(sshd:session): session closed for user core Dec 13 02:07:45.698680 systemd[1]: Started sshd@4-10.0.0.138:22-10.0.0.1:39870.service. Dec 13 02:07:45.699246 systemd[1]: sshd@3-10.0.0.138:22-10.0.0.1:39854.service: Deactivated successfully. Dec 13 02:07:45.699824 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 02:07:45.700359 systemd-logind[1193]: Session 4 logged out. Waiting for processes to exit. Dec 13 02:07:45.701168 systemd-logind[1193]: Removed session 4. Dec 13 02:07:45.737778 sshd[1303]: Accepted publickey for core from 10.0.0.1 port 39870 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 02:07:45.738776 sshd[1303]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:07:45.741664 systemd-logind[1193]: New session 5 of user core. Dec 13 02:07:45.742569 systemd[1]: Started session-5.scope. Dec 13 02:07:45.806664 sudo[1307]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 02:07:45.806837 sudo[1307]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 02:07:45.817609 systemd[1]: Starting coreos-metadata.service... Dec 13 02:07:45.823468 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 13 02:07:45.823628 systemd[1]: Finished coreos-metadata.service. Dec 13 02:07:46.589433 systemd[1]: Stopped kubelet.service. Dec 13 02:07:46.591384 systemd[1]: Starting kubelet.service... Dec 13 02:07:46.606646 systemd[1]: Reloading. Dec 13 02:07:46.677298 /usr/lib/systemd/system-generators/torcx-generator[1372]: time="2024-12-13T02:07:46Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 02:07:46.677635 /usr/lib/systemd/system-generators/torcx-generator[1372]: time="2024-12-13T02:07:46Z" level=info msg="torcx already run" Dec 13 02:07:46.896918 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 02:07:46.896934 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 02:07:46.913593 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Dec 13 02:07:46.985701 systemd[1]: Started kubelet.service. Dec 13 02:07:46.989199 systemd[1]: Stopping kubelet.service... Dec 13 02:07:46.989493 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 02:07:46.989689 systemd[1]: Stopped kubelet.service. Dec 13 02:07:46.991081 systemd[1]: Starting kubelet.service... Dec 13 02:07:47.079419 systemd[1]: Started kubelet.service. Dec 13 02:07:47.115175 kubelet[1421]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:07:47.115175 kubelet[1421]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 02:07:47.115175 kubelet[1421]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:07:47.115568 kubelet[1421]: I1213 02:07:47.115253 1421 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 02:07:47.389053 kubelet[1421]: I1213 02:07:47.388931 1421 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 02:07:47.389053 kubelet[1421]: I1213 02:07:47.388962 1421 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 02:07:47.389236 kubelet[1421]: I1213 02:07:47.389171 1421 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 02:07:47.398833 kubelet[1421]: I1213 02:07:47.398793 1421 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 02:07:47.412354 kubelet[1421]: I1213 02:07:47.412300 1421 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 02:07:47.413035 kubelet[1421]: I1213 02:07:47.412990 1421 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 02:07:47.413194 kubelet[1421]: I1213 02:07:47.413027 1421 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.138","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 02:07:47.413299 kubelet[1421]: I1213 02:07:47.413197 1421 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 02:07:47.413299 kubelet[1421]: I1213 02:07:47.413206 1421 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 02:07:47.413365 kubelet[1421]: I1213 02:07:47.413345 1421 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:07:47.414063 kubelet[1421]: I1213 02:07:47.414036 1421 kubelet.go:400] "Attempting to sync node with API server" Dec 13 02:07:47.414063 kubelet[1421]: I1213 02:07:47.414055 1421 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 02:07:47.414137 kubelet[1421]: I1213 02:07:47.414081 1421 kubelet.go:312] "Adding apiserver pod source" Dec 13 02:07:47.414137 kubelet[1421]: I1213 02:07:47.414100 1421 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 02:07:47.414203 kubelet[1421]: E1213 02:07:47.414124 1421 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:07:47.414203 kubelet[1421]: E1213 02:07:47.414184 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:07:47.418476 kubelet[1421]: I1213 02:07:47.418060 1421 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 02:07:47.423526 kubelet[1421]: I1213 02:07:47.423486 1421 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 02:07:47.423653 kubelet[1421]: W1213 02:07:47.423571 1421 probe.go:272] Flexvolume plugin 
directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 02:07:47.424533 kubelet[1421]: I1213 02:07:47.424514 1421 server.go:1264] "Started kubelet" Dec 13 02:07:47.424809 kubelet[1421]: I1213 02:07:47.424750 1421 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 02:07:47.425140 kubelet[1421]: I1213 02:07:47.425115 1421 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 02:07:47.425195 kubelet[1421]: I1213 02:07:47.424693 1421 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 02:07:47.428152 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Dec 13 02:07:47.428436 kubelet[1421]: I1213 02:07:47.428420 1421 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 02:07:47.429291 kubelet[1421]: I1213 02:07:47.429273 1421 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 02:07:47.429469 kubelet[1421]: I1213 02:07:47.429451 1421 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 02:07:47.429551 kubelet[1421]: I1213 02:07:47.429492 1421 reconciler.go:26] "Reconciler: start to sync state" Dec 13 02:07:47.432956 kubelet[1421]: I1213 02:07:47.432930 1421 server.go:455] "Adding debug handlers to kubelet server" Dec 13 02:07:47.443765 kubelet[1421]: E1213 02:07:47.443721 1421 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.138\" not found" node="10.0.0.138" Dec 13 02:07:47.445621 kubelet[1421]: E1213 02:07:47.445591 1421 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 02:07:47.445874 kubelet[1421]: I1213 02:07:47.445847 1421 factory.go:221] Registration of the systemd container factory successfully Dec 13 02:07:47.446031 kubelet[1421]: I1213 02:07:47.445949 1421 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 02:07:47.447407 kubelet[1421]: I1213 02:07:47.447382 1421 factory.go:221] Registration of the containerd container factory successfully Dec 13 02:07:47.481361 kubelet[1421]: I1213 02:07:47.481328 1421 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 02:07:47.481361 kubelet[1421]: I1213 02:07:47.481345 1421 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 02:07:47.481361 kubelet[1421]: I1213 02:07:47.481362 1421 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:07:47.530151 kubelet[1421]: I1213 02:07:47.530114 1421 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.138" Dec 13 02:07:47.723603 kubelet[1421]: I1213 02:07:47.723464 1421 kubelet_node_status.go:76] "Successfully registered node" node="10.0.0.138" Dec 13 02:07:47.723741 kubelet[1421]: I1213 02:07:47.723721 1421 policy_none.go:49] "None policy: Start" Dec 13 02:07:47.724388 kubelet[1421]: I1213 02:07:47.724375 1421 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 02:07:47.724447 kubelet[1421]: I1213 02:07:47.724400 1421 state_mem.go:35] "Initializing new in-memory state store" Dec 13 02:07:47.725634 kubelet[1421]: I1213 02:07:47.725617 1421 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Dec 13 
02:07:47.725991 env[1205]: time="2024-12-13T02:07:47.725944770Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 02:07:47.726393 kubelet[1421]: I1213 02:07:47.726364 1421 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Dec 13 02:07:47.730164 systemd[1]: Created slice kubepods.slice. Dec 13 02:07:47.731858 sudo[1307]: pam_unix(sudo:session): session closed for user root Dec 13 02:07:47.733077 sshd[1303]: pam_unix(sshd:session): session closed for user core Dec 13 02:07:47.735451 systemd[1]: sshd@4-10.0.0.138:22-10.0.0.1:39870.service: Deactivated successfully. Dec 13 02:07:47.736162 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 02:07:47.736988 systemd-logind[1193]: Session 5 logged out. Waiting for processes to exit. Dec 13 02:07:47.737397 systemd[1]: Created slice kubepods-burstable.slice. Dec 13 02:07:47.738029 systemd-logind[1193]: Removed session 5. Dec 13 02:07:47.740269 systemd[1]: Created slice kubepods-besteffort.slice. Dec 13 02:07:47.746881 kubelet[1421]: I1213 02:07:47.746852 1421 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 02:07:47.747068 kubelet[1421]: I1213 02:07:47.747029 1421 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 02:07:47.747261 kubelet[1421]: I1213 02:07:47.747248 1421 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 02:07:47.768428 kubelet[1421]: I1213 02:07:47.768359 1421 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 02:07:47.769360 kubelet[1421]: I1213 02:07:47.769343 1421 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 02:07:47.769477 kubelet[1421]: I1213 02:07:47.769366 1421 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 02:07:47.769477 kubelet[1421]: I1213 02:07:47.769394 1421 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 02:07:47.769477 kubelet[1421]: E1213 02:07:47.769436 1421 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Dec 13 02:07:48.390914 kubelet[1421]: I1213 02:07:48.390846 1421 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 13 02:07:48.391372 kubelet[1421]: W1213 02:07:48.391122 1421 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 13 02:07:48.391372 kubelet[1421]: W1213 02:07:48.391142 1421 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 13 02:07:48.391372 kubelet[1421]: W1213 02:07:48.391153 1421 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 13 02:07:48.414734 kubelet[1421]: E1213 02:07:48.414685 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:07:48.414734 kubelet[1421]: I1213 02:07:48.414689 1421 apiserver.go:52] "Watching apiserver" Dec 13 02:07:48.418033 kubelet[1421]: I1213 02:07:48.417973 1421 topology_manager.go:215] "Topology Admit Handler" podUID="64a69962-a54c-47c8-9317-23f7ce013b1e" podNamespace="kube-system" podName="cilium-kv6tr" Dec 13 02:07:48.418182 kubelet[1421]: I1213 02:07:48.418163 1421 topology_manager.go:215] "Topology Admit Handler" podUID="1d9b9ed7-0d7c-4b4d-9fa8-3cd32515a0a7" podNamespace="kube-system" podName="kube-proxy-jr2gw" Dec 13 02:07:48.424049 systemd[1]: Created slice kubepods-besteffort-pod1d9b9ed7_0d7c_4b4d_9fa8_3cd32515a0a7.slice. 
Dec 13 02:07:48.429806 kubelet[1421]: I1213 02:07:48.429774 1421 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 02:07:48.435850 kubelet[1421]: I1213 02:07:48.435793 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/64a69962-a54c-47c8-9317-23f7ce013b1e-host-proc-sys-kernel\") pod \"cilium-kv6tr\" (UID: \"64a69962-a54c-47c8-9317-23f7ce013b1e\") " pod="kube-system/cilium-kv6tr" Dec 13 02:07:48.435850 kubelet[1421]: I1213 02:07:48.435839 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1d9b9ed7-0d7c-4b4d-9fa8-3cd32515a0a7-xtables-lock\") pod \"kube-proxy-jr2gw\" (UID: \"1d9b9ed7-0d7c-4b4d-9fa8-3cd32515a0a7\") " pod="kube-system/kube-proxy-jr2gw" Dec 13 02:07:48.435967 kubelet[1421]: I1213 02:07:48.435860 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/64a69962-a54c-47c8-9317-23f7ce013b1e-cni-path\") pod \"cilium-kv6tr\" (UID: \"64a69962-a54c-47c8-9317-23f7ce013b1e\") " pod="kube-system/cilium-kv6tr" Dec 13 02:07:48.435967 kubelet[1421]: I1213 02:07:48.435892 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/64a69962-a54c-47c8-9317-23f7ce013b1e-cilium-config-path\") pod \"cilium-kv6tr\" (UID: \"64a69962-a54c-47c8-9317-23f7ce013b1e\") " pod="kube-system/cilium-kv6tr" Dec 13 02:07:48.435967 kubelet[1421]: I1213 02:07:48.435915 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/64a69962-a54c-47c8-9317-23f7ce013b1e-host-proc-sys-net\") pod \"cilium-kv6tr\" (UID: \"64a69962-a54c-47c8-9317-23f7ce013b1e\") " pod="kube-system/cilium-kv6tr" Dec 13 02:07:48.435967 kubelet[1421]: I1213 02:07:48.435939 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnn9d\" (UniqueName: \"kubernetes.io/projected/64a69962-a54c-47c8-9317-23f7ce013b1e-kube-api-access-hnn9d\") pod \"cilium-kv6tr\" (UID: \"64a69962-a54c-47c8-9317-23f7ce013b1e\") " pod="kube-system/cilium-kv6tr" Dec 13 02:07:48.435967 kubelet[1421]: I1213 02:07:48.435958 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/64a69962-a54c-47c8-9317-23f7ce013b1e-cilium-cgroup\") pod \"cilium-kv6tr\" (UID: \"64a69962-a54c-47c8-9317-23f7ce013b1e\") " pod="kube-system/cilium-kv6tr" Dec 13 02:07:48.436114 kubelet[1421]: I1213 02:07:48.435976 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7cjg\" (UniqueName: \"kubernetes.io/projected/1d9b9ed7-0d7c-4b4d-9fa8-3cd32515a0a7-kube-api-access-b7cjg\") pod \"kube-proxy-jr2gw\" (UID: \"1d9b9ed7-0d7c-4b4d-9fa8-3cd32515a0a7\") " pod="kube-system/kube-proxy-jr2gw" Dec 13 02:07:48.436114 kubelet[1421]: I1213 02:07:48.435991 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1d9b9ed7-0d7c-4b4d-9fa8-3cd32515a0a7-lib-modules\") pod \"kube-proxy-jr2gw\" (UID: \"1d9b9ed7-0d7c-4b4d-9fa8-3cd32515a0a7\") " 
pod="kube-system/kube-proxy-jr2gw" Dec 13 02:07:48.436114 kubelet[1421]: I1213 02:07:48.436010 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/64a69962-a54c-47c8-9317-23f7ce013b1e-bpf-maps\") pod \"cilium-kv6tr\" (UID: \"64a69962-a54c-47c8-9317-23f7ce013b1e\") " pod="kube-system/cilium-kv6tr" Dec 13 02:07:48.436114 kubelet[1421]: I1213 02:07:48.436021 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/64a69962-a54c-47c8-9317-23f7ce013b1e-hostproc\") pod \"cilium-kv6tr\" (UID: \"64a69962-a54c-47c8-9317-23f7ce013b1e\") " pod="kube-system/cilium-kv6tr" Dec 13 02:07:48.436114 kubelet[1421]: I1213 02:07:48.436032 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/64a69962-a54c-47c8-9317-23f7ce013b1e-etc-cni-netd\") pod \"cilium-kv6tr\" (UID: \"64a69962-a54c-47c8-9317-23f7ce013b1e\") " pod="kube-system/cilium-kv6tr" Dec 13 02:07:48.436114 kubelet[1421]: I1213 02:07:48.436044 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/64a69962-a54c-47c8-9317-23f7ce013b1e-lib-modules\") pod \"cilium-kv6tr\" (UID: \"64a69962-a54c-47c8-9317-23f7ce013b1e\") " pod="kube-system/cilium-kv6tr" Dec 13 02:07:48.436279 kubelet[1421]: I1213 02:07:48.436083 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/64a69962-a54c-47c8-9317-23f7ce013b1e-xtables-lock\") pod \"cilium-kv6tr\" (UID: \"64a69962-a54c-47c8-9317-23f7ce013b1e\") " pod="kube-system/cilium-kv6tr" Dec 13 02:07:48.436279 kubelet[1421]: I1213 02:07:48.436097 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/64a69962-a54c-47c8-9317-23f7ce013b1e-clustermesh-secrets\") pod \"cilium-kv6tr\" (UID: \"64a69962-a54c-47c8-9317-23f7ce013b1e\") " pod="kube-system/cilium-kv6tr" Dec 13 02:07:48.436279 kubelet[1421]: I1213 02:07:48.436110 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/64a69962-a54c-47c8-9317-23f7ce013b1e-hubble-tls\") pod \"cilium-kv6tr\" (UID: \"64a69962-a54c-47c8-9317-23f7ce013b1e\") " pod="kube-system/cilium-kv6tr" Dec 13 02:07:48.436279 kubelet[1421]: I1213 02:07:48.436134 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/64a69962-a54c-47c8-9317-23f7ce013b1e-cilium-run\") pod \"cilium-kv6tr\" (UID: \"64a69962-a54c-47c8-9317-23f7ce013b1e\") " pod="kube-system/cilium-kv6tr" Dec 13 02:07:48.436279 kubelet[1421]: I1213 02:07:48.436155 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1d9b9ed7-0d7c-4b4d-9fa8-3cd32515a0a7-kube-proxy\") pod \"kube-proxy-jr2gw\" (UID: \"1d9b9ed7-0d7c-4b4d-9fa8-3cd32515a0a7\") " pod="kube-system/kube-proxy-jr2gw" Dec 13 02:07:48.436331 systemd[1]: Created slice kubepods-burstable-pod64a69962_a54c_47c8_9317_23f7ce013b1e.slice. 
Dec 13 02:07:48.737732 kubelet[1421]: E1213 02:07:48.737036 1421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:07:48.737895 env[1205]: time="2024-12-13T02:07:48.737697678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jr2gw,Uid:1d9b9ed7-0d7c-4b4d-9fa8-3cd32515a0a7,Namespace:kube-system,Attempt:0,}" Dec 13 02:07:48.745877 kubelet[1421]: E1213 02:07:48.745846 1421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:07:48.746334 env[1205]: time="2024-12-13T02:07:48.746288549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kv6tr,Uid:64a69962-a54c-47c8-9317-23f7ce013b1e,Namespace:kube-system,Attempt:0,}" Dec 13 02:07:49.415639 kubelet[1421]: E1213 02:07:49.415583 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:07:49.465420 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1839158118.mount: Deactivated successfully. Dec 13 02:07:49.471125 env[1205]: time="2024-12-13T02:07:49.471080622Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:07:49.473681 env[1205]: time="2024-12-13T02:07:49.473644441Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:07:49.475192 env[1205]: time="2024-12-13T02:07:49.475131670Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:07:49.476205 env[1205]: time="2024-12-13T02:07:49.476180436Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:07:49.477491 env[1205]: time="2024-12-13T02:07:49.477442173Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:07:49.479398 env[1205]: time="2024-12-13T02:07:49.479374116Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:07:49.480918 env[1205]: time="2024-12-13T02:07:49.480879639Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:07:49.486388 env[1205]: time="2024-12-13T02:07:49.486356600Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:07:49.506338 env[1205]: time="2024-12-13T02:07:49.506270115Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:07:49.506489 env[1205]: time="2024-12-13T02:07:49.506349113Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:07:49.506489 env[1205]: time="2024-12-13T02:07:49.506383268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:07:49.506667 env[1205]: time="2024-12-13T02:07:49.506598561Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/685ffc9cdff269c367e2a24dce930e21b1948c0e7815714bc398358dd42b4459 pid=1477 runtime=io.containerd.runc.v2 Dec 13 02:07:49.507795 env[1205]: time="2024-12-13T02:07:49.506743263Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:07:49.507795 env[1205]: time="2024-12-13T02:07:49.506764643Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:07:49.507878 env[1205]: time="2024-12-13T02:07:49.507776400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:07:49.508060 env[1205]: time="2024-12-13T02:07:49.507997725Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d54d21bab558bb6a281087229423298dcbf6c42990a3d6e19cf9efdd4933219b pid=1489 runtime=io.containerd.runc.v2 Dec 13 02:07:49.533555 systemd[1]: Started cri-containerd-d54d21bab558bb6a281087229423298dcbf6c42990a3d6e19cf9efdd4933219b.scope. Dec 13 02:07:49.584941 systemd[1]: Started cri-containerd-685ffc9cdff269c367e2a24dce930e21b1948c0e7815714bc398358dd42b4459.scope. Dec 13 02:07:49.630186 env[1205]: time="2024-12-13T02:07:49.630134445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jr2gw,Uid:1d9b9ed7-0d7c-4b4d-9fa8-3cd32515a0a7,Namespace:kube-system,Attempt:0,} returns sandbox id \"d54d21bab558bb6a281087229423298dcbf6c42990a3d6e19cf9efdd4933219b\"" Dec 13 02:07:49.631309 kubelet[1421]: E1213 02:07:49.631278 1421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:07:49.632333 env[1205]: time="2024-12-13T02:07:49.632280049Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Dec 13 02:07:49.637142 env[1205]: time="2024-12-13T02:07:49.637093977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kv6tr,Uid:64a69962-a54c-47c8-9317-23f7ce013b1e,Namespace:kube-system,Attempt:0,} returns sandbox id \"685ffc9cdff269c367e2a24dce930e21b1948c0e7815714bc398358dd42b4459\"" Dec 13 02:07:49.638046 kubelet[1421]: E1213 02:07:49.638017 1421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:07:50.416399 kubelet[1421]: E1213 02:07:50.416344 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:07:51.149435 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2864376981.mount: Deactivated successfully. 
Dec 13 02:07:51.416886 kubelet[1421]: E1213 02:07:51.416784 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:07:51.872291 env[1205]: time="2024-12-13T02:07:51.872148822Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:07:51.874323 env[1205]: time="2024-12-13T02:07:51.874295989Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:07:51.875923 env[1205]: time="2024-12-13T02:07:51.875881422Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:07:51.877106 env[1205]: time="2024-12-13T02:07:51.877082555Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:07:51.877537 env[1205]: time="2024-12-13T02:07:51.877503344Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\"" Dec 13 02:07:51.878539 env[1205]: time="2024-12-13T02:07:51.878515793Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 02:07:51.879832 env[1205]: time="2024-12-13T02:07:51.879802917Z" level=info msg="CreateContainer within sandbox \"d54d21bab558bb6a281087229423298dcbf6c42990a3d6e19cf9efdd4933219b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 02:07:51.895506 env[1205]: time="2024-12-13T02:07:51.895465149Z" level=info msg="CreateContainer within sandbox \"d54d21bab558bb6a281087229423298dcbf6c42990a3d6e19cf9efdd4933219b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7cb2095b3bc7317f3e4e1c291c57d0a8eec61dbc08e176f2e19f01cbbeb591cd\"" Dec 13 02:07:51.896325 env[1205]: time="2024-12-13T02:07:51.896279726Z" level=info msg="StartContainer for \"7cb2095b3bc7317f3e4e1c291c57d0a8eec61dbc08e176f2e19f01cbbeb591cd\"" Dec 13 02:07:51.911061 systemd[1]: Started cri-containerd-7cb2095b3bc7317f3e4e1c291c57d0a8eec61dbc08e176f2e19f01cbbeb591cd.scope. 
Dec 13 02:07:51.991646 env[1205]: time="2024-12-13T02:07:51.991588925Z" level=info msg="StartContainer for \"7cb2095b3bc7317f3e4e1c291c57d0a8eec61dbc08e176f2e19f01cbbeb591cd\" returns successfully" Dec 13 02:07:52.418081 kubelet[1421]: E1213 02:07:52.418031 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:07:52.781610 kubelet[1421]: E1213 02:07:52.779442 1421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:07:52.788692 kubelet[1421]: I1213 02:07:52.788646 1421 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jr2gw" podStartSLOduration=3.542223318 podStartE2EDuration="5.788633829s" podCreationTimestamp="2024-12-13 02:07:47 +0000 UTC" firstStartedPulling="2024-12-13 02:07:49.631943518 +0000 UTC m=+2.546619251" lastFinishedPulling="2024-12-13 02:07:51.878354039 +0000 UTC m=+4.793029762" observedRunningTime="2024-12-13 02:07:52.788184106 +0000 UTC m=+5.702859839" watchObservedRunningTime="2024-12-13 02:07:52.788633829 +0000 UTC m=+5.703309562" Dec 13 02:07:53.418969 kubelet[1421]: E1213 02:07:53.418936 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:07:53.780827 kubelet[1421]: E1213 02:07:53.780719 1421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:07:54.419506 kubelet[1421]: E1213 02:07:54.419470 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:07:55.419816 kubelet[1421]: E1213 02:07:55.419763 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:07:56.420708 kubelet[1421]: E1213 02:07:56.420665 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:07:57.421528 kubelet[1421]: E1213 02:07:57.421491 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:07:58.127648 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount770149618.mount: Deactivated successfully. 
Dec 13 02:07:58.422582 kubelet[1421]: E1213 02:07:58.422460 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:07:59.423444 kubelet[1421]: E1213 02:07:59.423401 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:00.424598 kubelet[1421]: E1213 02:08:00.424513 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:01.425642 kubelet[1421]: E1213 02:08:01.425600 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:02.426080 kubelet[1421]: E1213 02:08:02.426011 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:02.930173 env[1205]: time="2024-12-13T02:08:02.930110568Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:02.932794 env[1205]: time="2024-12-13T02:08:02.932734889Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:02.934515 env[1205]: time="2024-12-13T02:08:02.934470965Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:02.934999 env[1205]: time="2024-12-13T02:08:02.934967747Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 02:08:02.937142 env[1205]: time="2024-12-13T02:08:02.937116847Z" level=info msg="CreateContainer within sandbox \"685ffc9cdff269c367e2a24dce930e21b1948c0e7815714bc398358dd42b4459\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 02:08:02.950916 env[1205]: time="2024-12-13T02:08:02.950860451Z" level=info msg="CreateContainer within sandbox \"685ffc9cdff269c367e2a24dce930e21b1948c0e7815714bc398358dd42b4459\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ac6b965f702f804ffdb437eb565b06d4c68264ae3c2d799e07f490e5ddcce645\"" Dec 13 02:08:02.951608 env[1205]: time="2024-12-13T02:08:02.951575582Z" level=info msg="StartContainer for \"ac6b965f702f804ffdb437eb565b06d4c68264ae3c2d799e07f490e5ddcce645\"" Dec 13 02:08:03.035721 systemd[1]: Started cri-containerd-ac6b965f702f804ffdb437eb565b06d4c68264ae3c2d799e07f490e5ddcce645.scope. Dec 13 02:08:03.115906 env[1205]: time="2024-12-13T02:08:03.115863711Z" level=info msg="StartContainer for \"ac6b965f702f804ffdb437eb565b06d4c68264ae3c2d799e07f490e5ddcce645\" returns successfully" Dec 13 02:08:03.125078 systemd[1]: cri-containerd-ac6b965f702f804ffdb437eb565b06d4c68264ae3c2d799e07f490e5ddcce645.scope: Deactivated successfully. 
Dec 13 02:08:03.366559 env[1205]: time="2024-12-13T02:08:03.366428039Z" level=info msg="shim disconnected" id=ac6b965f702f804ffdb437eb565b06d4c68264ae3c2d799e07f490e5ddcce645 Dec 13 02:08:03.366559 env[1205]: time="2024-12-13T02:08:03.366479095Z" level=warning msg="cleaning up after shim disconnected" id=ac6b965f702f804ffdb437eb565b06d4c68264ae3c2d799e07f490e5ddcce645 namespace=k8s.io Dec 13 02:08:03.366559 env[1205]: time="2024-12-13T02:08:03.366488132Z" level=info msg="cleaning up dead shim" Dec 13 02:08:03.372647 env[1205]: time="2024-12-13T02:08:03.372589965Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:08:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1757 runtime=io.containerd.runc.v2\n" Dec 13 02:08:03.426706 kubelet[1421]: E1213 02:08:03.426653 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:03.795566 kubelet[1421]: E1213 02:08:03.795442 1421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:08:03.796881 env[1205]: time="2024-12-13T02:08:03.796841788Z" level=info msg="CreateContainer within sandbox \"685ffc9cdff269c367e2a24dce930e21b1948c0e7815714bc398358dd42b4459\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 02:08:03.813388 env[1205]: time="2024-12-13T02:08:03.813308740Z" level=info msg="CreateContainer within sandbox \"685ffc9cdff269c367e2a24dce930e21b1948c0e7815714bc398358dd42b4459\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5d5b302d0d3c326602cb987c1895aa8c09baa7a2976ac5b02b9082c336eb9009\"" Dec 13 02:08:03.813805 env[1205]: time="2024-12-13T02:08:03.813775224Z" level=info msg="StartContainer for \"5d5b302d0d3c326602cb987c1895aa8c09baa7a2976ac5b02b9082c336eb9009\"" Dec 13 02:08:03.911425 systemd[1]: Started cri-containerd-5d5b302d0d3c326602cb987c1895aa8c09baa7a2976ac5b02b9082c336eb9009.scope. Dec 13 02:08:03.945300 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac6b965f702f804ffdb437eb565b06d4c68264ae3c2d799e07f490e5ddcce645-rootfs.mount: Deactivated successfully. Dec 13 02:08:04.103118 env[1205]: time="2024-12-13T02:08:04.102964538Z" level=info msg="StartContainer for \"5d5b302d0d3c326602cb987c1895aa8c09baa7a2976ac5b02b9082c336eb9009\" returns successfully" Dec 13 02:08:04.112077 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 02:08:04.112282 systemd[1]: Stopped systemd-sysctl.service. Dec 13 02:08:04.113379 systemd[1]: Stopping systemd-sysctl.service... Dec 13 02:08:04.114834 systemd[1]: Starting systemd-sysctl.service... Dec 13 02:08:04.116515 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 02:08:04.119584 systemd[1]: cri-containerd-5d5b302d0d3c326602cb987c1895aa8c09baa7a2976ac5b02b9082c336eb9009.scope: Deactivated successfully. Dec 13 02:08:04.122407 systemd[1]: Finished systemd-sysctl.service. Dec 13 02:08:04.138738 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5d5b302d0d3c326602cb987c1895aa8c09baa7a2976ac5b02b9082c336eb9009-rootfs.mount: Deactivated successfully. 
Dec 13 02:08:04.144670 env[1205]: time="2024-12-13T02:08:04.144620988Z" level=info msg="shim disconnected" id=5d5b302d0d3c326602cb987c1895aa8c09baa7a2976ac5b02b9082c336eb9009 Dec 13 02:08:04.144743 env[1205]: time="2024-12-13T02:08:04.144679699Z" level=warning msg="cleaning up after shim disconnected" id=5d5b302d0d3c326602cb987c1895aa8c09baa7a2976ac5b02b9082c336eb9009 namespace=k8s.io Dec 13 02:08:04.144743 env[1205]: time="2024-12-13T02:08:04.144689096Z" level=info msg="cleaning up dead shim" Dec 13 02:08:04.150602 env[1205]: time="2024-12-13T02:08:04.150574063Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:08:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1821 runtime=io.containerd.runc.v2\n" Dec 13 02:08:04.427641 kubelet[1421]: E1213 02:08:04.427522 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:04.800502 kubelet[1421]: E1213 02:08:04.800311 1421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:08:04.801848 env[1205]: time="2024-12-13T02:08:04.801812146Z" level=info msg="CreateContainer within sandbox \"685ffc9cdff269c367e2a24dce930e21b1948c0e7815714bc398358dd42b4459\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 02:08:04.820123 env[1205]: time="2024-12-13T02:08:04.820087319Z" level=info msg="CreateContainer within sandbox \"685ffc9cdff269c367e2a24dce930e21b1948c0e7815714bc398358dd42b4459\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ca196953f4f6ef78995b8906699fc84232f0f185b503827efb29c75c111237cf\"" Dec 13 02:08:04.820563 env[1205]: time="2024-12-13T02:08:04.820537914Z" level=info msg="StartContainer for \"ca196953f4f6ef78995b8906699fc84232f0f185b503827efb29c75c111237cf\"" Dec 13 02:08:04.834587 systemd[1]: Started cri-containerd-ca196953f4f6ef78995b8906699fc84232f0f185b503827efb29c75c111237cf.scope. Dec 13 02:08:04.930637 env[1205]: time="2024-12-13T02:08:04.930506454Z" level=info msg="StartContainer for \"ca196953f4f6ef78995b8906699fc84232f0f185b503827efb29c75c111237cf\" returns successfully" Dec 13 02:08:04.931091 systemd[1]: cri-containerd-ca196953f4f6ef78995b8906699fc84232f0f185b503827efb29c75c111237cf.scope: Deactivated successfully. Dec 13 02:08:04.946754 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ca196953f4f6ef78995b8906699fc84232f0f185b503827efb29c75c111237cf-rootfs.mount: Deactivated successfully. 
Dec 13 02:08:04.950234 env[1205]: time="2024-12-13T02:08:04.950179328Z" level=info msg="shim disconnected" id=ca196953f4f6ef78995b8906699fc84232f0f185b503827efb29c75c111237cf Dec 13 02:08:04.950376 env[1205]: time="2024-12-13T02:08:04.950276600Z" level=warning msg="cleaning up after shim disconnected" id=ca196953f4f6ef78995b8906699fc84232f0f185b503827efb29c75c111237cf namespace=k8s.io Dec 13 02:08:04.950376 env[1205]: time="2024-12-13T02:08:04.950287631Z" level=info msg="cleaning up dead shim" Dec 13 02:08:04.956527 env[1205]: time="2024-12-13T02:08:04.956473993Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:08:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1876 runtime=io.containerd.runc.v2\n" Dec 13 02:08:05.427951 kubelet[1421]: E1213 02:08:05.427910 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:05.802736 kubelet[1421]: E1213 02:08:05.802431 1421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:08:05.803802 env[1205]: time="2024-12-13T02:08:05.803768021Z" level=info msg="CreateContainer within sandbox \"685ffc9cdff269c367e2a24dce930e21b1948c0e7815714bc398358dd42b4459\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 02:08:05.820574 env[1205]: time="2024-12-13T02:08:05.820527652Z" level=info msg="CreateContainer within sandbox \"685ffc9cdff269c367e2a24dce930e21b1948c0e7815714bc398358dd42b4459\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8c15e46dba9736c8c20ead3bc2e09da38e91aafc769755b02069d369011a08c9\"" Dec 13 02:08:05.821076 env[1205]: time="2024-12-13T02:08:05.821055722Z" level=info msg="StartContainer for \"8c15e46dba9736c8c20ead3bc2e09da38e91aafc769755b02069d369011a08c9\"" Dec 13 02:08:05.832857 systemd[1]: Started cri-containerd-8c15e46dba9736c8c20ead3bc2e09da38e91aafc769755b02069d369011a08c9.scope. Dec 13 02:08:05.855113 systemd[1]: cri-containerd-8c15e46dba9736c8c20ead3bc2e09da38e91aafc769755b02069d369011a08c9.scope: Deactivated successfully. Dec 13 02:08:05.857402 env[1205]: time="2024-12-13T02:08:05.857371126Z" level=info msg="StartContainer for \"8c15e46dba9736c8c20ead3bc2e09da38e91aafc769755b02069d369011a08c9\" returns successfully" Dec 13 02:08:05.877495 env[1205]: time="2024-12-13T02:08:05.877438099Z" level=info msg="shim disconnected" id=8c15e46dba9736c8c20ead3bc2e09da38e91aafc769755b02069d369011a08c9 Dec 13 02:08:05.877495 env[1205]: time="2024-12-13T02:08:05.877490497Z" level=warning msg="cleaning up after shim disconnected" id=8c15e46dba9736c8c20ead3bc2e09da38e91aafc769755b02069d369011a08c9 namespace=k8s.io Dec 13 02:08:05.877495 env[1205]: time="2024-12-13T02:08:05.877500346Z" level=info msg="cleaning up dead shim" Dec 13 02:08:05.883887 env[1205]: time="2024-12-13T02:08:05.883843131Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:08:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1928 runtime=io.containerd.runc.v2\n" Dec 13 02:08:05.946085 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8c15e46dba9736c8c20ead3bc2e09da38e91aafc769755b02069d369011a08c9-rootfs.mount: Deactivated successfully. 
Dec 13 02:08:06.429013 kubelet[1421]: E1213 02:08:06.428942 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:06.805776 kubelet[1421]: E1213 02:08:06.805476 1421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:08:06.807543 env[1205]: time="2024-12-13T02:08:06.807495378Z" level=info msg="CreateContainer within sandbox \"685ffc9cdff269c367e2a24dce930e21b1948c0e7815714bc398358dd42b4459\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 02:08:06.830256 env[1205]: time="2024-12-13T02:08:06.830200799Z" level=info msg="CreateContainer within sandbox \"685ffc9cdff269c367e2a24dce930e21b1948c0e7815714bc398358dd42b4459\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"968e4426e668dc52458d2d2f91f15df0aef432c604305ff6b6977dd287c8a46f\"" Dec 13 02:08:06.830723 env[1205]: time="2024-12-13T02:08:06.830686300Z" level=info msg="StartContainer for \"968e4426e668dc52458d2d2f91f15df0aef432c604305ff6b6977dd287c8a46f\"" Dec 13 02:08:06.844174 systemd[1]: Started cri-containerd-968e4426e668dc52458d2d2f91f15df0aef432c604305ff6b6977dd287c8a46f.scope. Dec 13 02:08:07.062722 env[1205]: time="2024-12-13T02:08:07.062443607Z" level=info msg="StartContainer for \"968e4426e668dc52458d2d2f91f15df0aef432c604305ff6b6977dd287c8a46f\" returns successfully" Dec 13 02:08:07.184966 kubelet[1421]: I1213 02:08:07.184910 1421 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 02:08:07.367251 kernel: Initializing XFRM netlink socket Dec 13 02:08:07.414921 kubelet[1421]: E1213 02:08:07.414882 1421 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:07.429311 kubelet[1421]: E1213 02:08:07.429270 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:07.808904 kubelet[1421]: E1213 02:08:07.808790 1421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:08:07.823136 kubelet[1421]: I1213 02:08:07.823074 1421 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kv6tr" podStartSLOduration=7.525843257 podStartE2EDuration="20.823058642s" podCreationTimestamp="2024-12-13 02:07:47 +0000 UTC" firstStartedPulling="2024-12-13 02:07:49.638610461 +0000 UTC m=+2.553286194" lastFinishedPulling="2024-12-13 02:08:02.935825846 +0000 UTC m=+15.850501579" observedRunningTime="2024-12-13 02:08:07.822638775 +0000 UTC m=+20.737314508" watchObservedRunningTime="2024-12-13 02:08:07.823058642 +0000 UTC m=+20.737734375" Dec 13 02:08:08.430396 kubelet[1421]: E1213 02:08:08.430353 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:08.810017 kubelet[1421]: E1213 02:08:08.809903 1421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:08:09.093607 systemd-networkd[1039]: cilium_host: Link UP Dec 13 02:08:09.093710 systemd-networkd[1039]: cilium_net: Link UP Dec 13 02:08:09.093823 systemd-networkd[1039]: cilium_net: Gained carrier Dec 13 
02:08:09.094824 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Dec 13 02:08:09.094866 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Dec 13 02:08:09.094981 systemd-networkd[1039]: cilium_host: Gained carrier Dec 13 02:08:09.159386 systemd-networkd[1039]: cilium_vxlan: Link UP Dec 13 02:08:09.159395 systemd-networkd[1039]: cilium_vxlan: Gained carrier Dec 13 02:08:09.172922 kubelet[1421]: I1213 02:08:09.172867 1421 topology_manager.go:215] "Topology Admit Handler" podUID="0049a197-cedc-4331-9a61-c785af023932" podNamespace="default" podName="nginx-deployment-85f456d6dd-qg8j4" Dec 13 02:08:09.177603 systemd[1]: Created slice kubepods-besteffort-pod0049a197_cedc_4331_9a61_c785af023932.slice. Dec 13 02:08:09.352317 kubelet[1421]: I1213 02:08:09.352161 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mflc\" (UniqueName: \"kubernetes.io/projected/0049a197-cedc-4331-9a61-c785af023932-kube-api-access-4mflc\") pod \"nginx-deployment-85f456d6dd-qg8j4\" (UID: \"0049a197-cedc-4331-9a61-c785af023932\") " pod="default/nginx-deployment-85f456d6dd-qg8j4" Dec 13 02:08:09.430818 kubelet[1421]: E1213 02:08:09.430764 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:09.550423 systemd-networkd[1039]: cilium_host: Gained IPv6LL Dec 13 02:08:09.628265 kernel: NET: Registered PF_ALG protocol family Dec 13 02:08:09.780863 env[1205]: time="2024-12-13T02:08:09.780815670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-qg8j4,Uid:0049a197-cedc-4331-9a61-c785af023932,Namespace:default,Attempt:0,}" Dec 13 02:08:09.811759 kubelet[1421]: E1213 02:08:09.811728 1421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:08:09.847345 systemd-networkd[1039]: cilium_net: Gained IPv6LL Dec 13 02:08:10.295350 systemd-networkd[1039]: cilium_vxlan: Gained IPv6LL Dec 13 02:08:10.306130 systemd-networkd[1039]: lxc_health: Link UP Dec 13 02:08:10.313241 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 02:08:10.313553 systemd-networkd[1039]: lxc_health: Gained carrier Dec 13 02:08:10.431126 kubelet[1421]: E1213 02:08:10.431074 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:10.813404 kubelet[1421]: E1213 02:08:10.813376 1421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:08:10.823161 systemd-networkd[1039]: lxcf31d61370b8c: Link UP Dec 13 02:08:10.834333 kernel: eth0: renamed from tmp8ee71 Dec 13 02:08:10.840247 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcf31d61370b8c: link becomes ready Dec 13 02:08:10.840124 systemd-networkd[1039]: lxcf31d61370b8c: Gained carrier Dec 13 02:08:11.431662 kubelet[1421]: E1213 02:08:11.431592 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:11.816012 kubelet[1421]: E1213 02:08:11.815695 1421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:08:12.226111 systemd-networkd[1039]: lxc_health: 
Gained IPv6LL Dec 13 02:08:12.432129 kubelet[1421]: E1213 02:08:12.432057 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:12.598365 systemd-networkd[1039]: lxcf31d61370b8c: Gained IPv6LL Dec 13 02:08:12.816995 kubelet[1421]: E1213 02:08:12.816950 1421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:08:13.432820 kubelet[1421]: E1213 02:08:13.432785 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:14.184282 env[1205]: time="2024-12-13T02:08:14.184207840Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:08:14.184282 env[1205]: time="2024-12-13T02:08:14.184251573Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:08:14.184282 env[1205]: time="2024-12-13T02:08:14.184260711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:08:14.184611 env[1205]: time="2024-12-13T02:08:14.184424986Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8ee71e5a66270ec2dab998ebb424c499d15775e0a1a51c0192fb600930deb943 pid=2487 runtime=io.containerd.runc.v2 Dec 13 02:08:14.196615 systemd[1]: Started cri-containerd-8ee71e5a66270ec2dab998ebb424c499d15775e0a1a51c0192fb600930deb943.scope. Dec 13 02:08:14.206457 systemd-resolved[1149]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 02:08:14.224975 env[1205]: time="2024-12-13T02:08:14.224938649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-qg8j4,Uid:0049a197-cedc-4331-9a61-c785af023932,Namespace:default,Attempt:0,} returns sandbox id \"8ee71e5a66270ec2dab998ebb424c499d15775e0a1a51c0192fb600930deb943\"" Dec 13 02:08:14.226574 env[1205]: time="2024-12-13T02:08:14.226551623Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 02:08:14.433689 kubelet[1421]: E1213 02:08:14.433629 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:15.433929 kubelet[1421]: E1213 02:08:15.433875 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:16.435087 kubelet[1421]: E1213 02:08:16.435018 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:17.435450 kubelet[1421]: E1213 02:08:17.435395 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:17.725200 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount341432233.mount: Deactivated successfully. 
Dec 13 02:08:18.436409 kubelet[1421]: E1213 02:08:18.436309 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:19.437240 kubelet[1421]: E1213 02:08:19.436954 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:20.437373 kubelet[1421]: E1213 02:08:20.437305 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:20.781574 update_engine[1198]: I1213 02:08:20.781464 1198 update_attempter.cc:509] Updating boot flags... Dec 13 02:08:21.437781 kubelet[1421]: E1213 02:08:21.437739 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:22.438244 kubelet[1421]: E1213 02:08:22.438171 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:23.438306 kubelet[1421]: E1213 02:08:23.438271 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:23.536699 env[1205]: time="2024-12-13T02:08:23.536659387Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:23.729555 env[1205]: time="2024-12-13T02:08:23.729432596Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:23.799688 env[1205]: time="2024-12-13T02:08:23.799651944Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:23.836017 env[1205]: time="2024-12-13T02:08:23.835984841Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:23.836688 env[1205]: time="2024-12-13T02:08:23.836661106Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 02:08:23.838349 env[1205]: time="2024-12-13T02:08:23.838328734Z" level=info msg="CreateContainer within sandbox \"8ee71e5a66270ec2dab998ebb424c499d15775e0a1a51c0192fb600930deb943\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Dec 13 02:08:24.264817 env[1205]: time="2024-12-13T02:08:24.264762046Z" level=info msg="CreateContainer within sandbox \"8ee71e5a66270ec2dab998ebb424c499d15775e0a1a51c0192fb600930deb943\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"3419af6bdc541bf52195c9cabccaba08df6b914a17f1014b9a3868ced080e793\"" Dec 13 02:08:24.265530 env[1205]: time="2024-12-13T02:08:24.265485990Z" level=info msg="StartContainer for \"3419af6bdc541bf52195c9cabccaba08df6b914a17f1014b9a3868ced080e793\"" Dec 13 02:08:24.280642 systemd[1]: Started cri-containerd-3419af6bdc541bf52195c9cabccaba08df6b914a17f1014b9a3868ced080e793.scope. 
Dec 13 02:08:24.438453 kubelet[1421]: E1213 02:08:24.438400 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:24.474539 env[1205]: time="2024-12-13T02:08:24.474459377Z" level=info msg="StartContainer for \"3419af6bdc541bf52195c9cabccaba08df6b914a17f1014b9a3868ced080e793\" returns successfully" Dec 13 02:08:24.889858 kubelet[1421]: I1213 02:08:24.889790 1421 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-qg8j4" podStartSLOduration=6.278699095 podStartE2EDuration="15.889773744s" podCreationTimestamp="2024-12-13 02:08:09 +0000 UTC" firstStartedPulling="2024-12-13 02:08:14.226329267 +0000 UTC m=+27.141005001" lastFinishedPulling="2024-12-13 02:08:23.837403917 +0000 UTC m=+36.752079650" observedRunningTime="2024-12-13 02:08:24.889638357 +0000 UTC m=+37.804314090" watchObservedRunningTime="2024-12-13 02:08:24.889773744 +0000 UTC m=+37.804449477" Dec 13 02:08:25.438574 kubelet[1421]: E1213 02:08:25.438517 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:26.439159 kubelet[1421]: E1213 02:08:26.439060 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:27.414275 kubelet[1421]: E1213 02:08:27.414188 1421 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:27.439678 kubelet[1421]: E1213 02:08:27.439601 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:28.440241 kubelet[1421]: E1213 02:08:28.440152 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:29.440817 kubelet[1421]: E1213 02:08:29.440772 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:30.440925 kubelet[1421]: E1213 02:08:30.440871 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:31.441543 kubelet[1421]: E1213 02:08:31.441465 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:32.442359 kubelet[1421]: E1213 02:08:32.442308 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:32.761702 kubelet[1421]: I1213 02:08:32.761658 1421 topology_manager.go:215] "Topology Admit Handler" podUID="3ff97efd-13c8-48d9-9a1f-18b8a8c59a7e" podNamespace="default" podName="nfs-server-provisioner-0" Dec 13 02:08:32.765900 systemd[1]: Created slice kubepods-besteffort-pod3ff97efd_13c8_48d9_9a1f_18b8a8c59a7e.slice. 
Dec 13 02:08:32.894625 kubelet[1421]: I1213 02:08:32.894570 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/3ff97efd-13c8-48d9-9a1f-18b8a8c59a7e-data\") pod \"nfs-server-provisioner-0\" (UID: \"3ff97efd-13c8-48d9-9a1f-18b8a8c59a7e\") " pod="default/nfs-server-provisioner-0" Dec 13 02:08:32.894625 kubelet[1421]: I1213 02:08:32.894612 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm2lf\" (UniqueName: \"kubernetes.io/projected/3ff97efd-13c8-48d9-9a1f-18b8a8c59a7e-kube-api-access-nm2lf\") pod \"nfs-server-provisioner-0\" (UID: \"3ff97efd-13c8-48d9-9a1f-18b8a8c59a7e\") " pod="default/nfs-server-provisioner-0" Dec 13 02:08:33.068894 env[1205]: time="2024-12-13T02:08:33.068729905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:3ff97efd-13c8-48d9-9a1f-18b8a8c59a7e,Namespace:default,Attempt:0,}" Dec 13 02:08:33.129797 systemd-networkd[1039]: lxc8edd34efb474: Link UP Dec 13 02:08:33.140259 kernel: eth0: renamed from tmpceb18 Dec 13 02:08:33.150492 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 02:08:33.150614 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc8edd34efb474: link becomes ready Dec 13 02:08:33.151760 systemd-networkd[1039]: lxc8edd34efb474: Gained carrier Dec 13 02:08:33.409462 env[1205]: time="2024-12-13T02:08:33.408800441Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:08:33.409650 env[1205]: time="2024-12-13T02:08:33.408903055Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:08:33.409650 env[1205]: time="2024-12-13T02:08:33.408977756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:08:33.409650 env[1205]: time="2024-12-13T02:08:33.409350179Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ceb1835fd281164d89075245a8950cf6509f587cb981936fa9d3a81892ea45d4 pid=2634 runtime=io.containerd.runc.v2 Dec 13 02:08:33.429231 systemd[1]: Started cri-containerd-ceb1835fd281164d89075245a8950cf6509f587cb981936fa9d3a81892ea45d4.scope. 
Dec 13 02:08:33.440573 systemd-resolved[1149]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 02:08:33.442662 kubelet[1421]: E1213 02:08:33.442594 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:33.459644 env[1205]: time="2024-12-13T02:08:33.459600395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:3ff97efd-13c8-48d9-9a1f-18b8a8c59a7e,Namespace:default,Attempt:0,} returns sandbox id \"ceb1835fd281164d89075245a8950cf6509f587cb981936fa9d3a81892ea45d4\"" Dec 13 02:08:33.461504 env[1205]: time="2024-12-13T02:08:33.461460236Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Dec 13 02:08:34.443350 kubelet[1421]: E1213 02:08:34.443300 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:34.870353 systemd-networkd[1039]: lxc8edd34efb474: Gained IPv6LL Dec 13 02:08:35.443948 kubelet[1421]: E1213 02:08:35.443896 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:36.444563 kubelet[1421]: E1213 02:08:36.444503 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:36.554874 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount871941229.mount: Deactivated successfully. Dec 13 02:08:37.444706 kubelet[1421]: E1213 02:08:37.444646 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:38.445093 kubelet[1421]: E1213 02:08:38.445049 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:39.180560 env[1205]: time="2024-12-13T02:08:39.180499594Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:39.182810 env[1205]: time="2024-12-13T02:08:39.182778047Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:39.185017 env[1205]: time="2024-12-13T02:08:39.184982160Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:39.186673 env[1205]: time="2024-12-13T02:08:39.186625235Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:39.187301 env[1205]: time="2024-12-13T02:08:39.187265792Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Dec 13 02:08:39.189562 env[1205]: time="2024-12-13T02:08:39.189520730Z" level=info msg="CreateContainer within sandbox \"ceb1835fd281164d89075245a8950cf6509f587cb981936fa9d3a81892ea45d4\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Dec 13 
02:08:39.201529 env[1205]: time="2024-12-13T02:08:39.201493257Z" level=info msg="CreateContainer within sandbox \"ceb1835fd281164d89075245a8950cf6509f587cb981936fa9d3a81892ea45d4\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"9884cd0941e07d90933b6d6dea8fba24174693ae98b869e6202adf98cdea0edf\"" Dec 13 02:08:39.201852 env[1205]: time="2024-12-13T02:08:39.201828709Z" level=info msg="StartContainer for \"9884cd0941e07d90933b6d6dea8fba24174693ae98b869e6202adf98cdea0edf\"" Dec 13 02:08:39.217771 systemd[1]: Started cri-containerd-9884cd0941e07d90933b6d6dea8fba24174693ae98b869e6202adf98cdea0edf.scope. Dec 13 02:08:39.237722 env[1205]: time="2024-12-13T02:08:39.237675333Z" level=info msg="StartContainer for \"9884cd0941e07d90933b6d6dea8fba24174693ae98b869e6202adf98cdea0edf\" returns successfully" Dec 13 02:08:39.445377 kubelet[1421]: E1213 02:08:39.445262 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:39.874978 kubelet[1421]: I1213 02:08:39.874905 1421 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.147781364 podStartE2EDuration="7.874872674s" podCreationTimestamp="2024-12-13 02:08:32 +0000 UTC" firstStartedPulling="2024-12-13 02:08:33.461142776 +0000 UTC m=+46.375818509" lastFinishedPulling="2024-12-13 02:08:39.188234096 +0000 UTC m=+52.102909819" observedRunningTime="2024-12-13 02:08:39.874242597 +0000 UTC m=+52.788918350" watchObservedRunningTime="2024-12-13 02:08:39.874872674 +0000 UTC m=+52.789548408" Dec 13 02:08:40.446363 kubelet[1421]: E1213 02:08:40.446291 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:41.447375 kubelet[1421]: E1213 02:08:41.447312 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:42.448160 kubelet[1421]: E1213 02:08:42.448108 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:43.449229 kubelet[1421]: E1213 02:08:43.449179 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:44.449434 kubelet[1421]: E1213 02:08:44.449388 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:45.449897 kubelet[1421]: E1213 02:08:45.449851 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:46.450106 kubelet[1421]: E1213 02:08:46.450068 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:47.414342 kubelet[1421]: E1213 02:08:47.414299 1421 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:47.451134 kubelet[1421]: E1213 02:08:47.451102 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:48.451746 kubelet[1421]: E1213 02:08:48.451667 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:49.400829 kubelet[1421]: I1213 02:08:49.400766 1421 topology_manager.go:215] "Topology Admit Handler" 
podUID="0a25506f-b2e3-4c9a-bb01-f2d2154261ec" podNamespace="default" podName="test-pod-1" Dec 13 02:08:49.404877 systemd[1]: Created slice kubepods-besteffort-pod0a25506f_b2e3_4c9a_bb01_f2d2154261ec.slice. Dec 13 02:08:49.452525 kubelet[1421]: E1213 02:08:49.452484 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:49.570089 kubelet[1421]: I1213 02:08:49.570044 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-291d85a0-8f5f-4ed7-84c8-26a4074cd284\" (UniqueName: \"kubernetes.io/nfs/0a25506f-b2e3-4c9a-bb01-f2d2154261ec-pvc-291d85a0-8f5f-4ed7-84c8-26a4074cd284\") pod \"test-pod-1\" (UID: \"0a25506f-b2e3-4c9a-bb01-f2d2154261ec\") " pod="default/test-pod-1" Dec 13 02:08:49.570089 kubelet[1421]: I1213 02:08:49.570099 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d88rb\" (UniqueName: \"kubernetes.io/projected/0a25506f-b2e3-4c9a-bb01-f2d2154261ec-kube-api-access-d88rb\") pod \"test-pod-1\" (UID: \"0a25506f-b2e3-4c9a-bb01-f2d2154261ec\") " pod="default/test-pod-1" Dec 13 02:08:49.691249 kernel: FS-Cache: Loaded Dec 13 02:08:49.731766 kernel: RPC: Registered named UNIX socket transport module. Dec 13 02:08:49.731823 kernel: RPC: Registered udp transport module. Dec 13 02:08:49.731841 kernel: RPC: Registered tcp transport module. Dec 13 02:08:49.732539 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Dec 13 02:08:49.789245 kernel: FS-Cache: Netfs 'nfs' registered for caching Dec 13 02:08:49.970865 kernel: NFS: Registering the id_resolver key type Dec 13 02:08:49.971025 kernel: Key type id_resolver registered Dec 13 02:08:49.971047 kernel: Key type id_legacy registered Dec 13 02:08:49.992978 nfsidmap[2755]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Dec 13 02:08:49.995914 nfsidmap[2758]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Dec 13 02:08:50.007772 env[1205]: time="2024-12-13T02:08:50.007737602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:0a25506f-b2e3-4c9a-bb01-f2d2154261ec,Namespace:default,Attempt:0,}" Dec 13 02:08:50.183563 systemd-networkd[1039]: lxcbbbc7945da57: Link UP Dec 13 02:08:50.192259 kernel: eth0: renamed from tmp67bca Dec 13 02:08:50.200734 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 02:08:50.200826 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcbbbc7945da57: link becomes ready Dec 13 02:08:50.200851 systemd-networkd[1039]: lxcbbbc7945da57: Gained carrier Dec 13 02:08:50.453163 kubelet[1421]: E1213 02:08:50.453108 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:50.557888 env[1205]: time="2024-12-13T02:08:50.557796543Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:08:50.557888 env[1205]: time="2024-12-13T02:08:50.557837630Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:08:50.557888 env[1205]: time="2024-12-13T02:08:50.557848230Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:08:50.558184 env[1205]: time="2024-12-13T02:08:50.557998301Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/67bca960376d4816e74482e02368d47f07f8c043323ce5e6ef0aca8864432aac pid=2794 runtime=io.containerd.runc.v2 Dec 13 02:08:50.567679 systemd[1]: Started cri-containerd-67bca960376d4816e74482e02368d47f07f8c043323ce5e6ef0aca8864432aac.scope. Dec 13 02:08:50.577522 systemd-resolved[1149]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 02:08:50.597575 env[1205]: time="2024-12-13T02:08:50.597539333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:0a25506f-b2e3-4c9a-bb01-f2d2154261ec,Namespace:default,Attempt:0,} returns sandbox id \"67bca960376d4816e74482e02368d47f07f8c043323ce5e6ef0aca8864432aac\"" Dec 13 02:08:50.599657 env[1205]: time="2024-12-13T02:08:50.599628269Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 02:08:50.949863 env[1205]: time="2024-12-13T02:08:50.949801715Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:50.951819 env[1205]: time="2024-12-13T02:08:50.951780895Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:50.953466 env[1205]: time="2024-12-13T02:08:50.953435204Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:50.954856 env[1205]: time="2024-12-13T02:08:50.954816711Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:50.955580 env[1205]: time="2024-12-13T02:08:50.955551192Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 02:08:50.957679 env[1205]: time="2024-12-13T02:08:50.957645388Z" level=info msg="CreateContainer within sandbox \"67bca960376d4816e74482e02368d47f07f8c043323ce5e6ef0aca8864432aac\" for container &ContainerMetadata{Name:test,Attempt:0,}" Dec 13 02:08:50.971459 env[1205]: time="2024-12-13T02:08:50.971417033Z" level=info msg="CreateContainer within sandbox \"67bca960376d4816e74482e02368d47f07f8c043323ce5e6ef0aca8864432aac\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"7503e8fef67a9e0c294bae7d2781c2ad2f24d1a6d683f2c37edd82f895e5f05f\"" Dec 13 02:08:50.971799 env[1205]: time="2024-12-13T02:08:50.971773693Z" level=info msg="StartContainer for \"7503e8fef67a9e0c294bae7d2781c2ad2f24d1a6d683f2c37edd82f895e5f05f\"" Dec 13 02:08:50.987171 systemd[1]: Started cri-containerd-7503e8fef67a9e0c294bae7d2781c2ad2f24d1a6d683f2c37edd82f895e5f05f.scope. 
Dec 13 02:08:51.008921 env[1205]: time="2024-12-13T02:08:51.008870379Z" level=info msg="StartContainer for \"7503e8fef67a9e0c294bae7d2781c2ad2f24d1a6d683f2c37edd82f895e5f05f\" returns successfully" Dec 13 02:08:51.453446 kubelet[1421]: E1213 02:08:51.453394 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:52.022360 systemd-networkd[1039]: lxcbbbc7945da57: Gained IPv6LL Dec 13 02:08:52.454031 kubelet[1421]: E1213 02:08:52.453921 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:53.454087 kubelet[1421]: E1213 02:08:53.454043 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:54.454415 kubelet[1421]: E1213 02:08:54.454364 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:55.150342 kubelet[1421]: I1213 02:08:55.150262 1421 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=22.79276311 podStartE2EDuration="23.150240163s" podCreationTimestamp="2024-12-13 02:08:32 +0000 UTC" firstStartedPulling="2024-12-13 02:08:50.59901654 +0000 UTC m=+63.513692273" lastFinishedPulling="2024-12-13 02:08:50.956493593 +0000 UTC m=+63.871169326" observedRunningTime="2024-12-13 02:08:51.892923811 +0000 UTC m=+64.807599564" watchObservedRunningTime="2024-12-13 02:08:55.150240163 +0000 UTC m=+68.064915896" Dec 13 02:08:55.177264 env[1205]: time="2024-12-13T02:08:55.177191720Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 02:08:55.183704 env[1205]: time="2024-12-13T02:08:55.183663707Z" level=info msg="StopContainer for \"968e4426e668dc52458d2d2f91f15df0aef432c604305ff6b6977dd287c8a46f\" with timeout 2 (s)" Dec 13 02:08:55.184002 env[1205]: time="2024-12-13T02:08:55.183961126Z" level=info msg="Stop container \"968e4426e668dc52458d2d2f91f15df0aef432c604305ff6b6977dd287c8a46f\" with signal terminated" Dec 13 02:08:55.190528 systemd-networkd[1039]: lxc_health: Link DOWN Dec 13 02:08:55.190539 systemd-networkd[1039]: lxc_health: Lost carrier Dec 13 02:08:55.225737 systemd[1]: cri-containerd-968e4426e668dc52458d2d2f91f15df0aef432c604305ff6b6977dd287c8a46f.scope: Deactivated successfully. Dec 13 02:08:55.226054 systemd[1]: cri-containerd-968e4426e668dc52458d2d2f91f15df0aef432c604305ff6b6977dd287c8a46f.scope: Consumed 6.831s CPU time. Dec 13 02:08:55.242603 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-968e4426e668dc52458d2d2f91f15df0aef432c604305ff6b6977dd287c8a46f-rootfs.mount: Deactivated successfully. 
Dec 13 02:08:55.454631 kubelet[1421]: E1213 02:08:55.454483 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:55.495824 env[1205]: time="2024-12-13T02:08:55.495768776Z" level=info msg="shim disconnected" id=968e4426e668dc52458d2d2f91f15df0aef432c604305ff6b6977dd287c8a46f Dec 13 02:08:55.495824 env[1205]: time="2024-12-13T02:08:55.495814792Z" level=warning msg="cleaning up after shim disconnected" id=968e4426e668dc52458d2d2f91f15df0aef432c604305ff6b6977dd287c8a46f namespace=k8s.io Dec 13 02:08:55.495824 env[1205]: time="2024-12-13T02:08:55.495823598Z" level=info msg="cleaning up dead shim" Dec 13 02:08:55.501626 env[1205]: time="2024-12-13T02:08:55.501574551Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:08:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2926 runtime=io.containerd.runc.v2\n" Dec 13 02:08:55.608614 env[1205]: time="2024-12-13T02:08:55.608526702Z" level=info msg="StopContainer for \"968e4426e668dc52458d2d2f91f15df0aef432c604305ff6b6977dd287c8a46f\" returns successfully" Dec 13 02:08:55.609200 env[1205]: time="2024-12-13T02:08:55.609157067Z" level=info msg="StopPodSandbox for \"685ffc9cdff269c367e2a24dce930e21b1948c0e7815714bc398358dd42b4459\"" Dec 13 02:08:55.609287 env[1205]: time="2024-12-13T02:08:55.609243308Z" level=info msg="Container to stop \"ca196953f4f6ef78995b8906699fc84232f0f185b503827efb29c75c111237cf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:08:55.609287 env[1205]: time="2024-12-13T02:08:55.609259790Z" level=info msg="Container to stop \"ac6b965f702f804ffdb437eb565b06d4c68264ae3c2d799e07f490e5ddcce645\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:08:55.609287 env[1205]: time="2024-12-13T02:08:55.609270580Z" level=info msg="Container to stop \"5d5b302d0d3c326602cb987c1895aa8c09baa7a2976ac5b02b9082c336eb9009\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:08:55.609287 env[1205]: time="2024-12-13T02:08:55.609281010Z" level=info msg="Container to stop \"8c15e46dba9736c8c20ead3bc2e09da38e91aafc769755b02069d369011a08c9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:08:55.609436 env[1205]: time="2024-12-13T02:08:55.609291810Z" level=info msg="Container to stop \"968e4426e668dc52458d2d2f91f15df0aef432c604305ff6b6977dd287c8a46f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:08:55.611008 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-685ffc9cdff269c367e2a24dce930e21b1948c0e7815714bc398358dd42b4459-shm.mount: Deactivated successfully. Dec 13 02:08:55.615516 systemd[1]: cri-containerd-685ffc9cdff269c367e2a24dce930e21b1948c0e7815714bc398358dd42b4459.scope: Deactivated successfully. Dec 13 02:08:55.634314 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-685ffc9cdff269c367e2a24dce930e21b1948c0e7815714bc398358dd42b4459-rootfs.mount: Deactivated successfully. 
Dec 13 02:08:55.638496 env[1205]: time="2024-12-13T02:08:55.638433591Z" level=info msg="shim disconnected" id=685ffc9cdff269c367e2a24dce930e21b1948c0e7815714bc398358dd42b4459 Dec 13 02:08:55.638496 env[1205]: time="2024-12-13T02:08:55.638481361Z" level=warning msg="cleaning up after shim disconnected" id=685ffc9cdff269c367e2a24dce930e21b1948c0e7815714bc398358dd42b4459 namespace=k8s.io Dec 13 02:08:55.638496 env[1205]: time="2024-12-13T02:08:55.638491700Z" level=info msg="cleaning up dead shim" Dec 13 02:08:55.644720 env[1205]: time="2024-12-13T02:08:55.644679063Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:08:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2956 runtime=io.containerd.runc.v2\n" Dec 13 02:08:55.644979 env[1205]: time="2024-12-13T02:08:55.644940805Z" level=info msg="TearDown network for sandbox \"685ffc9cdff269c367e2a24dce930e21b1948c0e7815714bc398358dd42b4459\" successfully" Dec 13 02:08:55.645032 env[1205]: time="2024-12-13T02:08:55.644964239Z" level=info msg="StopPodSandbox for \"685ffc9cdff269c367e2a24dce930e21b1948c0e7815714bc398358dd42b4459\" returns successfully" Dec 13 02:08:55.805545 kubelet[1421]: I1213 02:08:55.805470 1421 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/64a69962-a54c-47c8-9317-23f7ce013b1e-host-proc-sys-kernel\") pod \"64a69962-a54c-47c8-9317-23f7ce013b1e\" (UID: \"64a69962-a54c-47c8-9317-23f7ce013b1e\") " Dec 13 02:08:55.805545 kubelet[1421]: I1213 02:08:55.805521 1421 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/64a69962-a54c-47c8-9317-23f7ce013b1e-cilium-config-path\") pod \"64a69962-a54c-47c8-9317-23f7ce013b1e\" (UID: \"64a69962-a54c-47c8-9317-23f7ce013b1e\") " Dec 13 02:08:55.805545 kubelet[1421]: I1213 02:08:55.805539 1421 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hnn9d\" (UniqueName: \"kubernetes.io/projected/64a69962-a54c-47c8-9317-23f7ce013b1e-kube-api-access-hnn9d\") pod \"64a69962-a54c-47c8-9317-23f7ce013b1e\" (UID: \"64a69962-a54c-47c8-9317-23f7ce013b1e\") " Dec 13 02:08:55.805545 kubelet[1421]: I1213 02:08:55.805553 1421 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/64a69962-a54c-47c8-9317-23f7ce013b1e-hostproc\") pod \"64a69962-a54c-47c8-9317-23f7ce013b1e\" (UID: \"64a69962-a54c-47c8-9317-23f7ce013b1e\") " Dec 13 02:08:55.805799 kubelet[1421]: I1213 02:08:55.805568 1421 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/64a69962-a54c-47c8-9317-23f7ce013b1e-etc-cni-netd\") pod \"64a69962-a54c-47c8-9317-23f7ce013b1e\" (UID: \"64a69962-a54c-47c8-9317-23f7ce013b1e\") " Dec 13 02:08:55.805799 kubelet[1421]: I1213 02:08:55.805580 1421 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/64a69962-a54c-47c8-9317-23f7ce013b1e-xtables-lock\") pod \"64a69962-a54c-47c8-9317-23f7ce013b1e\" (UID: \"64a69962-a54c-47c8-9317-23f7ce013b1e\") " Dec 13 02:08:55.805799 kubelet[1421]: I1213 02:08:55.805594 1421 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/64a69962-a54c-47c8-9317-23f7ce013b1e-clustermesh-secrets\") pod 
\"64a69962-a54c-47c8-9317-23f7ce013b1e\" (UID: \"64a69962-a54c-47c8-9317-23f7ce013b1e\") " Dec 13 02:08:55.805799 kubelet[1421]: I1213 02:08:55.805607 1421 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/64a69962-a54c-47c8-9317-23f7ce013b1e-hubble-tls\") pod \"64a69962-a54c-47c8-9317-23f7ce013b1e\" (UID: \"64a69962-a54c-47c8-9317-23f7ce013b1e\") " Dec 13 02:08:55.805799 kubelet[1421]: I1213 02:08:55.805619 1421 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/64a69962-a54c-47c8-9317-23f7ce013b1e-cilium-run\") pod \"64a69962-a54c-47c8-9317-23f7ce013b1e\" (UID: \"64a69962-a54c-47c8-9317-23f7ce013b1e\") " Dec 13 02:08:55.805799 kubelet[1421]: I1213 02:08:55.805632 1421 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/64a69962-a54c-47c8-9317-23f7ce013b1e-lib-modules\") pod \"64a69962-a54c-47c8-9317-23f7ce013b1e\" (UID: \"64a69962-a54c-47c8-9317-23f7ce013b1e\") " Dec 13 02:08:55.805935 kubelet[1421]: I1213 02:08:55.805616 1421 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/64a69962-a54c-47c8-9317-23f7ce013b1e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "64a69962-a54c-47c8-9317-23f7ce013b1e" (UID: "64a69962-a54c-47c8-9317-23f7ce013b1e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:08:55.805935 kubelet[1421]: I1213 02:08:55.805663 1421 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/64a69962-a54c-47c8-9317-23f7ce013b1e-host-proc-sys-net\") pod \"64a69962-a54c-47c8-9317-23f7ce013b1e\" (UID: \"64a69962-a54c-47c8-9317-23f7ce013b1e\") " Dec 13 02:08:55.805935 kubelet[1421]: I1213 02:08:55.805676 1421 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/64a69962-a54c-47c8-9317-23f7ce013b1e-cilium-cgroup\") pod \"64a69962-a54c-47c8-9317-23f7ce013b1e\" (UID: \"64a69962-a54c-47c8-9317-23f7ce013b1e\") " Dec 13 02:08:55.805935 kubelet[1421]: I1213 02:08:55.805688 1421 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/64a69962-a54c-47c8-9317-23f7ce013b1e-bpf-maps\") pod \"64a69962-a54c-47c8-9317-23f7ce013b1e\" (UID: \"64a69962-a54c-47c8-9317-23f7ce013b1e\") " Dec 13 02:08:55.805935 kubelet[1421]: I1213 02:08:55.805706 1421 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/64a69962-a54c-47c8-9317-23f7ce013b1e-cni-path\") pod \"64a69962-a54c-47c8-9317-23f7ce013b1e\" (UID: \"64a69962-a54c-47c8-9317-23f7ce013b1e\") " Dec 13 02:08:55.805935 kubelet[1421]: I1213 02:08:55.805733 1421 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/64a69962-a54c-47c8-9317-23f7ce013b1e-host-proc-sys-kernel\") on node \"10.0.0.138\" DevicePath \"\"" Dec 13 02:08:55.806273 kubelet[1421]: I1213 02:08:55.805753 1421 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/64a69962-a54c-47c8-9317-23f7ce013b1e-cni-path" (OuterVolumeSpecName: "cni-path") pod "64a69962-a54c-47c8-9317-23f7ce013b1e" (UID: 
"64a69962-a54c-47c8-9317-23f7ce013b1e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:08:55.806273 kubelet[1421]: I1213 02:08:55.806123 1421 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/64a69962-a54c-47c8-9317-23f7ce013b1e-hostproc" (OuterVolumeSpecName: "hostproc") pod "64a69962-a54c-47c8-9317-23f7ce013b1e" (UID: "64a69962-a54c-47c8-9317-23f7ce013b1e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:08:55.806619 kubelet[1421]: I1213 02:08:55.806594 1421 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/64a69962-a54c-47c8-9317-23f7ce013b1e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "64a69962-a54c-47c8-9317-23f7ce013b1e" (UID: "64a69962-a54c-47c8-9317-23f7ce013b1e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:08:55.806619 kubelet[1421]: I1213 02:08:55.806605 1421 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/64a69962-a54c-47c8-9317-23f7ce013b1e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "64a69962-a54c-47c8-9317-23f7ce013b1e" (UID: "64a69962-a54c-47c8-9317-23f7ce013b1e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:08:55.806692 kubelet[1421]: I1213 02:08:55.806621 1421 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/64a69962-a54c-47c8-9317-23f7ce013b1e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "64a69962-a54c-47c8-9317-23f7ce013b1e" (UID: "64a69962-a54c-47c8-9317-23f7ce013b1e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:08:55.806692 kubelet[1421]: I1213 02:08:55.806617 1421 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/64a69962-a54c-47c8-9317-23f7ce013b1e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "64a69962-a54c-47c8-9317-23f7ce013b1e" (UID: "64a69962-a54c-47c8-9317-23f7ce013b1e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:08:55.806692 kubelet[1421]: I1213 02:08:55.806627 1421 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/64a69962-a54c-47c8-9317-23f7ce013b1e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "64a69962-a54c-47c8-9317-23f7ce013b1e" (UID: "64a69962-a54c-47c8-9317-23f7ce013b1e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:08:55.806692 kubelet[1421]: I1213 02:08:55.806635 1421 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/64a69962-a54c-47c8-9317-23f7ce013b1e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "64a69962-a54c-47c8-9317-23f7ce013b1e" (UID: "64a69962-a54c-47c8-9317-23f7ce013b1e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:08:55.806692 kubelet[1421]: I1213 02:08:55.806648 1421 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/64a69962-a54c-47c8-9317-23f7ce013b1e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "64a69962-a54c-47c8-9317-23f7ce013b1e" (UID: "64a69962-a54c-47c8-9317-23f7ce013b1e"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:08:55.807472 kubelet[1421]: I1213 02:08:55.807445 1421 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64a69962-a54c-47c8-9317-23f7ce013b1e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "64a69962-a54c-47c8-9317-23f7ce013b1e" (UID: "64a69962-a54c-47c8-9317-23f7ce013b1e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 02:08:55.808847 kubelet[1421]: I1213 02:08:55.808821 1421 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64a69962-a54c-47c8-9317-23f7ce013b1e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "64a69962-a54c-47c8-9317-23f7ce013b1e" (UID: "64a69962-a54c-47c8-9317-23f7ce013b1e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 02:08:55.809112 kubelet[1421]: I1213 02:08:55.809079 1421 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64a69962-a54c-47c8-9317-23f7ce013b1e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "64a69962-a54c-47c8-9317-23f7ce013b1e" (UID: "64a69962-a54c-47c8-9317-23f7ce013b1e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:08:55.809177 kubelet[1421]: I1213 02:08:55.809133 1421 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64a69962-a54c-47c8-9317-23f7ce013b1e-kube-api-access-hnn9d" (OuterVolumeSpecName: "kube-api-access-hnn9d") pod "64a69962-a54c-47c8-9317-23f7ce013b1e" (UID: "64a69962-a54c-47c8-9317-23f7ce013b1e"). InnerVolumeSpecName "kube-api-access-hnn9d". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:08:55.810593 systemd[1]: var-lib-kubelet-pods-64a69962\x2da54c\x2d47c8\x2d9317\x2d23f7ce013b1e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhnn9d.mount: Deactivated successfully. Dec 13 02:08:55.810686 systemd[1]: var-lib-kubelet-pods-64a69962\x2da54c\x2d47c8\x2d9317\x2d23f7ce013b1e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 02:08:55.810735 systemd[1]: var-lib-kubelet-pods-64a69962\x2da54c\x2d47c8\x2d9317\x2d23f7ce013b1e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 02:08:55.893031 kubelet[1421]: I1213 02:08:55.892993 1421 scope.go:117] "RemoveContainer" containerID="968e4426e668dc52458d2d2f91f15df0aef432c604305ff6b6977dd287c8a46f" Dec 13 02:08:55.894146 env[1205]: time="2024-12-13T02:08:55.894107487Z" level=info msg="RemoveContainer for \"968e4426e668dc52458d2d2f91f15df0aef432c604305ff6b6977dd287c8a46f\"" Dec 13 02:08:55.896167 systemd[1]: Removed slice kubepods-burstable-pod64a69962_a54c_47c8_9317_23f7ce013b1e.slice. Dec 13 02:08:55.896250 systemd[1]: kubepods-burstable-pod64a69962_a54c_47c8_9317_23f7ce013b1e.slice: Consumed 7.248s CPU time. 
Dec 13 02:08:55.897798 env[1205]: time="2024-12-13T02:08:55.897766580Z" level=info msg="RemoveContainer for \"968e4426e668dc52458d2d2f91f15df0aef432c604305ff6b6977dd287c8a46f\" returns successfully" Dec 13 02:08:55.897980 kubelet[1421]: I1213 02:08:55.897951 1421 scope.go:117] "RemoveContainer" containerID="8c15e46dba9736c8c20ead3bc2e09da38e91aafc769755b02069d369011a08c9" Dec 13 02:08:55.899060 env[1205]: time="2024-12-13T02:08:55.899031496Z" level=info msg="RemoveContainer for \"8c15e46dba9736c8c20ead3bc2e09da38e91aafc769755b02069d369011a08c9\"" Dec 13 02:08:55.901772 env[1205]: time="2024-12-13T02:08:55.901738721Z" level=info msg="RemoveContainer for \"8c15e46dba9736c8c20ead3bc2e09da38e91aafc769755b02069d369011a08c9\" returns successfully" Dec 13 02:08:55.901882 kubelet[1421]: I1213 02:08:55.901842 1421 scope.go:117] "RemoveContainer" containerID="ca196953f4f6ef78995b8906699fc84232f0f185b503827efb29c75c111237cf" Dec 13 02:08:55.902544 env[1205]: time="2024-12-13T02:08:55.902521722Z" level=info msg="RemoveContainer for \"ca196953f4f6ef78995b8906699fc84232f0f185b503827efb29c75c111237cf\"" Dec 13 02:08:55.905261 env[1205]: time="2024-12-13T02:08:55.905233536Z" level=info msg="RemoveContainer for \"ca196953f4f6ef78995b8906699fc84232f0f185b503827efb29c75c111237cf\" returns successfully" Dec 13 02:08:55.905398 kubelet[1421]: I1213 02:08:55.905372 1421 scope.go:117] "RemoveContainer" containerID="5d5b302d0d3c326602cb987c1895aa8c09baa7a2976ac5b02b9082c336eb9009" Dec 13 02:08:55.905830 kubelet[1421]: I1213 02:08:55.905812 1421 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/64a69962-a54c-47c8-9317-23f7ce013b1e-hostproc\") on node \"10.0.0.138\" DevicePath \"\"" Dec 13 02:08:55.905830 kubelet[1421]: I1213 02:08:55.905827 1421 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/64a69962-a54c-47c8-9317-23f7ce013b1e-etc-cni-netd\") on node \"10.0.0.138\" DevicePath \"\"" Dec 13 02:08:55.905939 kubelet[1421]: I1213 02:08:55.905835 1421 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/64a69962-a54c-47c8-9317-23f7ce013b1e-xtables-lock\") on node \"10.0.0.138\" DevicePath \"\"" Dec 13 02:08:55.905939 kubelet[1421]: I1213 02:08:55.905845 1421 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/64a69962-a54c-47c8-9317-23f7ce013b1e-cilium-config-path\") on node \"10.0.0.138\" DevicePath \"\"" Dec 13 02:08:55.905939 kubelet[1421]: I1213 02:08:55.905852 1421 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-hnn9d\" (UniqueName: \"kubernetes.io/projected/64a69962-a54c-47c8-9317-23f7ce013b1e-kube-api-access-hnn9d\") on node \"10.0.0.138\" DevicePath \"\"" Dec 13 02:08:55.905939 kubelet[1421]: I1213 02:08:55.905861 1421 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/64a69962-a54c-47c8-9317-23f7ce013b1e-clustermesh-secrets\") on node \"10.0.0.138\" DevicePath \"\"" Dec 13 02:08:55.905939 kubelet[1421]: I1213 02:08:55.905868 1421 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/64a69962-a54c-47c8-9317-23f7ce013b1e-hubble-tls\") on node \"10.0.0.138\" DevicePath \"\"" Dec 13 02:08:55.905939 kubelet[1421]: I1213 02:08:55.905875 1421 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/64a69962-a54c-47c8-9317-23f7ce013b1e-cilium-run\") on node \"10.0.0.138\" DevicePath \"\"" Dec 13 02:08:55.905939 kubelet[1421]: I1213 02:08:55.905881 1421 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/64a69962-a54c-47c8-9317-23f7ce013b1e-lib-modules\") on node \"10.0.0.138\" DevicePath \"\"" Dec 13 02:08:55.905939 kubelet[1421]: I1213 02:08:55.905888 1421 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/64a69962-a54c-47c8-9317-23f7ce013b1e-host-proc-sys-net\") on node \"10.0.0.138\" DevicePath \"\"" Dec 13 02:08:55.906169 kubelet[1421]: I1213 02:08:55.905894 1421 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/64a69962-a54c-47c8-9317-23f7ce013b1e-cilium-cgroup\") on node \"10.0.0.138\" DevicePath \"\"" Dec 13 02:08:55.906169 kubelet[1421]: I1213 02:08:55.905901 1421 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/64a69962-a54c-47c8-9317-23f7ce013b1e-bpf-maps\") on node \"10.0.0.138\" DevicePath \"\"" Dec 13 02:08:55.906169 kubelet[1421]: I1213 02:08:55.905907 1421 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/64a69962-a54c-47c8-9317-23f7ce013b1e-cni-path\") on node \"10.0.0.138\" DevicePath \"\"" Dec 13 02:08:55.906591 env[1205]: time="2024-12-13T02:08:55.906568203Z" level=info msg="RemoveContainer for \"5d5b302d0d3c326602cb987c1895aa8c09baa7a2976ac5b02b9082c336eb9009\"" Dec 13 02:08:55.908942 env[1205]: time="2024-12-13T02:08:55.908908449Z" level=info msg="RemoveContainer for \"5d5b302d0d3c326602cb987c1895aa8c09baa7a2976ac5b02b9082c336eb9009\" returns successfully" Dec 13 02:08:55.909056 kubelet[1421]: I1213 02:08:55.909033 1421 scope.go:117] "RemoveContainer" containerID="ac6b965f702f804ffdb437eb565b06d4c68264ae3c2d799e07f490e5ddcce645" Dec 13 02:08:55.909771 env[1205]: time="2024-12-13T02:08:55.909747686Z" level=info msg="RemoveContainer for \"ac6b965f702f804ffdb437eb565b06d4c68264ae3c2d799e07f490e5ddcce645\"" Dec 13 02:08:55.912007 env[1205]: time="2024-12-13T02:08:55.911966634Z" level=info msg="RemoveContainer for \"ac6b965f702f804ffdb437eb565b06d4c68264ae3c2d799e07f490e5ddcce645\" returns successfully" Dec 13 02:08:55.912091 kubelet[1421]: I1213 02:08:55.912065 1421 scope.go:117] "RemoveContainer" containerID="968e4426e668dc52458d2d2f91f15df0aef432c604305ff6b6977dd287c8a46f" Dec 13 02:08:55.912296 env[1205]: time="2024-12-13T02:08:55.912226252Z" level=error msg="ContainerStatus for \"968e4426e668dc52458d2d2f91f15df0aef432c604305ff6b6977dd287c8a46f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"968e4426e668dc52458d2d2f91f15df0aef432c604305ff6b6977dd287c8a46f\": not found" Dec 13 02:08:55.912454 kubelet[1421]: E1213 02:08:55.912409 1421 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"968e4426e668dc52458d2d2f91f15df0aef432c604305ff6b6977dd287c8a46f\": not found" containerID="968e4426e668dc52458d2d2f91f15df0aef432c604305ff6b6977dd287c8a46f" Dec 13 02:08:55.912540 kubelet[1421]: I1213 02:08:55.912434 1421 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"968e4426e668dc52458d2d2f91f15df0aef432c604305ff6b6977dd287c8a46f"} err="failed to get container 
status \"968e4426e668dc52458d2d2f91f15df0aef432c604305ff6b6977dd287c8a46f\": rpc error: code = NotFound desc = an error occurred when try to find container \"968e4426e668dc52458d2d2f91f15df0aef432c604305ff6b6977dd287c8a46f\": not found" Dec 13 02:08:55.912540 kubelet[1421]: I1213 02:08:55.912508 1421 scope.go:117] "RemoveContainer" containerID="8c15e46dba9736c8c20ead3bc2e09da38e91aafc769755b02069d369011a08c9" Dec 13 02:08:55.912704 env[1205]: time="2024-12-13T02:08:55.912640059Z" level=error msg="ContainerStatus for \"8c15e46dba9736c8c20ead3bc2e09da38e91aafc769755b02069d369011a08c9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8c15e46dba9736c8c20ead3bc2e09da38e91aafc769755b02069d369011a08c9\": not found" Dec 13 02:08:55.912763 kubelet[1421]: E1213 02:08:55.912743 1421 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8c15e46dba9736c8c20ead3bc2e09da38e91aafc769755b02069d369011a08c9\": not found" containerID="8c15e46dba9736c8c20ead3bc2e09da38e91aafc769755b02069d369011a08c9" Dec 13 02:08:55.912792 kubelet[1421]: I1213 02:08:55.912759 1421 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8c15e46dba9736c8c20ead3bc2e09da38e91aafc769755b02069d369011a08c9"} err="failed to get container status \"8c15e46dba9736c8c20ead3bc2e09da38e91aafc769755b02069d369011a08c9\": rpc error: code = NotFound desc = an error occurred when try to find container \"8c15e46dba9736c8c20ead3bc2e09da38e91aafc769755b02069d369011a08c9\": not found" Dec 13 02:08:55.912792 kubelet[1421]: I1213 02:08:55.912772 1421 scope.go:117] "RemoveContainer" containerID="ca196953f4f6ef78995b8906699fc84232f0f185b503827efb29c75c111237cf" Dec 13 02:08:55.912920 env[1205]: time="2024-12-13T02:08:55.912885851Z" level=error msg="ContainerStatus for \"ca196953f4f6ef78995b8906699fc84232f0f185b503827efb29c75c111237cf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ca196953f4f6ef78995b8906699fc84232f0f185b503827efb29c75c111237cf\": not found" Dec 13 02:08:55.913008 kubelet[1421]: E1213 02:08:55.912975 1421 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ca196953f4f6ef78995b8906699fc84232f0f185b503827efb29c75c111237cf\": not found" containerID="ca196953f4f6ef78995b8906699fc84232f0f185b503827efb29c75c111237cf" Dec 13 02:08:55.913008 kubelet[1421]: I1213 02:08:55.912988 1421 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ca196953f4f6ef78995b8906699fc84232f0f185b503827efb29c75c111237cf"} err="failed to get container status \"ca196953f4f6ef78995b8906699fc84232f0f185b503827efb29c75c111237cf\": rpc error: code = NotFound desc = an error occurred when try to find container \"ca196953f4f6ef78995b8906699fc84232f0f185b503827efb29c75c111237cf\": not found" Dec 13 02:08:55.913008 kubelet[1421]: I1213 02:08:55.912999 1421 scope.go:117] "RemoveContainer" containerID="5d5b302d0d3c326602cb987c1895aa8c09baa7a2976ac5b02b9082c336eb9009" Dec 13 02:08:55.913185 env[1205]: time="2024-12-13T02:08:55.913129758Z" level=error msg="ContainerStatus for \"5d5b302d0d3c326602cb987c1895aa8c09baa7a2976ac5b02b9082c336eb9009\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"5d5b302d0d3c326602cb987c1895aa8c09baa7a2976ac5b02b9082c336eb9009\": not found" Dec 13 02:08:55.913262 kubelet[1421]: E1213 02:08:55.913248 1421 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5d5b302d0d3c326602cb987c1895aa8c09baa7a2976ac5b02b9082c336eb9009\": not found" containerID="5d5b302d0d3c326602cb987c1895aa8c09baa7a2976ac5b02b9082c336eb9009" Dec 13 02:08:55.913302 kubelet[1421]: I1213 02:08:55.913261 1421 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5d5b302d0d3c326602cb987c1895aa8c09baa7a2976ac5b02b9082c336eb9009"} err="failed to get container status \"5d5b302d0d3c326602cb987c1895aa8c09baa7a2976ac5b02b9082c336eb9009\": rpc error: code = NotFound desc = an error occurred when try to find container \"5d5b302d0d3c326602cb987c1895aa8c09baa7a2976ac5b02b9082c336eb9009\": not found" Dec 13 02:08:55.913302 kubelet[1421]: I1213 02:08:55.913271 1421 scope.go:117] "RemoveContainer" containerID="ac6b965f702f804ffdb437eb565b06d4c68264ae3c2d799e07f490e5ddcce645" Dec 13 02:08:55.913409 env[1205]: time="2024-12-13T02:08:55.913373787Z" level=error msg="ContainerStatus for \"ac6b965f702f804ffdb437eb565b06d4c68264ae3c2d799e07f490e5ddcce645\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ac6b965f702f804ffdb437eb565b06d4c68264ae3c2d799e07f490e5ddcce645\": not found" Dec 13 02:08:55.913483 kubelet[1421]: E1213 02:08:55.913465 1421 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ac6b965f702f804ffdb437eb565b06d4c68264ae3c2d799e07f490e5ddcce645\": not found" containerID="ac6b965f702f804ffdb437eb565b06d4c68264ae3c2d799e07f490e5ddcce645" Dec 13 02:08:55.913539 kubelet[1421]: I1213 02:08:55.913484 1421 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ac6b965f702f804ffdb437eb565b06d4c68264ae3c2d799e07f490e5ddcce645"} err="failed to get container status \"ac6b965f702f804ffdb437eb565b06d4c68264ae3c2d799e07f490e5ddcce645\": rpc error: code = NotFound desc = an error occurred when try to find container \"ac6b965f702f804ffdb437eb565b06d4c68264ae3c2d799e07f490e5ddcce645\": not found" Dec 13 02:08:56.454793 kubelet[1421]: E1213 02:08:56.454721 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:57.455480 kubelet[1421]: E1213 02:08:57.455414 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:57.536125 kubelet[1421]: I1213 02:08:57.536064 1421 topology_manager.go:215] "Topology Admit Handler" podUID="8491fb77-d293-4f41-94bc-e07abadbeb05" podNamespace="kube-system" podName="cilium-bcnfg" Dec 13 02:08:57.536125 kubelet[1421]: E1213 02:08:57.536132 1421 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="64a69962-a54c-47c8-9317-23f7ce013b1e" containerName="mount-bpf-fs" Dec 13 02:08:57.536345 kubelet[1421]: E1213 02:08:57.536145 1421 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="64a69962-a54c-47c8-9317-23f7ce013b1e" containerName="clean-cilium-state" Dec 13 02:08:57.536345 kubelet[1421]: E1213 02:08:57.536153 1421 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="64a69962-a54c-47c8-9317-23f7ce013b1e" containerName="apply-sysctl-overwrites" Dec 
13 02:08:57.536345 kubelet[1421]: E1213 02:08:57.536159 1421 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="64a69962-a54c-47c8-9317-23f7ce013b1e" containerName="cilium-agent" Dec 13 02:08:57.536345 kubelet[1421]: E1213 02:08:57.536163 1421 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="64a69962-a54c-47c8-9317-23f7ce013b1e" containerName="mount-cgroup" Dec 13 02:08:57.536345 kubelet[1421]: I1213 02:08:57.536185 1421 memory_manager.go:354] "RemoveStaleState removing state" podUID="64a69962-a54c-47c8-9317-23f7ce013b1e" containerName="cilium-agent" Dec 13 02:08:57.536512 kubelet[1421]: I1213 02:08:57.536345 1421 topology_manager.go:215] "Topology Admit Handler" podUID="e01ddcb2-0cac-43f2-b036-c93085de1ebb" podNamespace="kube-system" podName="cilium-operator-599987898-gvbtz" Dec 13 02:08:57.541337 systemd[1]: Created slice kubepods-besteffort-pode01ddcb2_0cac_43f2_b036_c93085de1ebb.slice. Dec 13 02:08:57.545304 systemd[1]: Created slice kubepods-burstable-pod8491fb77_d293_4f41_94bc_e07abadbeb05.slice. Dec 13 02:08:57.678273 kubelet[1421]: E1213 02:08:57.678228 1421 pod_workers.go:1298] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-rg8mr lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-bcnfg" podUID="8491fb77-d293-4f41-94bc-e07abadbeb05" Dec 13 02:08:57.717090 kubelet[1421]: I1213 02:08:57.716968 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8491fb77-d293-4f41-94bc-e07abadbeb05-cni-path\") pod \"cilium-bcnfg\" (UID: \"8491fb77-d293-4f41-94bc-e07abadbeb05\") " pod="kube-system/cilium-bcnfg" Dec 13 02:08:57.717090 kubelet[1421]: I1213 02:08:57.717012 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8491fb77-d293-4f41-94bc-e07abadbeb05-lib-modules\") pod \"cilium-bcnfg\" (UID: \"8491fb77-d293-4f41-94bc-e07abadbeb05\") " pod="kube-system/cilium-bcnfg" Dec 13 02:08:57.717090 kubelet[1421]: I1213 02:08:57.717039 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8491fb77-d293-4f41-94bc-e07abadbeb05-host-proc-sys-net\") pod \"cilium-bcnfg\" (UID: \"8491fb77-d293-4f41-94bc-e07abadbeb05\") " pod="kube-system/cilium-bcnfg" Dec 13 02:08:57.717090 kubelet[1421]: I1213 02:08:57.717059 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8491fb77-d293-4f41-94bc-e07abadbeb05-clustermesh-secrets\") pod \"cilium-bcnfg\" (UID: \"8491fb77-d293-4f41-94bc-e07abadbeb05\") " pod="kube-system/cilium-bcnfg" Dec 13 02:08:57.717090 kubelet[1421]: I1213 02:08:57.717077 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8491fb77-d293-4f41-94bc-e07abadbeb05-host-proc-sys-kernel\") pod \"cilium-bcnfg\" (UID: \"8491fb77-d293-4f41-94bc-e07abadbeb05\") " pod="kube-system/cilium-bcnfg" Dec 13 02:08:57.717370 kubelet[1421]: I1213 02:08:57.717095 1421 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rg8mr\" (UniqueName: \"kubernetes.io/projected/8491fb77-d293-4f41-94bc-e07abadbeb05-kube-api-access-rg8mr\") pod \"cilium-bcnfg\" (UID: \"8491fb77-d293-4f41-94bc-e07abadbeb05\") " pod="kube-system/cilium-bcnfg" Dec 13 02:08:57.717370 kubelet[1421]: I1213 02:08:57.717123 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qb9f\" (UniqueName: \"kubernetes.io/projected/e01ddcb2-0cac-43f2-b036-c93085de1ebb-kube-api-access-2qb9f\") pod \"cilium-operator-599987898-gvbtz\" (UID: \"e01ddcb2-0cac-43f2-b036-c93085de1ebb\") " pod="kube-system/cilium-operator-599987898-gvbtz" Dec 13 02:08:57.717370 kubelet[1421]: I1213 02:08:57.717142 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8491fb77-d293-4f41-94bc-e07abadbeb05-bpf-maps\") pod \"cilium-bcnfg\" (UID: \"8491fb77-d293-4f41-94bc-e07abadbeb05\") " pod="kube-system/cilium-bcnfg" Dec 13 02:08:57.717370 kubelet[1421]: I1213 02:08:57.717157 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8491fb77-d293-4f41-94bc-e07abadbeb05-cilium-cgroup\") pod \"cilium-bcnfg\" (UID: \"8491fb77-d293-4f41-94bc-e07abadbeb05\") " pod="kube-system/cilium-bcnfg" Dec 13 02:08:57.717370 kubelet[1421]: I1213 02:08:57.717173 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8491fb77-d293-4f41-94bc-e07abadbeb05-etc-cni-netd\") pod \"cilium-bcnfg\" (UID: \"8491fb77-d293-4f41-94bc-e07abadbeb05\") " pod="kube-system/cilium-bcnfg" Dec 13 02:08:57.717636 kubelet[1421]: I1213 02:08:57.717189 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8491fb77-d293-4f41-94bc-e07abadbeb05-xtables-lock\") pod \"cilium-bcnfg\" (UID: \"8491fb77-d293-4f41-94bc-e07abadbeb05\") " pod="kube-system/cilium-bcnfg" Dec 13 02:08:57.717636 kubelet[1421]: I1213 02:08:57.717204 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8491fb77-d293-4f41-94bc-e07abadbeb05-hubble-tls\") pod \"cilium-bcnfg\" (UID: \"8491fb77-d293-4f41-94bc-e07abadbeb05\") " pod="kube-system/cilium-bcnfg" Dec 13 02:08:57.717636 kubelet[1421]: I1213 02:08:57.717251 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e01ddcb2-0cac-43f2-b036-c93085de1ebb-cilium-config-path\") pod \"cilium-operator-599987898-gvbtz\" (UID: \"e01ddcb2-0cac-43f2-b036-c93085de1ebb\") " pod="kube-system/cilium-operator-599987898-gvbtz" Dec 13 02:08:57.717636 kubelet[1421]: I1213 02:08:57.717277 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8491fb77-d293-4f41-94bc-e07abadbeb05-cilium-run\") pod \"cilium-bcnfg\" (UID: \"8491fb77-d293-4f41-94bc-e07abadbeb05\") " pod="kube-system/cilium-bcnfg" Dec 13 02:08:57.717636 kubelet[1421]: I1213 02:08:57.717293 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/8491fb77-d293-4f41-94bc-e07abadbeb05-hostproc\") pod \"cilium-bcnfg\" (UID: \"8491fb77-d293-4f41-94bc-e07abadbeb05\") " pod="kube-system/cilium-bcnfg" Dec 13 02:08:57.717758 kubelet[1421]: I1213 02:08:57.717307 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8491fb77-d293-4f41-94bc-e07abadbeb05-cilium-config-path\") pod \"cilium-bcnfg\" (UID: \"8491fb77-d293-4f41-94bc-e07abadbeb05\") " pod="kube-system/cilium-bcnfg" Dec 13 02:08:57.717758 kubelet[1421]: I1213 02:08:57.717322 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8491fb77-d293-4f41-94bc-e07abadbeb05-cilium-ipsec-secrets\") pod \"cilium-bcnfg\" (UID: \"8491fb77-d293-4f41-94bc-e07abadbeb05\") " pod="kube-system/cilium-bcnfg" Dec 13 02:08:57.757084 kubelet[1421]: E1213 02:08:57.757051 1421 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 02:08:57.772038 kubelet[1421]: I1213 02:08:57.772002 1421 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64a69962-a54c-47c8-9317-23f7ce013b1e" path="/var/lib/kubelet/pods/64a69962-a54c-47c8-9317-23f7ce013b1e/volumes" Dec 13 02:08:58.020093 kubelet[1421]: I1213 02:08:58.020068 1421 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8491fb77-d293-4f41-94bc-e07abadbeb05-lib-modules\") pod \"8491fb77-d293-4f41-94bc-e07abadbeb05\" (UID: \"8491fb77-d293-4f41-94bc-e07abadbeb05\") " Dec 13 02:08:58.020244 kubelet[1421]: I1213 02:08:58.020104 1421 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8491fb77-d293-4f41-94bc-e07abadbeb05-cni-path\") pod \"8491fb77-d293-4f41-94bc-e07abadbeb05\" (UID: \"8491fb77-d293-4f41-94bc-e07abadbeb05\") " Dec 13 02:08:58.020244 kubelet[1421]: I1213 02:08:58.020131 1421 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rg8mr\" (UniqueName: \"kubernetes.io/projected/8491fb77-d293-4f41-94bc-e07abadbeb05-kube-api-access-rg8mr\") pod \"8491fb77-d293-4f41-94bc-e07abadbeb05\" (UID: \"8491fb77-d293-4f41-94bc-e07abadbeb05\") " Dec 13 02:08:58.020244 kubelet[1421]: I1213 02:08:58.020155 1421 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8491fb77-d293-4f41-94bc-e07abadbeb05-cilium-config-path\") pod \"8491fb77-d293-4f41-94bc-e07abadbeb05\" (UID: \"8491fb77-d293-4f41-94bc-e07abadbeb05\") " Dec 13 02:08:58.020244 kubelet[1421]: I1213 02:08:58.020174 1421 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8491fb77-d293-4f41-94bc-e07abadbeb05-cilium-ipsec-secrets\") pod \"8491fb77-d293-4f41-94bc-e07abadbeb05\" (UID: \"8491fb77-d293-4f41-94bc-e07abadbeb05\") " Dec 13 02:08:58.020244 kubelet[1421]: I1213 02:08:58.020190 1421 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8491fb77-d293-4f41-94bc-e07abadbeb05-cilium-cgroup\") pod \"8491fb77-d293-4f41-94bc-e07abadbeb05\" (UID: \"8491fb77-d293-4f41-94bc-e07abadbeb05\") 
" Dec 13 02:08:58.020244 kubelet[1421]: I1213 02:08:58.020206 1421 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8491fb77-d293-4f41-94bc-e07abadbeb05-bpf-maps\") pod \"8491fb77-d293-4f41-94bc-e07abadbeb05\" (UID: \"8491fb77-d293-4f41-94bc-e07abadbeb05\") " Dec 13 02:08:58.020379 kubelet[1421]: I1213 02:08:58.020245 1421 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8491fb77-d293-4f41-94bc-e07abadbeb05-clustermesh-secrets\") pod \"8491fb77-d293-4f41-94bc-e07abadbeb05\" (UID: \"8491fb77-d293-4f41-94bc-e07abadbeb05\") " Dec 13 02:08:58.020379 kubelet[1421]: I1213 02:08:58.020264 1421 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8491fb77-d293-4f41-94bc-e07abadbeb05-host-proc-sys-net\") pod \"8491fb77-d293-4f41-94bc-e07abadbeb05\" (UID: \"8491fb77-d293-4f41-94bc-e07abadbeb05\") " Dec 13 02:08:58.020379 kubelet[1421]: I1213 02:08:58.020281 1421 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8491fb77-d293-4f41-94bc-e07abadbeb05-hostproc\") pod \"8491fb77-d293-4f41-94bc-e07abadbeb05\" (UID: \"8491fb77-d293-4f41-94bc-e07abadbeb05\") " Dec 13 02:08:58.020379 kubelet[1421]: I1213 02:08:58.020301 1421 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8491fb77-d293-4f41-94bc-e07abadbeb05-host-proc-sys-kernel\") pod \"8491fb77-d293-4f41-94bc-e07abadbeb05\" (UID: \"8491fb77-d293-4f41-94bc-e07abadbeb05\") " Dec 13 02:08:58.020379 kubelet[1421]: I1213 02:08:58.020318 1421 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8491fb77-d293-4f41-94bc-e07abadbeb05-etc-cni-netd\") pod \"8491fb77-d293-4f41-94bc-e07abadbeb05\" (UID: \"8491fb77-d293-4f41-94bc-e07abadbeb05\") " Dec 13 02:08:58.020379 kubelet[1421]: I1213 02:08:58.020335 1421 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8491fb77-d293-4f41-94bc-e07abadbeb05-xtables-lock\") pod \"8491fb77-d293-4f41-94bc-e07abadbeb05\" (UID: \"8491fb77-d293-4f41-94bc-e07abadbeb05\") " Dec 13 02:08:58.020531 kubelet[1421]: I1213 02:08:58.020355 1421 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8491fb77-d293-4f41-94bc-e07abadbeb05-hubble-tls\") pod \"8491fb77-d293-4f41-94bc-e07abadbeb05\" (UID: \"8491fb77-d293-4f41-94bc-e07abadbeb05\") " Dec 13 02:08:58.020531 kubelet[1421]: I1213 02:08:58.020374 1421 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8491fb77-d293-4f41-94bc-e07abadbeb05-cilium-run\") pod \"8491fb77-d293-4f41-94bc-e07abadbeb05\" (UID: \"8491fb77-d293-4f41-94bc-e07abadbeb05\") " Dec 13 02:08:58.020531 kubelet[1421]: I1213 02:08:58.020417 1421 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8491fb77-d293-4f41-94bc-e07abadbeb05-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8491fb77-d293-4f41-94bc-e07abadbeb05" (UID: "8491fb77-d293-4f41-94bc-e07abadbeb05"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:08:58.020531 kubelet[1421]: I1213 02:08:58.020491 1421 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8491fb77-d293-4f41-94bc-e07abadbeb05-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8491fb77-d293-4f41-94bc-e07abadbeb05" (UID: "8491fb77-d293-4f41-94bc-e07abadbeb05"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:08:58.020531 kubelet[1421]: I1213 02:08:58.020513 1421 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8491fb77-d293-4f41-94bc-e07abadbeb05-cni-path" (OuterVolumeSpecName: "cni-path") pod "8491fb77-d293-4f41-94bc-e07abadbeb05" (UID: "8491fb77-d293-4f41-94bc-e07abadbeb05"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:08:58.020636 kubelet[1421]: I1213 02:08:58.020540 1421 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8491fb77-d293-4f41-94bc-e07abadbeb05-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8491fb77-d293-4f41-94bc-e07abadbeb05" (UID: "8491fb77-d293-4f41-94bc-e07abadbeb05"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:08:58.020890 kubelet[1421]: I1213 02:08:58.020864 1421 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8491fb77-d293-4f41-94bc-e07abadbeb05-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8491fb77-d293-4f41-94bc-e07abadbeb05" (UID: "8491fb77-d293-4f41-94bc-e07abadbeb05"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:08:58.020946 kubelet[1421]: I1213 02:08:58.020900 1421 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8491fb77-d293-4f41-94bc-e07abadbeb05-hostproc" (OuterVolumeSpecName: "hostproc") pod "8491fb77-d293-4f41-94bc-e07abadbeb05" (UID: "8491fb77-d293-4f41-94bc-e07abadbeb05"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:08:58.020946 kubelet[1421]: I1213 02:08:58.020926 1421 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8491fb77-d293-4f41-94bc-e07abadbeb05-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8491fb77-d293-4f41-94bc-e07abadbeb05" (UID: "8491fb77-d293-4f41-94bc-e07abadbeb05"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:08:58.020992 kubelet[1421]: I1213 02:08:58.020947 1421 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8491fb77-d293-4f41-94bc-e07abadbeb05-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8491fb77-d293-4f41-94bc-e07abadbeb05" (UID: "8491fb77-d293-4f41-94bc-e07abadbeb05"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:08:58.020992 kubelet[1421]: I1213 02:08:58.020954 1421 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8491fb77-d293-4f41-94bc-e07abadbeb05-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8491fb77-d293-4f41-94bc-e07abadbeb05" (UID: "8491fb77-d293-4f41-94bc-e07abadbeb05"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:08:58.020992 kubelet[1421]: I1213 02:08:58.020977 1421 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8491fb77-d293-4f41-94bc-e07abadbeb05-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8491fb77-d293-4f41-94bc-e07abadbeb05" (UID: "8491fb77-d293-4f41-94bc-e07abadbeb05"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:08:58.022145 kubelet[1421]: I1213 02:08:58.022121 1421 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8491fb77-d293-4f41-94bc-e07abadbeb05-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8491fb77-d293-4f41-94bc-e07abadbeb05" (UID: "8491fb77-d293-4f41-94bc-e07abadbeb05"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 02:08:58.023304 kubelet[1421]: I1213 02:08:58.023287 1421 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8491fb77-d293-4f41-94bc-e07abadbeb05-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "8491fb77-d293-4f41-94bc-e07abadbeb05" (UID: "8491fb77-d293-4f41-94bc-e07abadbeb05"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 02:08:58.023465 kubelet[1421]: I1213 02:08:58.023431 1421 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8491fb77-d293-4f41-94bc-e07abadbeb05-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8491fb77-d293-4f41-94bc-e07abadbeb05" (UID: "8491fb77-d293-4f41-94bc-e07abadbeb05"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:08:58.024449 systemd[1]: var-lib-kubelet-pods-8491fb77\x2dd293\x2d4f41\x2d94bc\x2de07abadbeb05-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drg8mr.mount: Deactivated successfully. Dec 13 02:08:58.024546 systemd[1]: var-lib-kubelet-pods-8491fb77\x2dd293\x2d4f41\x2d94bc\x2de07abadbeb05-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 02:08:58.024624 systemd[1]: var-lib-kubelet-pods-8491fb77\x2dd293\x2d4f41\x2d94bc\x2de07abadbeb05-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Dec 13 02:08:58.024930 kubelet[1421]: I1213 02:08:58.024913 1421 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8491fb77-d293-4f41-94bc-e07abadbeb05-kube-api-access-rg8mr" (OuterVolumeSpecName: "kube-api-access-rg8mr") pod "8491fb77-d293-4f41-94bc-e07abadbeb05" (UID: "8491fb77-d293-4f41-94bc-e07abadbeb05"). InnerVolumeSpecName "kube-api-access-rg8mr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:08:58.025169 kubelet[1421]: I1213 02:08:58.025141 1421 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8491fb77-d293-4f41-94bc-e07abadbeb05-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8491fb77-d293-4f41-94bc-e07abadbeb05" (UID: "8491fb77-d293-4f41-94bc-e07abadbeb05"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 02:08:58.121405 kubelet[1421]: I1213 02:08:58.121354 1421 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8491fb77-d293-4f41-94bc-e07abadbeb05-cilium-cgroup\") on node \"10.0.0.138\" DevicePath \"\"" Dec 13 02:08:58.121405 kubelet[1421]: I1213 02:08:58.121393 1421 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8491fb77-d293-4f41-94bc-e07abadbeb05-bpf-maps\") on node \"10.0.0.138\" DevicePath \"\"" Dec 13 02:08:58.121405 kubelet[1421]: I1213 02:08:58.121405 1421 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8491fb77-d293-4f41-94bc-e07abadbeb05-clustermesh-secrets\") on node \"10.0.0.138\" DevicePath \"\"" Dec 13 02:08:58.121405 kubelet[1421]: I1213 02:08:58.121417 1421 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8491fb77-d293-4f41-94bc-e07abadbeb05-host-proc-sys-net\") on node \"10.0.0.138\" DevicePath \"\"" Dec 13 02:08:58.121693 kubelet[1421]: I1213 02:08:58.121428 1421 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8491fb77-d293-4f41-94bc-e07abadbeb05-hostproc\") on node \"10.0.0.138\" DevicePath \"\"" Dec 13 02:08:58.121693 kubelet[1421]: I1213 02:08:58.121449 1421 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8491fb77-d293-4f41-94bc-e07abadbeb05-host-proc-sys-kernel\") on node \"10.0.0.138\" DevicePath \"\"" Dec 13 02:08:58.121693 kubelet[1421]: I1213 02:08:58.121460 1421 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8491fb77-d293-4f41-94bc-e07abadbeb05-etc-cni-netd\") on node \"10.0.0.138\" DevicePath \"\"" Dec 13 02:08:58.121693 kubelet[1421]: I1213 02:08:58.121469 1421 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8491fb77-d293-4f41-94bc-e07abadbeb05-xtables-lock\") on node \"10.0.0.138\" DevicePath \"\"" Dec 13 02:08:58.121693 kubelet[1421]: I1213 02:08:58.121478 1421 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8491fb77-d293-4f41-94bc-e07abadbeb05-hubble-tls\") on node \"10.0.0.138\" DevicePath \"\"" Dec 13 02:08:58.121693 kubelet[1421]: I1213 02:08:58.121486 1421 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8491fb77-d293-4f41-94bc-e07abadbeb05-cilium-run\") on node \"10.0.0.138\" DevicePath \"\"" Dec 13 02:08:58.121693 kubelet[1421]: I1213 02:08:58.121498 1421 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8491fb77-d293-4f41-94bc-e07abadbeb05-lib-modules\") on node \"10.0.0.138\" DevicePath \"\"" Dec 13 02:08:58.121693 kubelet[1421]: I1213 02:08:58.121511 1421 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8491fb77-d293-4f41-94bc-e07abadbeb05-cni-path\") on node \"10.0.0.138\" DevicePath \"\"" Dec 13 02:08:58.121860 kubelet[1421]: I1213 02:08:58.121521 1421 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-rg8mr\" (UniqueName: \"kubernetes.io/projected/8491fb77-d293-4f41-94bc-e07abadbeb05-kube-api-access-rg8mr\") on node \"10.0.0.138\" DevicePath 
\"\"" Dec 13 02:08:58.121860 kubelet[1421]: I1213 02:08:58.121532 1421 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8491fb77-d293-4f41-94bc-e07abadbeb05-cilium-config-path\") on node \"10.0.0.138\" DevicePath \"\"" Dec 13 02:08:58.121860 kubelet[1421]: I1213 02:08:58.121542 1421 reconciler_common.go:289] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8491fb77-d293-4f41-94bc-e07abadbeb05-cilium-ipsec-secrets\") on node \"10.0.0.138\" DevicePath \"\"" Dec 13 02:08:58.143860 kubelet[1421]: E1213 02:08:58.143802 1421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:08:58.144393 env[1205]: time="2024-12-13T02:08:58.144354187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-gvbtz,Uid:e01ddcb2-0cac-43f2-b036-c93085de1ebb,Namespace:kube-system,Attempt:0,}" Dec 13 02:08:58.156094 env[1205]: time="2024-12-13T02:08:58.156031345Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:08:58.156094 env[1205]: time="2024-12-13T02:08:58.156066511Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:08:58.156094 env[1205]: time="2024-12-13T02:08:58.156076881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:08:58.156321 env[1205]: time="2024-12-13T02:08:58.156242351Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/40455edd2a85d6135657dd84d34da51ab0b2e7e3e212c48cc098f33ec0dd4a0f pid=2987 runtime=io.containerd.runc.v2 Dec 13 02:08:58.166889 systemd[1]: Started cri-containerd-40455edd2a85d6135657dd84d34da51ab0b2e7e3e212c48cc098f33ec0dd4a0f.scope. Dec 13 02:08:58.194163 env[1205]: time="2024-12-13T02:08:58.194125772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-gvbtz,Uid:e01ddcb2-0cac-43f2-b036-c93085de1ebb,Namespace:kube-system,Attempt:0,} returns sandbox id \"40455edd2a85d6135657dd84d34da51ab0b2e7e3e212c48cc098f33ec0dd4a0f\"" Dec 13 02:08:58.194669 kubelet[1421]: E1213 02:08:58.194650 1421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:08:58.195581 env[1205]: time="2024-12-13T02:08:58.195557060Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 02:08:58.456119 kubelet[1421]: E1213 02:08:58.456005 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:58.823460 systemd[1]: var-lib-kubelet-pods-8491fb77\x2dd293\x2d4f41\x2d94bc\x2de07abadbeb05-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 02:08:58.901403 systemd[1]: Removed slice kubepods-burstable-pod8491fb77_d293_4f41_94bc_e07abadbeb05.slice. 
Dec 13 02:08:58.925380 kubelet[1421]: I1213 02:08:58.925349 1421 topology_manager.go:215] "Topology Admit Handler" podUID="e196b2ae-8d57-4ed1-8070-b471d18e4526" podNamespace="kube-system" podName="cilium-mf5lk" Dec 13 02:08:58.930009 systemd[1]: Created slice kubepods-burstable-pode196b2ae_8d57_4ed1_8070_b471d18e4526.slice. Dec 13 02:08:59.026396 kubelet[1421]: I1213 02:08:59.026363 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e196b2ae-8d57-4ed1-8070-b471d18e4526-cilium-cgroup\") pod \"cilium-mf5lk\" (UID: \"e196b2ae-8d57-4ed1-8070-b471d18e4526\") " pod="kube-system/cilium-mf5lk" Dec 13 02:08:59.026494 kubelet[1421]: I1213 02:08:59.026400 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e196b2ae-8d57-4ed1-8070-b471d18e4526-clustermesh-secrets\") pod \"cilium-mf5lk\" (UID: \"e196b2ae-8d57-4ed1-8070-b471d18e4526\") " pod="kube-system/cilium-mf5lk" Dec 13 02:08:59.026494 kubelet[1421]: I1213 02:08:59.026441 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9qnr\" (UniqueName: \"kubernetes.io/projected/e196b2ae-8d57-4ed1-8070-b471d18e4526-kube-api-access-p9qnr\") pod \"cilium-mf5lk\" (UID: \"e196b2ae-8d57-4ed1-8070-b471d18e4526\") " pod="kube-system/cilium-mf5lk" Dec 13 02:08:59.026494 kubelet[1421]: I1213 02:08:59.026461 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e196b2ae-8d57-4ed1-8070-b471d18e4526-cilium-run\") pod \"cilium-mf5lk\" (UID: \"e196b2ae-8d57-4ed1-8070-b471d18e4526\") " pod="kube-system/cilium-mf5lk" Dec 13 02:08:59.026494 kubelet[1421]: I1213 02:08:59.026477 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e196b2ae-8d57-4ed1-8070-b471d18e4526-cni-path\") pod \"cilium-mf5lk\" (UID: \"e196b2ae-8d57-4ed1-8070-b471d18e4526\") " pod="kube-system/cilium-mf5lk" Dec 13 02:08:59.026494 kubelet[1421]: I1213 02:08:59.026493 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e196b2ae-8d57-4ed1-8070-b471d18e4526-etc-cni-netd\") pod \"cilium-mf5lk\" (UID: \"e196b2ae-8d57-4ed1-8070-b471d18e4526\") " pod="kube-system/cilium-mf5lk" Dec 13 02:08:59.026680 kubelet[1421]: I1213 02:08:59.026510 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e196b2ae-8d57-4ed1-8070-b471d18e4526-cilium-ipsec-secrets\") pod \"cilium-mf5lk\" (UID: \"e196b2ae-8d57-4ed1-8070-b471d18e4526\") " pod="kube-system/cilium-mf5lk" Dec 13 02:08:59.026680 kubelet[1421]: I1213 02:08:59.026530 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e196b2ae-8d57-4ed1-8070-b471d18e4526-host-proc-sys-kernel\") pod \"cilium-mf5lk\" (UID: \"e196b2ae-8d57-4ed1-8070-b471d18e4526\") " pod="kube-system/cilium-mf5lk" Dec 13 02:08:59.026680 kubelet[1421]: I1213 02:08:59.026549 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/e196b2ae-8d57-4ed1-8070-b471d18e4526-bpf-maps\") pod \"cilium-mf5lk\" (UID: \"e196b2ae-8d57-4ed1-8070-b471d18e4526\") " pod="kube-system/cilium-mf5lk" Dec 13 02:08:59.026680 kubelet[1421]: I1213 02:08:59.026567 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e196b2ae-8d57-4ed1-8070-b471d18e4526-lib-modules\") pod \"cilium-mf5lk\" (UID: \"e196b2ae-8d57-4ed1-8070-b471d18e4526\") " pod="kube-system/cilium-mf5lk" Dec 13 02:08:59.026680 kubelet[1421]: I1213 02:08:59.026586 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e196b2ae-8d57-4ed1-8070-b471d18e4526-cilium-config-path\") pod \"cilium-mf5lk\" (UID: \"e196b2ae-8d57-4ed1-8070-b471d18e4526\") " pod="kube-system/cilium-mf5lk" Dec 13 02:08:59.026680 kubelet[1421]: I1213 02:08:59.026605 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e196b2ae-8d57-4ed1-8070-b471d18e4526-hubble-tls\") pod \"cilium-mf5lk\" (UID: \"e196b2ae-8d57-4ed1-8070-b471d18e4526\") " pod="kube-system/cilium-mf5lk" Dec 13 02:08:59.026832 kubelet[1421]: I1213 02:08:59.026630 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e196b2ae-8d57-4ed1-8070-b471d18e4526-host-proc-sys-net\") pod \"cilium-mf5lk\" (UID: \"e196b2ae-8d57-4ed1-8070-b471d18e4526\") " pod="kube-system/cilium-mf5lk" Dec 13 02:08:59.026832 kubelet[1421]: I1213 02:08:59.026667 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e196b2ae-8d57-4ed1-8070-b471d18e4526-hostproc\") pod \"cilium-mf5lk\" (UID: \"e196b2ae-8d57-4ed1-8070-b471d18e4526\") " pod="kube-system/cilium-mf5lk" Dec 13 02:08:59.026832 kubelet[1421]: I1213 02:08:59.026691 1421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e196b2ae-8d57-4ed1-8070-b471d18e4526-xtables-lock\") pod \"cilium-mf5lk\" (UID: \"e196b2ae-8d57-4ed1-8070-b471d18e4526\") " pod="kube-system/cilium-mf5lk" Dec 13 02:08:59.094236 kubelet[1421]: I1213 02:08:59.094118 1421 setters.go:580] "Node became not ready" node="10.0.0.138" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T02:08:59Z","lastTransitionTime":"2024-12-13T02:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 13 02:08:59.239754 kubelet[1421]: E1213 02:08:59.239724 1421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:08:59.240214 env[1205]: time="2024-12-13T02:08:59.240166312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mf5lk,Uid:e196b2ae-8d57-4ed1-8070-b471d18e4526,Namespace:kube-system,Attempt:0,}" Dec 13 02:08:59.251722 env[1205]: time="2024-12-13T02:08:59.251660495Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:08:59.251722 env[1205]: time="2024-12-13T02:08:59.251692575Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:08:59.251722 env[1205]: time="2024-12-13T02:08:59.251701983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:08:59.251878 env[1205]: time="2024-12-13T02:08:59.251818822Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e106f45922b37bbc2b712606b6b93f8d8582a0a6adc5057cfb39eb2c2bb1a904 pid=3030 runtime=io.containerd.runc.v2 Dec 13 02:08:59.261521 systemd[1]: Started cri-containerd-e106f45922b37bbc2b712606b6b93f8d8582a0a6adc5057cfb39eb2c2bb1a904.scope. Dec 13 02:08:59.280367 env[1205]: time="2024-12-13T02:08:59.279683816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mf5lk,Uid:e196b2ae-8d57-4ed1-8070-b471d18e4526,Namespace:kube-system,Attempt:0,} returns sandbox id \"e106f45922b37bbc2b712606b6b93f8d8582a0a6adc5057cfb39eb2c2bb1a904\"" Dec 13 02:08:59.280515 kubelet[1421]: E1213 02:08:59.280306 1421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:08:59.281851 env[1205]: time="2024-12-13T02:08:59.281811421Z" level=info msg="CreateContainer within sandbox \"e106f45922b37bbc2b712606b6b93f8d8582a0a6adc5057cfb39eb2c2bb1a904\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 02:08:59.293580 env[1205]: time="2024-12-13T02:08:59.293541827Z" level=info msg="CreateContainer within sandbox \"e106f45922b37bbc2b712606b6b93f8d8582a0a6adc5057cfb39eb2c2bb1a904\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9cfd0ecd9d1b4721dab280b2fbcb789af50e594bc0eab85d401c55784d3208fb\"" Dec 13 02:08:59.293858 env[1205]: time="2024-12-13T02:08:59.293816663Z" level=info msg="StartContainer for \"9cfd0ecd9d1b4721dab280b2fbcb789af50e594bc0eab85d401c55784d3208fb\"" Dec 13 02:08:59.306655 systemd[1]: Started cri-containerd-9cfd0ecd9d1b4721dab280b2fbcb789af50e594bc0eab85d401c55784d3208fb.scope. Dec 13 02:08:59.326488 env[1205]: time="2024-12-13T02:08:59.326446513Z" level=info msg="StartContainer for \"9cfd0ecd9d1b4721dab280b2fbcb789af50e594bc0eab85d401c55784d3208fb\" returns successfully" Dec 13 02:08:59.333155 systemd[1]: cri-containerd-9cfd0ecd9d1b4721dab280b2fbcb789af50e594bc0eab85d401c55784d3208fb.scope: Deactivated successfully. 
Dec 13 02:08:59.360482 env[1205]: time="2024-12-13T02:08:59.360364422Z" level=info msg="shim disconnected" id=9cfd0ecd9d1b4721dab280b2fbcb789af50e594bc0eab85d401c55784d3208fb
Dec 13 02:08:59.360482 env[1205]: time="2024-12-13T02:08:59.360421429Z" level=warning msg="cleaning up after shim disconnected" id=9cfd0ecd9d1b4721dab280b2fbcb789af50e594bc0eab85d401c55784d3208fb namespace=k8s.io
Dec 13 02:08:59.360482 env[1205]: time="2024-12-13T02:08:59.360429845Z" level=info msg="cleaning up dead shim"
Dec 13 02:08:59.366839 env[1205]: time="2024-12-13T02:08:59.366795979Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:08:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3115 runtime=io.containerd.runc.v2\n"
Dec 13 02:08:59.456897 kubelet[1421]: E1213 02:08:59.456851 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:08:59.772462 kubelet[1421]: I1213 02:08:59.772429 1421 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8491fb77-d293-4f41-94bc-e07abadbeb05" path="/var/lib/kubelet/pods/8491fb77-d293-4f41-94bc-e07abadbeb05/volumes"
Dec 13 02:08:59.901457 kubelet[1421]: E1213 02:08:59.901429 1421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 02:08:59.902895 env[1205]: time="2024-12-13T02:08:59.902859972Z" level=info msg="CreateContainer within sandbox \"e106f45922b37bbc2b712606b6b93f8d8582a0a6adc5057cfb39eb2c2bb1a904\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 02:09:00.145099 env[1205]: time="2024-12-13T02:09:00.144989295Z" level=info msg="CreateContainer within sandbox \"e106f45922b37bbc2b712606b6b93f8d8582a0a6adc5057cfb39eb2c2bb1a904\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"76964c9628d769cdaf7dbe10d9b9d39d3e9c8c0b994c25a33cbdef938e98ff7e\""
Dec 13 02:09:00.145680 env[1205]: time="2024-12-13T02:09:00.145656368Z" level=info msg="StartContainer for \"76964c9628d769cdaf7dbe10d9b9d39d3e9c8c0b994c25a33cbdef938e98ff7e\""
Dec 13 02:09:00.160416 systemd[1]: Started cri-containerd-76964c9628d769cdaf7dbe10d9b9d39d3e9c8c0b994c25a33cbdef938e98ff7e.scope.
Dec 13 02:09:00.180631 env[1205]: time="2024-12-13T02:09:00.180590991Z" level=info msg="StartContainer for \"76964c9628d769cdaf7dbe10d9b9d39d3e9c8c0b994c25a33cbdef938e98ff7e\" returns successfully"
Dec 13 02:09:00.185336 systemd[1]: cri-containerd-76964c9628d769cdaf7dbe10d9b9d39d3e9c8c0b994c25a33cbdef938e98ff7e.scope: Deactivated successfully.
Dec 13 02:09:00.201828 env[1205]: time="2024-12-13T02:09:00.201770908Z" level=info msg="shim disconnected" id=76964c9628d769cdaf7dbe10d9b9d39d3e9c8c0b994c25a33cbdef938e98ff7e
Dec 13 02:09:00.201828 env[1205]: time="2024-12-13T02:09:00.201819249Z" level=warning msg="cleaning up after shim disconnected" id=76964c9628d769cdaf7dbe10d9b9d39d3e9c8c0b994c25a33cbdef938e98ff7e namespace=k8s.io
Dec 13 02:09:00.201828 env[1205]: time="2024-12-13T02:09:00.201832273Z" level=info msg="cleaning up dead shim"
Dec 13 02:09:00.207769 env[1205]: time="2024-12-13T02:09:00.207723715Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:09:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3176 runtime=io.containerd.runc.v2\n"
Dec 13 02:09:00.457114 kubelet[1421]: E1213 02:09:00.457004 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:09:00.822513 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-76964c9628d769cdaf7dbe10d9b9d39d3e9c8c0b994c25a33cbdef938e98ff7e-rootfs.mount: Deactivated successfully.
Dec 13 02:09:00.888856 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4032712558.mount: Deactivated successfully.
Dec 13 02:09:00.904027 kubelet[1421]: E1213 02:09:00.903985 1421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 02:09:00.905542 env[1205]: time="2024-12-13T02:09:00.905501901Z" level=info msg="CreateContainer within sandbox \"e106f45922b37bbc2b712606b6b93f8d8582a0a6adc5057cfb39eb2c2bb1a904\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 02:09:01.054950 env[1205]: time="2024-12-13T02:09:01.054893952Z" level=info msg="CreateContainer within sandbox \"e106f45922b37bbc2b712606b6b93f8d8582a0a6adc5057cfb39eb2c2bb1a904\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f5cccd76d1bac8fd54f5d3dc8ee93fc6da98eff7e83c79894806da7fd6158fac\""
Dec 13 02:09:01.055465 env[1205]: time="2024-12-13T02:09:01.055383932Z" level=info msg="StartContainer for \"f5cccd76d1bac8fd54f5d3dc8ee93fc6da98eff7e83c79894806da7fd6158fac\""
Dec 13 02:09:01.071204 systemd[1]: Started cri-containerd-f5cccd76d1bac8fd54f5d3dc8ee93fc6da98eff7e83c79894806da7fd6158fac.scope.
Dec 13 02:09:01.092735 env[1205]: time="2024-12-13T02:09:01.092643805Z" level=info msg="StartContainer for \"f5cccd76d1bac8fd54f5d3dc8ee93fc6da98eff7e83c79894806da7fd6158fac\" returns successfully"
Dec 13 02:09:01.093168 systemd[1]: cri-containerd-f5cccd76d1bac8fd54f5d3dc8ee93fc6da98eff7e83c79894806da7fd6158fac.scope: Deactivated successfully.
Dec 13 02:09:01.114701 env[1205]: time="2024-12-13T02:09:01.114645823Z" level=info msg="shim disconnected" id=f5cccd76d1bac8fd54f5d3dc8ee93fc6da98eff7e83c79894806da7fd6158fac
Dec 13 02:09:01.114701 env[1205]: time="2024-12-13T02:09:01.114696508Z" level=warning msg="cleaning up after shim disconnected" id=f5cccd76d1bac8fd54f5d3dc8ee93fc6da98eff7e83c79894806da7fd6158fac namespace=k8s.io
Dec 13 02:09:01.114701 env[1205]: time="2024-12-13T02:09:01.114705444Z" level=info msg="cleaning up dead shim"
Dec 13 02:09:01.120514 env[1205]: time="2024-12-13T02:09:01.120469686Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:09:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3232 runtime=io.containerd.runc.v2\n"
Dec 13 02:09:01.457663 kubelet[1421]: E1213 02:09:01.457533 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:09:01.906333 kubelet[1421]: E1213 02:09:01.906308 1421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 02:09:01.907784 env[1205]: time="2024-12-13T02:09:01.907753396Z" level=info msg="CreateContainer within sandbox \"e106f45922b37bbc2b712606b6b93f8d8582a0a6adc5057cfb39eb2c2bb1a904\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 02:09:01.922615 env[1205]: time="2024-12-13T02:09:01.922565876Z" level=info msg="CreateContainer within sandbox \"e106f45922b37bbc2b712606b6b93f8d8582a0a6adc5057cfb39eb2c2bb1a904\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"718debac8d996ae98e9451a89ae308950fa2e09edb75387c7bc2cb91f87a506e\""
Dec 13 02:09:01.923040 env[1205]: time="2024-12-13T02:09:01.923000221Z" level=info msg="StartContainer for \"718debac8d996ae98e9451a89ae308950fa2e09edb75387c7bc2cb91f87a506e\""
Dec 13 02:09:01.938843 systemd[1]: Started cri-containerd-718debac8d996ae98e9451a89ae308950fa2e09edb75387c7bc2cb91f87a506e.scope.
Dec 13 02:09:01.963314 systemd[1]: cri-containerd-718debac8d996ae98e9451a89ae308950fa2e09edb75387c7bc2cb91f87a506e.scope: Deactivated successfully.
Dec 13 02:09:01.967420 env[1205]: time="2024-12-13T02:09:01.967382186Z" level=info msg="StartContainer for \"718debac8d996ae98e9451a89ae308950fa2e09edb75387c7bc2cb91f87a506e\" returns successfully"
Dec 13 02:09:02.018160 env[1205]: time="2024-12-13T02:09:02.018115904Z" level=info msg="shim disconnected" id=718debac8d996ae98e9451a89ae308950fa2e09edb75387c7bc2cb91f87a506e
Dec 13 02:09:02.018160 env[1205]: time="2024-12-13T02:09:02.018157402Z" level=warning msg="cleaning up after shim disconnected" id=718debac8d996ae98e9451a89ae308950fa2e09edb75387c7bc2cb91f87a506e namespace=k8s.io
Dec 13 02:09:02.018160 env[1205]: time="2024-12-13T02:09:02.018168442Z" level=info msg="cleaning up dead shim"
Dec 13 02:09:02.024644 env[1205]: time="2024-12-13T02:09:02.024582182Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:09:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3286 runtime=io.containerd.runc.v2\n"
Dec 13 02:09:02.439857 env[1205]: time="2024-12-13T02:09:02.439783245Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:09:02.441794 env[1205]: time="2024-12-13T02:09:02.441734989Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:09:02.443374 env[1205]: time="2024-12-13T02:09:02.443345953Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:09:02.443847 env[1205]: time="2024-12-13T02:09:02.443820284Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Dec 13 02:09:02.445777 env[1205]: time="2024-12-13T02:09:02.445754776Z" level=info msg="CreateContainer within sandbox \"40455edd2a85d6135657dd84d34da51ab0b2e7e3e212c48cc098f33ec0dd4a0f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Dec 13 02:09:02.457076 env[1205]: time="2024-12-13T02:09:02.457040733Z" level=info msg="CreateContainer within sandbox \"40455edd2a85d6135657dd84d34da51ab0b2e7e3e212c48cc098f33ec0dd4a0f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"8400320af02c67158b07882310a7ff807f072138bc60d0cbe2e1d8495cab0d29\""
Dec 13 02:09:02.457482 env[1205]: time="2024-12-13T02:09:02.457438029Z" level=info msg="StartContainer for \"8400320af02c67158b07882310a7ff807f072138bc60d0cbe2e1d8495cab0d29\""
Dec 13 02:09:02.457737 kubelet[1421]: E1213 02:09:02.457712 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:09:02.468980 systemd[1]: Started cri-containerd-8400320af02c67158b07882310a7ff807f072138bc60d0cbe2e1d8495cab0d29.scope.
Dec 13 02:09:02.490305 env[1205]: time="2024-12-13T02:09:02.490264104Z" level=info msg="StartContainer for \"8400320af02c67158b07882310a7ff807f072138bc60d0cbe2e1d8495cab0d29\" returns successfully"
Dec 13 02:09:02.758522 kubelet[1421]: E1213 02:09:02.758476 1421 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 02:09:02.823367 systemd[1]: run-containerd-runc-k8s.io-718debac8d996ae98e9451a89ae308950fa2e09edb75387c7bc2cb91f87a506e-runc.hmhfHZ.mount: Deactivated successfully.
Dec 13 02:09:02.823451 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-718debac8d996ae98e9451a89ae308950fa2e09edb75387c7bc2cb91f87a506e-rootfs.mount: Deactivated successfully.
Dec 13 02:09:02.910736 kubelet[1421]: E1213 02:09:02.910702 1421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 02:09:02.911900 kubelet[1421]: E1213 02:09:02.911887 1421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 02:09:02.912466 env[1205]: time="2024-12-13T02:09:02.912430242Z" level=info msg="CreateContainer within sandbox \"e106f45922b37bbc2b712606b6b93f8d8582a0a6adc5057cfb39eb2c2bb1a904\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 02:09:03.012836 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2621524933.mount: Deactivated successfully.
Dec 13 02:09:03.014653 env[1205]: time="2024-12-13T02:09:03.014603401Z" level=info msg="CreateContainer within sandbox \"e106f45922b37bbc2b712606b6b93f8d8582a0a6adc5057cfb39eb2c2bb1a904\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"503f886bb7a1521b64937f2ff78a68b2c8b7368e128b8c1b2b17984e1c570fc3\""
Dec 13 02:09:03.015167 env[1205]: time="2024-12-13T02:09:03.015124950Z" level=info msg="StartContainer for \"503f886bb7a1521b64937f2ff78a68b2c8b7368e128b8c1b2b17984e1c570fc3\""
Dec 13 02:09:03.033976 systemd[1]: Started cri-containerd-503f886bb7a1521b64937f2ff78a68b2c8b7368e128b8c1b2b17984e1c570fc3.scope.
Dec 13 02:09:03.458783 kubelet[1421]: E1213 02:09:03.458680 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:09:03.466341 env[1205]: time="2024-12-13T02:09:03.466274995Z" level=info msg="StartContainer for \"503f886bb7a1521b64937f2ff78a68b2c8b7368e128b8c1b2b17984e1c570fc3\" returns successfully"
Dec 13 02:09:03.524238 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Dec 13 02:09:03.823399 systemd[1]: run-containerd-runc-k8s.io-503f886bb7a1521b64937f2ff78a68b2c8b7368e128b8c1b2b17984e1c570fc3-runc.Z9AUpG.mount: Deactivated successfully.
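The kubelet.go:2900 "Container runtime network not ready ... cni plugin not initialized" condition above clears once the Cilium agent started here writes a CNI configuration into the runtime's config directory. Below is a small Go sketch of that readiness test, assuming the default /etc/cni/net.d directory; it illustrates the condition rather than reproducing the containerd/kubelet code.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Assumed default CNI configuration directory.
	entries, err := os.ReadDir("/etc/cni/net.d")
	if err != nil {
		fmt.Println("network not ready: cni plugin not initialized:", err)
		return
	}
	// Any .conf/.conflist file counts as an installed network configuration.
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("cni config found:", e.Name())
			return
		}
	}
	fmt.Println("network not ready: no cni config in /etc/cni/net.d")
}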
Dec 13 02:09:03.916864 kubelet[1421]: E1213 02:09:03.916827 1421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 02:09:03.917405 kubelet[1421]: E1213 02:09:03.917385 1421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 02:09:03.932544 kubelet[1421]: I1213 02:09:03.932477 1421 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mf5lk" podStartSLOduration=5.932456867 podStartE2EDuration="5.932456867s" podCreationTimestamp="2024-12-13 02:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:09:03.93187257 +0000 UTC m=+76.846548313" watchObservedRunningTime="2024-12-13 02:09:03.932456867 +0000 UTC m=+76.847132600"
Dec 13 02:09:03.932855 kubelet[1421]: I1213 02:09:03.932812 1421 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-gvbtz" podStartSLOduration=2.683469332 podStartE2EDuration="6.93280472s" podCreationTimestamp="2024-12-13 02:08:57 +0000 UTC" firstStartedPulling="2024-12-13 02:08:58.195302892 +0000 UTC m=+71.109978625" lastFinishedPulling="2024-12-13 02:09:02.44463828 +0000 UTC m=+75.359314013" observedRunningTime="2024-12-13 02:09:03.010413235 +0000 UTC m=+75.925088968" watchObservedRunningTime="2024-12-13 02:09:03.93280472 +0000 UTC m=+76.847480483"
Dec 13 02:09:04.459313 kubelet[1421]: E1213 02:09:04.459261 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:09:05.241330 kubelet[1421]: E1213 02:09:05.241290 1421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 02:09:05.459641 kubelet[1421]: E1213 02:09:05.459607 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:09:05.957810 systemd-networkd[1039]: lxc_health: Link UP
Dec 13 02:09:05.969538 systemd-networkd[1039]: lxc_health: Gained carrier
Dec 13 02:09:05.970345 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 02:09:06.098925 systemd[1]: run-containerd-runc-k8s.io-503f886bb7a1521b64937f2ff78a68b2c8b7368e128b8c1b2b17984e1c570fc3-runc.wATXWH.mount: Deactivated successfully.
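The two pod_startup_latency_tracker.go:104 entries above share the same arithmetic: the end-to-end figure spans pod creation to the observed running time, and the SLO figure additionally subtracts the image-pull window. For cilium-operator-599987898-gvbtz, 6.93280472 s minus the pull time (02:09:02.44463828 − 02:08:58.195302892 ≈ 4.249335388 s) gives exactly the logged 2.683469332 s; for cilium-mf5lk the pull timestamps are zero values, so the two durations coincide. The short Go snippet below reproduces the cilium-operator numbers from the timestamps in that entry; it is an interpretation of the logged values, not kubelet's code.

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	// Timestamps copied from the cilium-operator entry above.
	created := parse("2024-12-13 02:08:57 +0000 UTC")
	pullStart := parse("2024-12-13 02:08:58.195302892 +0000 UTC")
	pullEnd := parse("2024-12-13 02:09:02.44463828 +0000 UTC")
	running := parse("2024-12-13 02:09:03.93280472 +0000 UTC")

	e2e := running.Sub(created)         // podStartE2EDuration: 6.93280472s
	slo := e2e - pullEnd.Sub(pullStart) // podStartSLOduration: 2.683469332s
	fmt.Println(e2e, slo)
}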
Dec 13 02:09:06.460279 kubelet[1421]: E1213 02:09:06.460245 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:09:07.242309 kubelet[1421]: E1213 02:09:07.242268 1421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 02:09:07.265015 systemd-networkd[1039]: lxc_health: Gained IPv6LL
Dec 13 02:09:07.415125 kubelet[1421]: E1213 02:09:07.415062 1421 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:09:07.460453 kubelet[1421]: E1213 02:09:07.460398 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:09:07.923110 kubelet[1421]: E1213 02:09:07.923077 1421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 02:09:08.460832 kubelet[1421]: E1213 02:09:08.460776 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:09:08.924292 kubelet[1421]: E1213 02:09:08.924258 1421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 02:09:09.461621 kubelet[1421]: E1213 02:09:09.461560 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:09:10.277967 systemd[1]: run-containerd-runc-k8s.io-503f886bb7a1521b64937f2ff78a68b2c8b7368e128b8c1b2b17984e1c570fc3-runc.A5phIQ.mount: Deactivated successfully.
Dec 13 02:09:10.461783 kubelet[1421]: E1213 02:09:10.461731 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:09:10.770584 kubelet[1421]: E1213 02:09:10.770539 1421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 02:09:11.462501 kubelet[1421]: E1213 02:09:11.462429 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:09:12.450788 systemd[1]: run-containerd-runc-k8s.io-503f886bb7a1521b64937f2ff78a68b2c8b7368e128b8c1b2b17984e1c570fc3-runc.faFtqA.mount: Deactivated successfully.
Dec 13 02:09:12.463431 kubelet[1421]: E1213 02:09:12.463288 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:09:13.463724 kubelet[1421]: E1213 02:09:13.463665 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
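The recurring file_linux.go:61 "Unable to read config path" errors closing this excerpt mean only that the kubelet's static-pod manifest path does not exist on this node; kubelet keeps re-checking and logging the same message until it appears. Assuming the path from the log and that static pods are wanted at all, creating the directory is enough to quiet it, e.g.:

package main

import (
	"log"
	"os"
)

func main() {
	// Create the static-pod manifest directory the kubelet config points at
	// (path taken from the log above); the error stops once the path exists.
	if err := os.MkdirAll("/etc/kubernetes/manifests", 0o755); err != nil {
		log.Fatal(err)
	}
}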