Dec 13 14:27:08.937319 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Dec 13 12:55:10 -00 2024
Dec 13 14:27:08.937343 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 14:27:08.937353 kernel: BIOS-provided physical RAM map:
Dec 13 14:27:08.937361 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 14:27:08.937367 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 14:27:08.937375 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 14:27:08.937383 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Dec 13 14:27:08.937391 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Dec 13 14:27:08.937400 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 13 14:27:08.937407 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Dec 13 14:27:08.937414 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 13 14:27:08.937422 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 14:27:08.937429 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Dec 13 14:27:08.937436 kernel: NX (Execute Disable) protection: active
Dec 13 14:27:08.937447 kernel: SMBIOS 2.8 present.
Dec 13 14:27:08.937455 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Dec 13 14:27:08.937462 kernel: Hypervisor detected: KVM
Dec 13 14:27:08.937470 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 14:27:08.937478 kernel: kvm-clock: cpu 0, msr 1f19a001, primary cpu clock
Dec 13 14:27:08.937485 kernel: kvm-clock: using sched offset of 2617023273 cycles
Dec 13 14:27:08.937494 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 14:27:08.937502 kernel: tsc: Detected 2794.748 MHz processor
Dec 13 14:27:08.937510 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 14:27:08.937520 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 14:27:08.937528 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Dec 13 14:27:08.937536 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 14:27:08.937544 kernel: Using GB pages for direct mapping
Dec 13 14:27:08.937552 kernel: ACPI: Early table checksum verification disabled
Dec 13 14:27:08.937560 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Dec 13 14:27:08.937568 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:27:08.937576 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:27:08.937584 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:27:08.937594 kernel: ACPI: FACS 0x000000009CFE0000 000040
Dec 13 14:27:08.937602 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:27:08.937610 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:27:08.937619 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:27:08.937627 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:27:08.937635 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Dec 13 14:27:08.937643 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Dec 13 14:27:08.937651 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Dec 13 14:27:08.937664 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Dec 13 14:27:08.937672 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Dec 13 14:27:08.937681 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Dec 13 14:27:08.937700 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Dec 13 14:27:08.937708 kernel: No NUMA configuration found
Dec 13 14:27:08.937717 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Dec 13 14:27:08.937727 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Dec 13 14:27:08.937736 kernel: Zone ranges:
Dec 13 14:27:08.937744 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 14:27:08.937753 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Dec 13 14:27:08.937762 kernel: Normal empty
Dec 13 14:27:08.937770 kernel: Movable zone start for each node
Dec 13 14:27:08.937779 kernel: Early memory node ranges
Dec 13 14:27:08.937787 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 14:27:08.937796 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Dec 13 14:27:08.937806 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Dec 13 14:27:08.937815 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 14:27:08.937823 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 14:27:08.937832 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Dec 13 14:27:08.937840 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 14:27:08.937849 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 14:27:08.937857 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 14:27:08.937866 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 14:27:08.937874 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 14:27:08.937883 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 14:27:08.937893 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 14:27:08.937902 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 14:27:08.937910 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 14:27:08.937919 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 14:27:08.937927 kernel: TSC deadline timer available
Dec 13 14:27:08.937936 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Dec 13 14:27:08.937944 kernel: kvm-guest: KVM setup pv remote TLB flush
Dec 13 14:27:08.937952 kernel: kvm-guest: setup PV sched yield
Dec 13 14:27:08.937961 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Dec 13 14:27:08.937984 kernel: Booting paravirtualized kernel on KVM
Dec 13 14:27:08.937993 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 14:27:08.938001 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
Dec 13 14:27:08.938010 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288
Dec 13 14:27:08.938019 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152
Dec 13 14:27:08.938027 kernel: pcpu-alloc: [0] 0 1 2 3
Dec 13 14:27:08.938035 kernel: kvm-guest: setup async PF for cpu 0
Dec 13 14:27:08.938044 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0
Dec 13 14:27:08.938052 kernel: kvm-guest: PV spinlocks enabled
Dec 13 14:27:08.938063 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 14:27:08.938072 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Dec 13 14:27:08.938080 kernel: Policy zone: DMA32
Dec 13 14:27:08.938090 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 14:27:08.938099 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 14:27:08.938108 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 14:27:08.938117 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 14:27:08.938125 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 14:27:08.938136 kernel: Memory: 2436696K/2571752K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47472K init, 4112K bss, 134796K reserved, 0K cma-reserved)
Dec 13 14:27:08.938145 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 13 14:27:08.938153 kernel: ftrace: allocating 34549 entries in 135 pages
Dec 13 14:27:08.938162 kernel: ftrace: allocated 135 pages with 4 groups
Dec 13 14:27:08.938170 kernel: rcu: Hierarchical RCU implementation.
Dec 13 14:27:08.938180 kernel: rcu: RCU event tracing is enabled.
Dec 13 14:27:08.938188 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 13 14:27:08.938197 kernel: Rude variant of Tasks RCU enabled.
Dec 13 14:27:08.938205 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 14:27:08.938216 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 14:27:08.938225 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 13 14:27:08.938233 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Dec 13 14:27:08.938241 kernel: random: crng init done
Dec 13 14:27:08.938250 kernel: Console: colour VGA+ 80x25
Dec 13 14:27:08.938258 kernel: printk: console [ttyS0] enabled
Dec 13 14:27:08.938267 kernel: ACPI: Core revision 20210730
Dec 13 14:27:08.938276 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 13 14:27:08.938284 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 14:27:08.938294 kernel: x2apic enabled
Dec 13 14:27:08.938303 kernel: Switched APIC routing to physical x2apic.
Dec 13 14:27:08.938311 kernel: kvm-guest: setup PV IPIs
Dec 13 14:27:08.938320 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 13 14:27:08.938328 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 13 14:27:08.938337 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Dec 13 14:27:08.938346 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 13 14:27:08.938355 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 13 14:27:08.938363 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 13 14:27:08.938379 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 14:27:08.938388 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 14:27:08.938397 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 14:27:08.938408 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 14:27:08.938417 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 13 14:27:08.938428 kernel: RETBleed: Mitigation: untrained return thunk
Dec 13 14:27:08.938437 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 14:27:08.938447 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Dec 13 14:27:08.938459 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 14:27:08.938471 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 14:27:08.938481 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 14:27:08.938490 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 14:27:08.938508 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Dec 13 14:27:08.938519 kernel: Freeing SMP alternatives memory: 32K
Dec 13 14:27:08.938536 kernel: pid_max: default: 32768 minimum: 301
Dec 13 14:27:08.938556 kernel: LSM: Security Framework initializing
Dec 13 14:27:08.938570 kernel: SELinux: Initializing.
Dec 13 14:27:08.938580 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 14:27:08.938589 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 14:27:08.938598 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 13 14:27:08.938607 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 13 14:27:08.938616 kernel: ... version: 0
Dec 13 14:27:08.938625 kernel: ... bit width: 48
Dec 13 14:27:08.938634 kernel: ... generic registers: 6
Dec 13 14:27:08.938643 kernel: ... value mask: 0000ffffffffffff
Dec 13 14:27:08.938654 kernel: ... max period: 00007fffffffffff
Dec 13 14:27:08.938663 kernel: ... fixed-purpose events: 0
Dec 13 14:27:08.938672 kernel: ... event mask: 000000000000003f
Dec 13 14:27:08.938681 kernel: signal: max sigframe size: 1776
Dec 13 14:27:08.938699 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 14:27:08.938709 kernel: smp: Bringing up secondary CPUs ...
Dec 13 14:27:08.938718 kernel: x86: Booting SMP configuration:
Dec 13 14:27:08.938727 kernel: .... node #0, CPUs: #1
Dec 13 14:27:08.938736 kernel: kvm-clock: cpu 1, msr 1f19a041, secondary cpu clock
Dec 13 14:27:08.938749 kernel: kvm-guest: setup async PF for cpu 1
Dec 13 14:27:08.938760 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0
Dec 13 14:27:08.938769 kernel: #2
Dec 13 14:27:08.938778 kernel: kvm-clock: cpu 2, msr 1f19a081, secondary cpu clock
Dec 13 14:27:08.938786 kernel: kvm-guest: setup async PF for cpu 2
Dec 13 14:27:08.938795 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0
Dec 13 14:27:08.938804 kernel: #3
Dec 13 14:27:08.938813 kernel: kvm-clock: cpu 3, msr 1f19a0c1, secondary cpu clock
Dec 13 14:27:08.938822 kernel: kvm-guest: setup async PF for cpu 3
Dec 13 14:27:08.938831 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0
Dec 13 14:27:08.938842 kernel: smp: Brought up 1 node, 4 CPUs
Dec 13 14:27:08.938852 kernel: smpboot: Max logical packages: 1
Dec 13 14:27:08.938861 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Dec 13 14:27:08.938870 kernel: devtmpfs: initialized
Dec 13 14:27:08.938879 kernel: x86/mm: Memory block size: 128MB
Dec 13 14:27:08.938889 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 14:27:08.938899 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 13 14:27:08.938909 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 14:27:08.938918 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 14:27:08.938930 kernel: audit: initializing netlink subsys (disabled)
Dec 13 14:27:08.938940 kernel: audit: type=2000 audit(1734100028.593:1): state=initialized audit_enabled=0 res=1
Dec 13 14:27:08.938949 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 14:27:08.938958 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 14:27:08.938980 kernel: cpuidle: using governor menu
Dec 13 14:27:08.938990 kernel: ACPI: bus type PCI registered
Dec 13 14:27:08.938999 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 14:27:08.939008 kernel: dca service started, version 1.12.1
Dec 13 14:27:08.939017 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Dec 13 14:27:08.939030 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820
Dec 13 14:27:08.939040 kernel: PCI: Using configuration type 1 for base access
Dec 13 14:27:08.939050 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 14:27:08.939060 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 14:27:08.939069 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 14:27:08.939079 kernel: ACPI: Added _OSI(Module Device)
Dec 13 14:27:08.939089 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 14:27:08.939099 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 14:27:08.939108 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 14:27:08.939120 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 14:27:08.939130 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 14:27:08.939139 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 14:27:08.939149 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 14:27:08.939159 kernel: ACPI: Interpreter enabled
Dec 13 14:27:08.939168 kernel: ACPI: PM: (supports S0 S3 S5)
Dec 13 14:27:08.939178 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 14:27:08.939189 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 14:27:08.939199 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 13 14:27:08.939211 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 14:27:08.939360 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 14:27:08.939464 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Dec 13 14:27:08.939563 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Dec 13 14:27:08.939577 kernel: PCI host bridge to bus 0000:00
Dec 13 14:27:08.939681 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 14:27:08.939786 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 14:27:08.939880 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 14:27:08.939983 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Dec 13 14:27:08.940077 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 13 14:27:08.940165 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Dec 13 14:27:08.940256 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 14:27:08.940370 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Dec 13 14:27:08.940509 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Dec 13 14:27:08.940618 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Dec 13 14:27:08.940754 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Dec 13 14:27:08.940865 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Dec 13 14:27:08.940965 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 14:27:08.941086 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Dec 13 14:27:08.941175 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Dec 13 14:27:08.941268 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Dec 13 14:27:08.941359 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Dec 13 14:27:08.941459 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Dec 13 14:27:08.941546 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Dec 13 14:27:08.941632 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Dec 13 14:27:08.941736 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Dec 13 14:27:08.941844 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Dec 13 14:27:08.941938 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Dec 13 14:27:08.942041 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Dec 13 14:27:08.942129 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Dec 13 14:27:08.942216 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Dec 13 14:27:08.942313 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Dec 13 14:27:08.942399 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 13 14:27:08.942491 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Dec 13 14:27:08.942580 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Dec 13 14:27:08.942675 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Dec 13 14:27:08.942791 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Dec 13 14:27:08.942891 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Dec 13 14:27:08.942906 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 14:27:08.942917 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 14:27:08.942927 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 14:27:08.942939 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 14:27:08.942949 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 13 14:27:08.942958 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 13 14:27:08.942967 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 13 14:27:08.942989 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 13 14:27:08.942999 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 13 14:27:08.943008 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 13 14:27:08.943018 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 13 14:27:08.943028 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 13 14:27:08.943040 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 13 14:27:08.943050 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 13 14:27:08.943060 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 13 14:27:08.943070 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 13 14:27:08.943080 kernel: iommu: Default domain type: Translated
Dec 13 14:27:08.943090 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 14:27:08.943192 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 13 14:27:08.943293 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 14:27:08.943398 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 13 14:27:08.943412 kernel: vgaarb: loaded
Dec 13 14:27:08.943423 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 14:27:08.943433 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 13 14:27:08.943443 kernel: PTP clock support registered
Dec 13 14:27:08.943453 kernel: PCI: Using ACPI for IRQ routing
Dec 13 14:27:08.943462 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 14:27:08.943472 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 14:27:08.943482 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Dec 13 14:27:08.943495 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 13 14:27:08.943505 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 13 14:27:08.943514 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 14:27:08.943524 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 14:27:08.943534 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 14:27:08.943544 kernel: pnp: PnP ACPI init
Dec 13 14:27:08.943647 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 13 14:27:08.943662 kernel: pnp: PnP ACPI: found 6 devices
Dec 13 14:27:08.943673 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 14:27:08.943685 kernel: NET: Registered PF_INET protocol family
Dec 13 14:27:08.943707 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 14:27:08.943717 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 14:27:08.943726 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 14:27:08.943736 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 14:27:08.943745 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Dec 13 14:27:08.943755 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 14:27:08.943765 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 14:27:08.943777 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 14:27:08.943787 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 14:27:08.943797 kernel: NET: Registered PF_XDP protocol family
Dec 13 14:27:08.943886 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 14:27:08.943986 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 14:27:08.944078 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 14:27:08.944163 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Dec 13 14:27:08.944250 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 13 14:27:08.944334 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Dec 13 14:27:08.944351 kernel: PCI: CLS 0 bytes, default 64
Dec 13 14:27:08.944362 kernel: Initialise system trusted keyrings
Dec 13 14:27:08.944371 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 14:27:08.944381 kernel: Key type asymmetric registered
Dec 13 14:27:08.944391 kernel: Asymmetric key parser 'x509' registered
Dec 13 14:27:08.944401 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 13 14:27:08.944411 kernel: io scheduler mq-deadline registered
Dec 13 14:27:08.944421 kernel: io scheduler kyber registered
Dec 13 14:27:08.944431 kernel: io scheduler bfq registered
Dec 13 14:27:08.944443 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 14:27:08.944454 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 13 14:27:08.944464 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Dec 13 14:27:08.944474 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Dec 13 14:27:08.944484 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 14:27:08.944494 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 14:27:08.944504 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 14:27:08.944513 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 14:27:08.944523 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 14:27:08.944630 kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 13 14:27:08.944732 kernel: rtc_cmos 00:04: registered as rtc0
Dec 13 14:27:08.944747 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 14:27:08.944835 kernel: rtc_cmos 00:04: setting system clock to 2024-12-13T14:27:08 UTC (1734100028)
Dec 13 14:27:08.944922 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Dec 13 14:27:08.944935 kernel: NET: Registered PF_INET6 protocol family
Dec 13 14:27:08.944945 kernel: Segment Routing with IPv6
Dec 13 14:27:08.944955 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 14:27:08.944968 kernel: NET: Registered PF_PACKET protocol family
Dec 13 14:27:08.944991 kernel: Key type dns_resolver registered
Dec 13 14:27:08.945000 kernel: IPI shorthand broadcast: enabled
Dec 13 14:27:08.945011 kernel: sched_clock: Marking stable (420564493, 112581637)->(580804386, -47658256)
Dec 13 14:27:08.945020 kernel: registered taskstats version 1
Dec 13 14:27:08.945030 kernel: Loading compiled-in X.509 certificates
Dec 13 14:27:08.945040 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: e1d88c9e01f5bb2adeb5b99325e46e5ca8dff115'
Dec 13 14:27:08.945050 kernel: Key type .fscrypt registered
Dec 13 14:27:08.945060 kernel: Key type fscrypt-provisioning registered
Dec 13 14:27:08.945072 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 14:27:08.945082 kernel: ima: Allocated hash algorithm: sha1
Dec 13 14:27:08.945092 kernel: ima: No architecture policies found
Dec 13 14:27:08.945101 kernel: clk: Disabling unused clocks
Dec 13 14:27:08.945110 kernel: Freeing unused kernel image (initmem) memory: 47472K
Dec 13 14:27:08.945120 kernel: Write protecting the kernel read-only data: 28672k
Dec 13 14:27:08.945129 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Dec 13 14:27:08.945139 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K
Dec 13 14:27:08.945151 kernel: Run /init as init process
Dec 13 14:27:08.945161 kernel: with arguments:
Dec 13 14:27:08.945171 kernel: /init
Dec 13 14:27:08.945181 kernel: with environment:
Dec 13 14:27:08.945190 kernel: HOME=/
Dec 13 14:27:08.945200 kernel: TERM=linux
Dec 13 14:27:08.945209 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 14:27:08.945222 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 14:27:08.945235 systemd[1]: Detected virtualization kvm.
Dec 13 14:27:08.945248 systemd[1]: Detected architecture x86-64.
Dec 13 14:27:08.945259 systemd[1]: Running in initrd.
Dec 13 14:27:08.945269 systemd[1]: No hostname configured, using default hostname.
Dec 13 14:27:08.945280 systemd[1]: Hostname set to .
Dec 13 14:27:08.945291 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 14:27:08.945303 systemd[1]: Queued start job for default target initrd.target.
Dec 13 14:27:08.945314 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 14:27:08.945326 systemd[1]: Reached target cryptsetup.target.
Dec 13 14:27:08.945339 systemd[1]: Reached target paths.target.
Dec 13 14:27:08.945358 systemd[1]: Reached target slices.target.
Dec 13 14:27:08.945370 systemd[1]: Reached target swap.target.
Dec 13 14:27:08.945381 systemd[1]: Reached target timers.target.
Dec 13 14:27:08.945392 systemd[1]: Listening on iscsid.socket.
Dec 13 14:27:08.945405 systemd[1]: Listening on iscsiuio.socket.
Dec 13 14:27:08.945416 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 14:27:08.945427 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 14:27:08.945438 systemd[1]: Listening on systemd-journald.socket.
Dec 13 14:27:08.945449 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 14:27:08.945461 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 14:27:08.945471 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 14:27:08.945481 systemd[1]: Reached target sockets.target.
Dec 13 14:27:08.945492 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 14:27:08.945504 systemd[1]: Finished network-cleanup.service.
Dec 13 14:27:08.945515 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 14:27:08.945526 systemd[1]: Starting systemd-journald.service...
Dec 13 14:27:08.945537 systemd[1]: Starting systemd-modules-load.service...
Dec 13 14:27:08.945549 systemd[1]: Starting systemd-resolved.service...
Dec 13 14:27:08.945560 systemd[1]: Starting systemd-vconsole-setup.service...
Dec 13 14:27:08.945572 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 14:27:08.945583 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 14:27:08.945596 systemd-journald[198]: Journal started
Dec 13 14:27:08.945650 systemd-journald[198]: Runtime Journal (/run/log/journal/d0490c01f81b40aab7b9fbe72514cd25) is 6.0M, max 48.5M, 42.5M free.
Dec 13 14:27:08.936349 systemd-modules-load[199]: Inserted module 'overlay'
Dec 13 14:27:08.979103 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 14:27:08.979135 kernel: Bridge firewalling registered
Dec 13 14:27:08.964633 systemd-resolved[200]: Positive Trust Anchors:
Dec 13 14:27:08.983861 systemd[1]: Started systemd-journald.service.
Dec 13 14:27:08.983881 kernel: audit: type=1130 audit(1734100028.979:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:08.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:08.964643 systemd-resolved[200]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 14:27:08.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:08.964670 systemd-resolved[200]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 14:27:08.999298 kernel: audit: type=1130 audit(1734100028.986:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:08.999315 kernel: audit: type=1130 audit(1734100028.990:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:08.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:08.966853 systemd-resolved[200]: Defaulting to hostname 'linux'.
Dec 13 14:27:09.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:08.978125 systemd-modules-load[199]: Inserted module 'br_netfilter'
Dec 13 14:27:09.006846 kernel: audit: type=1130 audit(1734100029.001:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:09.006868 kernel: SCSI subsystem initialized
Dec 13 14:27:08.986273 systemd[1]: Started systemd-resolved.service.
Dec 13 14:27:08.996653 systemd[1]: Finished systemd-vconsole-setup.service.
Dec 13 14:27:09.001445 systemd[1]: Reached target nss-lookup.target.
Dec 13 14:27:09.010471 systemd[1]: Starting dracut-cmdline-ask.service...
Dec 13 14:27:09.013124 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 14:27:09.018073 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 14:27:09.018109 kernel: device-mapper: uevent: version 1.0.3
Dec 13 14:27:09.019411 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Dec 13 14:27:09.019949 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 14:27:09.019000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:09.025447 kernel: audit: type=1130 audit(1734100029.019:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:09.025768 systemd-modules-load[199]: Inserted module 'dm_multipath'
Dec 13 14:27:09.026432 systemd[1]: Finished systemd-modules-load.service.
Dec 13 14:27:09.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:09.027331 systemd[1]: Starting systemd-sysctl.service...
Dec 13 14:27:09.030990 kernel: audit: type=1130 audit(1734100029.025:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:09.039171 systemd[1]: Finished systemd-sysctl.service.
Dec 13 14:27:09.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:09.040586 systemd[1]: Finished dracut-cmdline-ask.service.
Dec 13 14:27:09.048138 kernel: audit: type=1130 audit(1734100029.040:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:09.048166 kernel: audit: type=1130 audit(1734100029.044:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:09.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:09.048951 systemd[1]: Starting dracut-cmdline.service...
Dec 13 14:27:09.058102 dracut-cmdline[221]: dracut-dracut-053
Dec 13 14:27:09.060631 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 14:27:09.120078 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 14:27:09.141018 kernel: iscsi: registered transport (tcp)
Dec 13 14:27:09.163033 kernel: iscsi: registered transport (qla4xxx)
Dec 13 14:27:09.163115 kernel: QLogic iSCSI HBA Driver
Dec 13 14:27:09.183246 systemd[1]: Finished dracut-cmdline.service.
Dec 13 14:27:09.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:09.186018 systemd[1]: Starting dracut-pre-udev.service...
Dec 13 14:27:09.189701 kernel: audit: type=1130 audit(1734100029.184:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:09.230002 kernel: raid6: avx2x4 gen() 29594 MB/s
Dec 13 14:27:09.246996 kernel: raid6: avx2x4 xor() 7409 MB/s
Dec 13 14:27:09.263999 kernel: raid6: avx2x2 gen() 31327 MB/s
Dec 13 14:27:09.281012 kernel: raid6: avx2x2 xor() 17146 MB/s
Dec 13 14:27:09.298031 kernel: raid6: avx2x1 gen() 21118 MB/s
Dec 13 14:27:09.315008 kernel: raid6: avx2x1 xor() 12848 MB/s
Dec 13 14:27:09.331995 kernel: raid6: sse2x4 gen() 14110 MB/s
Dec 13 14:27:09.348991 kernel: raid6: sse2x4 xor() 6930 MB/s
Dec 13 14:27:09.365990 kernel: raid6: sse2x2 gen() 16236 MB/s
Dec 13 14:27:09.383007 kernel: raid6: sse2x2 xor() 9010 MB/s
Dec 13 14:27:09.399993 kernel: raid6: sse2x1 gen() 10926 MB/s
Dec 13 14:27:09.417544 kernel: raid6: sse2x1 xor() 7352 MB/s
Dec 13 14:27:09.417619 kernel: raid6: using algorithm avx2x2 gen() 31327 MB/s
Dec 13 14:27:09.417641 kernel: raid6: .... xor() 17146 MB/s, rmw enabled
Dec 13 14:27:09.418305 kernel: raid6: using avx2x2 recovery algorithm
Dec 13 14:27:09.432006 kernel: xor: automatically using best checksumming function avx
Dec 13 14:27:09.526009 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Dec 13 14:27:09.534606 systemd[1]: Finished dracut-pre-udev.service.
Dec 13 14:27:09.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:09.535000 audit: BPF prog-id=7 op=LOAD
Dec 13 14:27:09.535000 audit: BPF prog-id=8 op=LOAD
Dec 13 14:27:09.536834 systemd[1]: Starting systemd-udevd.service...
Dec 13 14:27:09.548483 systemd-udevd[399]: Using default interface naming scheme 'v252'.
Dec 13 14:27:09.552247 systemd[1]: Started systemd-udevd.service.
Dec 13 14:27:09.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:09.552965 systemd[1]: Starting dracut-pre-trigger.service...
Dec 13 14:27:09.563712 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation
Dec 13 14:27:09.585688 systemd[1]: Finished dracut-pre-trigger.service.
Dec 13 14:27:09.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:09.588257 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 14:27:09.635771 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 14:27:09.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:09.673916 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 14:27:09.673965 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Dec 13 14:27:09.693448 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 14:27:09.693464 kernel: libata version 3.00 loaded.
Dec 13 14:27:09.693474 kernel: AES CTR mode by8 optimization enabled
Dec 13 14:27:09.693487 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 14:27:09.693496 kernel: GPT:9289727 != 19775487
Dec 13 14:27:09.693504 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 14:27:09.693513 kernel: GPT:9289727 != 19775487
Dec 13 14:27:09.693521 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 14:27:09.693529 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 14:27:09.756998 kernel: ahci 0000:00:1f.2: version 3.0
Dec 13 14:27:09.771935 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Dec 13 14:27:09.771964 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Dec 13 14:27:09.772123 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Dec 13 14:27:09.772202 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (449)
Dec 13 14:27:09.772215 kernel: scsi host0: ahci
Dec 13 14:27:09.772329 kernel: scsi host1: ahci
Dec 13 14:27:09.772432 kernel: scsi host2: ahci
Dec 13 14:27:09.772512 kernel: scsi host3: ahci
Dec 13 14:27:09.772607 kernel: scsi host4: ahci
Dec 13 14:27:09.772700 kernel: scsi host5: ahci
Dec 13 14:27:09.772779 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Dec 13 14:27:09.772789 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Dec 13 14:27:09.772798 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Dec 13 14:27:09.772809 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Dec 13 14:27:09.772820 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Dec 13 14:27:09.772834 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Dec 13 14:27:09.770263 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Dec 13 14:27:09.817701 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Dec 13 14:27:09.817815 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Dec 13 14:27:09.824387 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Dec 13 14:27:09.827439 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 14:27:09.829406 systemd[1]: Starting disk-uuid.service...
Dec 13 14:27:09.886845 disk-uuid[547]: Primary Header is updated.
Dec 13 14:27:09.886845 disk-uuid[547]: Secondary Entries is updated.
Dec 13 14:27:09.886845 disk-uuid[547]: Secondary Header is updated.
Dec 13 14:27:09.891280 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 14:27:09.894002 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 14:27:09.896994 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 14:27:10.088016 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Dec 13 14:27:10.088094 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Dec 13 14:27:10.088105 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Dec 13 14:27:10.090019 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Dec 13 14:27:10.091017 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Dec 13 14:27:10.092037 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Dec 13 14:27:10.093340 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec 13 14:27:10.093413 kernel: ata3.00: applying bridge limits
Dec 13 14:27:10.095152 kernel: ata3.00: configured for UDMA/100
Dec 13 14:27:10.096005 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Dec 13 14:27:10.125599 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec 13 14:27:10.143011 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 13 14:27:10.143032 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Dec 13 14:27:10.897808 disk-uuid[548]: The operation has completed successfully.
Dec 13 14:27:10.899270 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 14:27:10.917882 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 14:27:10.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:10.917000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:10.917986 systemd[1]: Finished disk-uuid.service.
Dec 13 14:27:10.927579 systemd[1]: Starting verity-setup.service...
Dec 13 14:27:10.940993 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Dec 13 14:27:10.963545 systemd[1]: Found device dev-mapper-usr.device.
Dec 13 14:27:10.964339 systemd[1]: Mounting sysusr-usr.mount...
Dec 13 14:27:10.968075 systemd[1]: Finished verity-setup.service.
Dec 13 14:27:10.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:11.035996 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Dec 13 14:27:11.036180 systemd[1]: Mounted sysusr-usr.mount.
Dec 13 14:27:11.036395 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Dec 13 14:27:11.037850 systemd[1]: Starting ignition-setup.service...
Dec 13 14:27:11.041994 systemd[1]: Starting parse-ip-for-networkd.service...
Dec 13 14:27:11.054966 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 14:27:11.055035 kernel: BTRFS info (device vda6): using free space tree
Dec 13 14:27:11.055045 kernel: BTRFS info (device vda6): has skinny extents
Dec 13 14:27:11.066138 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 14:27:11.075656 systemd[1]: Finished ignition-setup.service.
Dec 13 14:27:11.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:11.076559 systemd[1]: Starting ignition-fetch-offline.service...
Dec 13 14:27:11.125381 systemd[1]: Finished parse-ip-for-networkd.service.
Dec 13 14:27:11.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:11.126000 audit: BPF prog-id=9 op=LOAD
Dec 13 14:27:11.127615 systemd[1]: Starting systemd-networkd.service...
Dec 13 14:27:11.174624 ignition[658]: Ignition 2.14.0
Dec 13 14:27:11.174635 ignition[658]: Stage: fetch-offline
Dec 13 14:27:11.174696 ignition[658]: no configs at "/usr/lib/ignition/base.d"
Dec 13 14:27:11.174704 ignition[658]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 14:27:11.174819 ignition[658]: parsed url from cmdline: ""
Dec 13 14:27:11.174822 ignition[658]: no config URL provided
Dec 13 14:27:11.174827 ignition[658]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 14:27:11.174833 ignition[658]: no config at "/usr/lib/ignition/user.ign"
Dec 13 14:27:11.174853 ignition[658]: op(1): [started] loading QEMU firmware config module
Dec 13 14:27:11.174858 ignition[658]: op(1): executing: "modprobe" "qemu_fw_cfg"
Dec 13 14:27:11.179903 ignition[658]: op(1): [finished] loading QEMU firmware config module
Dec 13 14:27:11.183143 ignition[658]: parsing config with SHA512: 3496260c018c6d9bc2c9b9796cabbadc436ce7163465de6693f49d90d89fd972d9f91adaff9f96763d104ab3edb34a87ab92036d1bc004ae49b575598398e51a
Dec 13 14:27:11.183831 systemd-networkd[727]: lo: Link UP
Dec 13 14:27:11.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:11.183835 systemd-networkd[727]: lo: Gained carrier
Dec 13 14:27:11.184300 systemd-networkd[727]: Enumeration completed
Dec 13 14:27:11.184407 systemd[1]: Started systemd-networkd.service.
Dec 13 14:27:11.184571 systemd-networkd[727]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 14:27:11.185939 systemd[1]: Reached target network.target.
Dec 13 14:27:11.186035 systemd-networkd[727]: eth0: Link UP
Dec 13 14:27:11.186039 systemd-networkd[727]: eth0: Gained carrier
Dec 13 14:27:11.187789 systemd[1]: Starting iscsiuio.service...
Dec 13 14:27:11.201794 unknown[658]: fetched base config from "system"
Dec 13 14:27:11.201967 unknown[658]: fetched user config from "qemu"
Dec 13 14:27:11.203895 ignition[658]: fetch-offline: fetch-offline passed
Dec 13 14:27:11.204803 ignition[658]: Ignition finished successfully
Dec 13 14:27:11.206626 systemd[1]: Finished ignition-fetch-offline.service.
Dec 13 14:27:11.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:11.208410 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Dec 13 14:27:11.209157 systemd[1]: Starting ignition-kargs.service...
Dec 13 14:27:11.215934 systemd[1]: Started iscsiuio.service.
Dec 13 14:27:11.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:11.217064 systemd-networkd[727]: eth0: DHCPv4 address 10.0.0.107/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 14:27:11.219832 systemd[1]: Starting iscsid.service...
Dec 13 14:27:11.225278 iscsid[740]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 14:27:11.225278 iscsid[740]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log
Dec 13 14:27:11.225278 iscsid[740]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Dec 13 14:27:11.225278 iscsid[740]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Dec 13 14:27:11.225278 iscsid[740]: If using hardware iscsi like qla4xxx this message can be ignored.
Dec 13 14:27:11.225278 iscsid[740]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 14:27:11.225278 iscsid[740]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Dec 13 14:27:11.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:11.225325 systemd[1]: Started iscsid.service.
Dec 13 14:27:11.240560 systemd[1]: Starting dracut-initqueue.service...
Dec 13 14:27:11.254083 ignition[733]: Ignition 2.14.0
Dec 13 14:27:11.254435 ignition[733]: Stage: kargs
Dec 13 14:27:11.254562 ignition[733]: no configs at "/usr/lib/ignition/base.d"
Dec 13 14:27:11.254571 ignition[733]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 14:27:11.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:11.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:11.256563 systemd[1]: Finished dracut-initqueue.service.
Dec 13 14:27:11.255346 ignition[733]: kargs: kargs passed
Dec 13 14:27:11.258038 systemd[1]: Finished ignition-kargs.service.
Dec 13 14:27:11.255381 ignition[733]: Ignition finished successfully
Dec 13 14:27:11.259961 systemd[1]: Reached target remote-fs-pre.target.
Dec 13 14:27:11.260838 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 14:27:11.262682 systemd[1]: Reached target remote-fs.target.
Dec 13 14:27:11.264264 systemd[1]: Starting dracut-pre-mount.service...
Dec 13 14:27:11.266508 systemd[1]: Starting ignition-disks.service...
Dec 13 14:27:11.272588 systemd[1]: Finished dracut-pre-mount.service.
Dec 13 14:27:11.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:11.278571 ignition[750]: Ignition 2.14.0
Dec 13 14:27:11.278583 ignition[750]: Stage: disks
Dec 13 14:27:11.278689 ignition[750]: no configs at "/usr/lib/ignition/base.d"
Dec 13 14:27:11.278700 ignition[750]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 14:27:11.279480 ignition[750]: disks: disks passed
Dec 13 14:27:11.279522 ignition[750]: Ignition finished successfully
Dec 13 14:27:11.284946 systemd[1]: Finished ignition-disks.service.
Dec 13 14:27:11.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:11.285925 systemd[1]: Reached target initrd-root-device.target.
Dec 13 14:27:11.287584 systemd[1]: Reached target local-fs-pre.target.
Dec 13 14:27:11.288493 systemd[1]: Reached target local-fs.target.
Dec 13 14:27:11.291100 systemd[1]: Reached target sysinit.target.
Dec 13 14:27:11.292020 systemd[1]: Reached target basic.target.
Dec 13 14:27:11.294068 systemd[1]: Starting systemd-fsck-root.service...
Dec 13 14:27:11.319880 systemd-fsck[762]: ROOT: clean, 621/553520 files, 56021/553472 blocks
Dec 13 14:27:11.457919 systemd[1]: Finished systemd-fsck-root.service.
Dec 13 14:27:11.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:11.465340 systemd[1]: Mounting sysroot.mount...
Dec 13 14:27:11.493768 systemd[1]: Mounted sysroot.mount.
Dec 13 14:27:11.495988 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Dec 13 14:27:11.494547 systemd[1]: Reached target initrd-root-fs.target.
Dec 13 14:27:11.496912 systemd[1]: Mounting sysroot-usr.mount...
Dec 13 14:27:11.506656 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Dec 13 14:27:11.506696 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 14:27:11.506719 systemd[1]: Reached target ignition-diskful.target.
Dec 13 14:27:11.508730 systemd[1]: Mounted sysroot-usr.mount.
Dec 13 14:27:11.510792 systemd[1]: Starting initrd-setup-root.service...
Dec 13 14:27:11.534753 initrd-setup-root[772]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 14:27:11.537506 initrd-setup-root[780]: cut: /sysroot/etc/group: No such file or directory
Dec 13 14:27:11.541286 initrd-setup-root[788]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 14:27:11.544153 initrd-setup-root[796]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 14:27:11.573466 systemd[1]: Finished initrd-setup-root.service.
Dec 13 14:27:11.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:11.576730 systemd[1]: Starting ignition-mount.service...
Dec 13 14:27:11.578162 systemd[1]: Starting sysroot-boot.service...
Dec 13 14:27:11.584741 bash[813]: umount: /sysroot/usr/share/oem: not mounted.
Dec 13 14:27:11.637685 systemd[1]: Finished sysroot-boot.service.
Dec 13 14:27:11.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:11.644156 ignition[815]: INFO : Ignition 2.14.0
Dec 13 14:27:11.644156 ignition[815]: INFO : Stage: mount
Dec 13 14:27:11.654601 ignition[815]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 14:27:11.654601 ignition[815]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 14:27:11.654601 ignition[815]: INFO : mount: mount passed
Dec 13 14:27:11.654601 ignition[815]: INFO : Ignition finished successfully
Dec 13 14:27:11.659007 systemd[1]: Finished ignition-mount.service.
Dec 13 14:27:11.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:11.979105 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Dec 13 14:27:11.985988 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (823)
Dec 13 14:27:11.988374 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 14:27:11.988394 kernel: BTRFS info (device vda6): using free space tree
Dec 13 14:27:11.988403 kernel: BTRFS info (device vda6): has skinny extents
Dec 13 14:27:11.993288 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Dec 13 14:27:11.995079 systemd[1]: Starting ignition-files.service...
Dec 13 14:27:12.012994 ignition[843]: INFO : Ignition 2.14.0
Dec 13 14:27:12.012994 ignition[843]: INFO : Stage: files
Dec 13 14:27:12.014846 ignition[843]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 14:27:12.014846 ignition[843]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 14:27:12.017957 ignition[843]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 14:27:12.019495 ignition[843]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 14:27:12.019495 ignition[843]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 14:27:12.022910 ignition[843]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 14:27:12.024476 ignition[843]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 14:27:12.026553 unknown[843]: wrote ssh authorized keys file for user: core
Dec 13 14:27:12.027683 ignition[843]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 14:27:12.029255 ignition[843]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 14:27:12.031220 ignition[843]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 14:27:12.033143 ignition[843]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 14:27:12.035133 ignition[843]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 14:27:12.037068 ignition[843]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Dec 13 14:27:12.039767 ignition[843]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Dec 13 14:27:12.039767 ignition[843]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Dec 13 14:27:12.045025 ignition[843]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Dec 13 14:27:12.380199 systemd-networkd[727]: eth0: Gained IPv6LL
Dec 13 14:27:12.447106 ignition[843]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Dec 13 14:27:13.385910 ignition[843]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Dec 13 14:27:13.385910 ignition[843]: INFO : files: op(7): [started] processing unit "coreos-metadata.service"
Dec 13 14:27:13.391923 ignition[843]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 14:27:13.391923 ignition[843]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 14:27:13.391923 ignition[843]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service"
Dec 13 14:27:13.391923 ignition[843]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service"
Dec 13 14:27:13.391923 ignition[843]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 14:27:13.454728 ignition[843]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 14:27:13.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:13.458507 ignition[843]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service"
Dec 13 14:27:13.458507 ignition[843]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 14:27:13.458507 ignition[843]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 14:27:13.458507 ignition[843]: INFO : files: files passed
Dec 13 14:27:13.458507 ignition[843]: INFO : Ignition finished successfully
Dec 13 14:27:13.479814 kernel: kauditd_printk_skb: 24 callbacks suppressed
Dec 13 14:27:13.479843 kernel: audit: type=1130 audit(1734100033.458:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:13.479855 kernel: audit: type=1130 audit(1734100033.470:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:13.479865 kernel: audit: type=1131 audit(1734100033.470:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:13.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:13.470000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:13.456342 systemd[1]: Finished ignition-files.service.
Dec 13 14:27:13.459309 systemd[1]: Starting initrd-setup-root-after-ignition.service... Dec 13 14:27:13.462451 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Dec 13 14:27:13.485094 initrd-setup-root-after-ignition[869]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Dec 13 14:27:13.463294 systemd[1]: Starting ignition-quench.service... Dec 13 14:27:13.487180 initrd-setup-root-after-ignition[871]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 14:27:13.494874 kernel: audit: type=1130 audit(1734100033.489:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:13.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:13.467858 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 14:27:13.467935 systemd[1]: Finished ignition-quench.service. Dec 13 14:27:13.485198 systemd[1]: Finished initrd-setup-root-after-ignition.service. Dec 13 14:27:13.489674 systemd[1]: Reached target ignition-complete.target. Dec 13 14:27:13.497565 systemd[1]: Starting initrd-parse-etc.service... Dec 13 14:27:13.522138 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 14:27:13.522230 systemd[1]: Finished initrd-parse-etc.service. Dec 13 14:27:13.532469 kernel: audit: type=1130 audit(1734100033.524:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:27:13.532493 kernel: audit: type=1131 audit(1734100033.524:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:13.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:13.524000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:13.524776 systemd[1]: Reached target initrd-fs.target. Dec 13 14:27:13.533577 systemd[1]: Reached target initrd.target. Dec 13 14:27:13.534611 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 14:27:13.536459 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 14:27:13.553321 systemd[1]: Finished dracut-pre-pivot.service. Dec 13 14:27:13.559002 kernel: audit: type=1130 audit(1734100033.552:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:13.552000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:13.559104 systemd[1]: Starting initrd-cleanup.service... Dec 13 14:27:13.571726 systemd[1]: Stopped target nss-lookup.target. Dec 13 14:27:13.571947 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 14:27:13.575155 systemd[1]: Stopped target timers.target. Dec 13 14:27:13.577248 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Dec 13 14:27:13.583823 kernel: audit: type=1131 audit(1734100033.579:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:13.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:13.577386 systemd[1]: Stopped dracut-pre-pivot.service. Dec 13 14:27:13.579331 systemd[1]: Stopped target initrd.target. Dec 13 14:27:13.584934 systemd[1]: Stopped target basic.target. Dec 13 14:27:13.586924 systemd[1]: Stopped target ignition-complete.target. Dec 13 14:27:13.589086 systemd[1]: Stopped target ignition-diskful.target. Dec 13 14:27:13.591227 systemd[1]: Stopped target initrd-root-device.target. Dec 13 14:27:13.593529 systemd[1]: Stopped target remote-fs.target. Dec 13 14:27:13.595757 systemd[1]: Stopped target remote-fs-pre.target. Dec 13 14:27:13.597994 systemd[1]: Stopped target sysinit.target. Dec 13 14:27:13.600040 systemd[1]: Stopped target local-fs.target. Dec 13 14:27:13.602186 systemd[1]: Stopped target local-fs-pre.target. Dec 13 14:27:13.604487 systemd[1]: Stopped target swap.target. Dec 13 14:27:13.607000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:13.606397 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 14:27:13.615242 kernel: audit: type=1131 audit(1734100033.607:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:13.606500 systemd[1]: Stopped dracut-pre-mount.service. 
Dec 13 14:27:13.620597 kernel: audit: type=1131 audit(1734100033.615:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:13.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:13.608625 systemd[1]: Stopped target cryptsetup.target. Dec 13 14:27:13.621000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:13.614038 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 14:27:13.614127 systemd[1]: Stopped dracut-initqueue.service. Dec 13 14:27:13.616481 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 14:27:13.616569 systemd[1]: Stopped ignition-fetch-offline.service. Dec 13 14:27:13.621984 systemd[1]: Stopped target paths.target. Dec 13 14:27:13.623898 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 14:27:13.628666 systemd[1]: Stopped systemd-ask-password-console.path. Dec 13 14:27:13.630903 systemd[1]: Stopped target slices.target. Dec 13 14:27:13.633273 systemd[1]: Stopped target sockets.target. Dec 13 14:27:13.635538 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 14:27:13.635618 systemd[1]: Closed iscsid.socket. Dec 13 14:27:13.640000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:13.637355 systemd[1]: iscsiuio.socket: Deactivated successfully. 
Dec 13 14:27:13.642000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:13.637413 systemd[1]: Closed iscsiuio.socket. Dec 13 14:27:13.639242 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 14:27:13.639348 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Dec 13 14:27:13.647000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:13.641720 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 14:27:13.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:13.641823 systemd[1]: Stopped ignition-files.service. Dec 13 14:27:13.644411 systemd[1]: Stopping ignition-mount.service... Dec 13 14:27:13.646599 systemd[1]: Stopping sysroot-boot.service... Dec 13 14:27:13.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:13.654000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:27:13.657815 ignition[884]: INFO : Ignition 2.14.0 Dec 13 14:27:13.657815 ignition[884]: INFO : Stage: umount Dec 13 14:27:13.657815 ignition[884]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 14:27:13.657815 ignition[884]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 14:27:13.659000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:13.648357 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 14:27:13.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:13.667282 ignition[884]: INFO : umount: umount passed Dec 13 14:27:13.667282 ignition[884]: INFO : Ignition finished successfully Dec 13 14:27:13.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:13.668000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:13.648487 systemd[1]: Stopped systemd-udev-trigger.service. Dec 13 14:27:13.648674 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 14:27:13.648756 systemd[1]: Stopped dracut-pre-trigger.service. Dec 13 14:27:13.655009 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 14:27:13.675000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:27:13.655121 systemd[1]: Finished initrd-cleanup.service. Dec 13 14:27:13.659444 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 14:27:13.659524 systemd[1]: Stopped ignition-mount.service. Dec 13 14:27:13.660236 systemd[1]: Stopped target network.target. Dec 13 14:27:13.662488 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 14:27:13.686000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:13.662549 systemd[1]: Stopped ignition-disks.service. Dec 13 14:27:13.688000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:13.665307 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 14:27:13.665355 systemd[1]: Stopped ignition-kargs.service. Dec 13 14:27:13.691000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:13.668345 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 14:27:13.668433 systemd[1]: Stopped ignition-setup.service. Dec 13 14:27:13.669516 systemd[1]: Stopping systemd-networkd.service... Dec 13 14:27:13.672561 systemd[1]: Stopping systemd-resolved.service... Dec 13 14:27:13.674009 systemd-networkd[727]: eth0: DHCPv6 lease lost Dec 13 14:27:13.699000 audit: BPF prog-id=9 op=UNLOAD Dec 13 14:27:13.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:13.676067 systemd[1]: systemd-networkd.service: Deactivated successfully. 
Dec 13 14:27:13.676167 systemd[1]: Stopped systemd-networkd.service. Dec 13 14:27:13.706000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:13.679505 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 14:27:13.708000 audit: BPF prog-id=6 op=UNLOAD Dec 13 14:27:13.708000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:13.679604 systemd[1]: Closed systemd-networkd.socket. Dec 13 14:27:13.681469 systemd[1]: Stopping network-cleanup.service... Dec 13 14:27:13.714000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:13.683830 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 14:27:13.683877 systemd[1]: Stopped parse-ip-for-networkd.service. Dec 13 14:27:13.717000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:13.720000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:13.687104 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 14:27:13.687145 systemd[1]: Stopped systemd-sysctl.service. Dec 13 14:27:13.722000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:27:13.726000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:13.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:13.690393 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 14:27:13.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:13.730000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:13.690427 systemd[1]: Stopped systemd-modules-load.service. Dec 13 14:27:13.692824 systemd[1]: Stopping systemd-udevd.service... Dec 13 14:27:13.698989 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 14:27:13.699400 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 14:27:13.699743 systemd[1]: Stopped systemd-resolved.service. Dec 13 14:27:13.704897 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 14:27:13.705065 systemd[1]: Stopped systemd-udevd.service. Dec 13 14:27:13.707629 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 14:27:13.707697 systemd[1]: Stopped network-cleanup.service. Dec 13 14:27:13.710877 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 14:27:13.711216 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 14:27:13.711247 systemd[1]: Closed systemd-udevd-control.socket. 
Dec 13 14:27:13.712559 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 14:27:13.712594 systemd[1]: Closed systemd-udevd-kernel.socket. Dec 13 14:27:13.713739 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 14:27:13.713770 systemd[1]: Stopped dracut-pre-udev.service. Dec 13 14:27:13.716065 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 14:27:13.716098 systemd[1]: Stopped dracut-cmdline.service. Dec 13 14:27:13.718662 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 14:27:13.718696 systemd[1]: Stopped dracut-cmdline-ask.service. Dec 13 14:27:13.722111 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Dec 13 14:27:13.723800 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 14:27:13.723838 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Dec 13 14:27:13.725232 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 14:27:13.725265 systemd[1]: Stopped kmod-static-nodes.service. Dec 13 14:27:13.727364 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 14:27:13.727396 systemd[1]: Stopped systemd-vconsole-setup.service. Dec 13 14:27:13.729331 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Dec 13 14:27:13.729728 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 14:27:13.729794 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Dec 13 14:27:13.802909 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 14:27:13.803013 systemd[1]: Stopped sysroot-boot.service. Dec 13 14:27:13.805000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:27:13.808000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:13.805389 systemd[1]: Reached target initrd-switch-root.target. Dec 13 14:27:13.807473 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 14:27:13.807509 systemd[1]: Stopped initrd-setup-root.service. Dec 13 14:27:13.809293 systemd[1]: Starting initrd-switch-root.service... Dec 13 14:27:13.825431 systemd[1]: Switching root. Dec 13 14:27:13.845133 iscsid[740]: iscsid shutting down. Dec 13 14:27:13.846194 systemd-journald[198]: Received SIGTERM from PID 1 (systemd). Dec 13 14:27:13.846257 systemd-journald[198]: Journal stopped Dec 13 14:27:18.006881 kernel: SELinux: Class mctp_socket not defined in policy. Dec 13 14:27:18.006928 kernel: SELinux: Class anon_inode not defined in policy. Dec 13 14:27:18.006940 kernel: SELinux: the above unknown classes and permissions will be allowed Dec 13 14:27:18.006949 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 14:27:18.006961 kernel: SELinux: policy capability open_perms=1 Dec 13 14:27:18.006982 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 14:27:18.006991 kernel: SELinux: policy capability always_check_network=0 Dec 13 14:27:18.007003 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 14:27:18.007015 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 14:27:18.007026 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 14:27:18.007039 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 14:27:18.007051 systemd[1]: Successfully loaded SELinux policy in 38.408ms. Dec 13 14:27:18.007066 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.952ms. 
Dec 13 14:27:18.007077 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 14:27:18.007088 systemd[1]: Detected virtualization kvm. Dec 13 14:27:18.007098 systemd[1]: Detected architecture x86-64. Dec 13 14:27:18.007108 systemd[1]: Detected first boot. Dec 13 14:27:18.007118 systemd[1]: Initializing machine ID from VM UUID. Dec 13 14:27:18.007129 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Dec 13 14:27:18.007139 systemd[1]: Populated /etc with preset unit settings. Dec 13 14:27:18.007149 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:27:18.007160 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:27:18.007172 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:27:18.007182 systemd[1]: iscsiuio.service: Deactivated successfully. Dec 13 14:27:18.007192 systemd[1]: Stopped iscsiuio.service. Dec 13 14:27:18.007204 systemd[1]: iscsid.service: Deactivated successfully. Dec 13 14:27:18.007214 systemd[1]: Stopped iscsid.service. Dec 13 14:27:18.007224 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 14:27:18.007234 systemd[1]: Stopped initrd-switch-root.service. Dec 13 14:27:18.007244 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. 
Dec 13 14:27:18.007255 systemd[1]: Created slice system-addon\x2dconfig.slice. Dec 13 14:27:18.007265 systemd[1]: Created slice system-addon\x2drun.slice. Dec 13 14:27:18.007277 systemd[1]: Created slice system-getty.slice. Dec 13 14:27:18.007288 systemd[1]: Created slice system-modprobe.slice. Dec 13 14:27:18.007298 systemd[1]: Created slice system-serial\x2dgetty.slice. Dec 13 14:27:18.007308 systemd[1]: Created slice system-system\x2dcloudinit.slice. Dec 13 14:27:18.007319 systemd[1]: Created slice system-systemd\x2dfsck.slice. Dec 13 14:27:18.007329 systemd[1]: Created slice user.slice. Dec 13 14:27:18.007339 systemd[1]: Started systemd-ask-password-console.path. Dec 13 14:27:18.007349 systemd[1]: Started systemd-ask-password-wall.path. Dec 13 14:27:18.007359 systemd[1]: Set up automount boot.automount. Dec 13 14:27:18.007371 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Dec 13 14:27:18.007383 systemd[1]: Stopped target initrd-switch-root.target. Dec 13 14:27:18.007393 systemd[1]: Stopped target initrd-fs.target. Dec 13 14:27:18.007404 systemd[1]: Stopped target initrd-root-fs.target. Dec 13 14:27:18.007414 systemd[1]: Reached target integritysetup.target. Dec 13 14:27:18.007424 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 14:27:18.007435 systemd[1]: Reached target remote-fs.target. Dec 13 14:27:18.007445 systemd[1]: Reached target slices.target. Dec 13 14:27:18.007455 systemd[1]: Reached target swap.target. Dec 13 14:27:18.007465 systemd[1]: Reached target torcx.target. Dec 13 14:27:18.007475 systemd[1]: Reached target veritysetup.target. Dec 13 14:27:18.007495 systemd[1]: Listening on systemd-coredump.socket. Dec 13 14:27:18.007506 systemd[1]: Listening on systemd-initctl.socket. Dec 13 14:27:18.007516 systemd[1]: Listening on systemd-networkd.socket. Dec 13 14:27:18.007526 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 14:27:18.007538 systemd[1]: Listening on systemd-udevd-kernel.socket. 
Dec 13 14:27:18.007548 systemd[1]: Listening on systemd-userdbd.socket. Dec 13 14:27:18.007558 systemd[1]: Mounting dev-hugepages.mount... Dec 13 14:27:18.007568 systemd[1]: Mounting dev-mqueue.mount... Dec 13 14:27:18.007578 systemd[1]: Mounting media.mount... Dec 13 14:27:18.007588 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:27:18.007598 systemd[1]: Mounting sys-kernel-debug.mount... Dec 13 14:27:18.007608 systemd[1]: Mounting sys-kernel-tracing.mount... Dec 13 14:27:18.007618 systemd[1]: Mounting tmp.mount... Dec 13 14:27:18.007630 systemd[1]: Starting flatcar-tmpfiles.service... Dec 13 14:27:18.007641 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:27:18.007651 systemd[1]: Starting kmod-static-nodes.service... Dec 13 14:27:18.007661 systemd[1]: Starting modprobe@configfs.service... Dec 13 14:27:18.007671 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:27:18.007681 systemd[1]: Starting modprobe@drm.service... Dec 13 14:27:18.007693 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:27:18.007703 systemd[1]: Starting modprobe@fuse.service... Dec 13 14:27:18.007713 systemd[1]: Starting modprobe@loop.service... Dec 13 14:27:18.007728 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 14:27:18.007739 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 14:27:18.007749 systemd[1]: Stopped systemd-fsck-root.service. Dec 13 14:27:18.007759 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 14:27:18.007769 kernel: fuse: init (API version 7.34) Dec 13 14:27:18.007779 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 14:27:18.007789 kernel: loop: module loaded Dec 13 14:27:18.007798 systemd[1]: Stopped systemd-journald.service. Dec 13 14:27:18.007809 systemd[1]: Starting systemd-journald.service... 
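The modprobe@configfs.service, modprobe@dm_mod.service, and similar units being started above are instances of a single template unit, with the module name substituted for the instance specifier. A simplified sketch of such a template (condensed from the stock systemd modprobe@.service; options here are illustrative):

```ini
# Sketch of a modprobe@.service-style template unit. Starting
# "modprobe@fuse.service" expands %i to "fuse", so one template
# covers every module loaded during early boot.
[Unit]
Description=Load Kernel Module %i
DefaultDependencies=no
Before=sysinit.target

[Service]
Type=oneshot
# "-" prefix: a missing module is not treated as a failure.
ExecStart=-/usr/sbin/modprobe -abq %i
```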
Dec 13 14:27:18.007819 systemd[1]: Starting systemd-modules-load.service... Dec 13 14:27:18.007833 systemd[1]: Starting systemd-network-generator.service... Dec 13 14:27:18.007846 systemd[1]: Starting systemd-remount-fs.service... Dec 13 14:27:18.007858 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 14:27:18.007871 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 14:27:18.007884 systemd[1]: Stopped verity-setup.service. Dec 13 14:27:18.007896 systemd-journald[1006]: Journal started Dec 13 14:27:18.007932 systemd-journald[1006]: Runtime Journal (/run/log/journal/d0490c01f81b40aab7b9fbe72514cd25) is 6.0M, max 48.5M, 42.5M free. Dec 13 14:27:13.904000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 14:27:14.202000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:27:14.202000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:27:14.202000 audit: BPF prog-id=10 op=LOAD Dec 13 14:27:14.202000 audit: BPF prog-id=10 op=UNLOAD Dec 13 14:27:14.202000 audit: BPF prog-id=11 op=LOAD Dec 13 14:27:14.202000 audit: BPF prog-id=11 op=UNLOAD Dec 13 14:27:14.236000 audit[917]: AVC avc: denied { associate } for pid=917 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Dec 13 14:27:14.236000 audit[917]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001178e2 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=900 pid=917 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" 
exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:27:14.236000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 14:27:14.238000 audit[917]: AVC avc: denied { associate } for pid=917 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Dec 13 14:27:14.238000 audit[917]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001179b9 a2=1ed a3=0 items=2 ppid=900 pid=917 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:27:14.238000 audit: CWD cwd="/"
Dec 13 14:27:14.238000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:14.238000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:14.238000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 14:27:17.847000 audit: BPF prog-id=12 op=LOAD
Dec 13 14:27:17.847000 audit: BPF prog-id=3 op=UNLOAD
Dec 13 14:27:17.847000 audit: BPF prog-id=13 op=LOAD
Dec 13 14:27:17.847000 audit: BPF prog-id=14 op=LOAD
Dec 13 14:27:17.847000 audit: BPF prog-id=4 op=UNLOAD
Dec 13 14:27:17.847000 audit: BPF prog-id=5 op=UNLOAD
Dec 13 14:27:17.848000 audit: BPF prog-id=15 op=LOAD
Dec 13 14:27:17.848000 audit: BPF prog-id=12 op=UNLOAD
Dec 13 14:27:17.849000 audit: BPF prog-id=16 op=LOAD
Dec 13 14:27:17.849000 audit: BPF prog-id=17 op=LOAD
Dec 13 14:27:17.849000 audit: BPF prog-id=13 op=UNLOAD
Dec 13 14:27:17.849000 audit: BPF prog-id=14 op=UNLOAD
Dec 13 14:27:17.850000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:17.854000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:17.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:17.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:17.860000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:17.865000 audit: BPF prog-id=15 op=UNLOAD
Dec 13 14:27:17.980000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:17.985000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:17.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:17.987000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:17.988000 audit: BPF prog-id=18 op=LOAD
Dec 13 14:27:17.988000 audit: BPF prog-id=19 op=LOAD
Dec 13 14:27:17.988000 audit: BPF prog-id=20 op=LOAD
Dec 13 14:27:17.988000 audit: BPF prog-id=16 op=UNLOAD
Dec 13 14:27:17.988000 audit: BPF prog-id=17 op=UNLOAD
Dec 13 14:27:18.004000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Dec 13 14:27:18.004000 audit[1006]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffca89860f0 a2=4000 a3=7ffca898618c items=0 ppid=1 pid=1006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:27:18.004000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Dec 13 14:27:18.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:17.846579 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 14:27:14.235244 /usr/lib/systemd/system-generators/torcx-generator[917]: time="2024-12-13T14:27:14Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 14:27:17.846591 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Dec 13 14:27:14.235502 /usr/lib/systemd/system-generators/torcx-generator[917]: time="2024-12-13T14:27:14Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Dec 13 14:27:17.851119 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 14:27:14.235526 /usr/lib/systemd/system-generators/torcx-generator[917]: time="2024-12-13T14:27:14Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Dec 13 14:27:14.235562 /usr/lib/systemd/system-generators/torcx-generator[917]: time="2024-12-13T14:27:14Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Dec 13 14:27:14.235588 /usr/lib/systemd/system-generators/torcx-generator[917]: time="2024-12-13T14:27:14Z" level=debug msg="skipped missing lower profile" missing profile=oem
Dec 13 14:27:14.235630 /usr/lib/systemd/system-generators/torcx-generator[917]: time="2024-12-13T14:27:14Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Dec 13 14:27:14.235647 /usr/lib/systemd/system-generators/torcx-generator[917]: time="2024-12-13T14:27:14Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Dec 13 14:27:14.235919 /usr/lib/systemd/system-generators/torcx-generator[917]: time="2024-12-13T14:27:14Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Dec 13 14:27:14.235967 /usr/lib/systemd/system-generators/torcx-generator[917]: time="2024-12-13T14:27:14Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Dec 13 14:27:14.236002 /usr/lib/systemd/system-generators/torcx-generator[917]: time="2024-12-13T14:27:14Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Dec 13 14:27:14.236496 /usr/lib/systemd/system-generators/torcx-generator[917]: time="2024-12-13T14:27:14Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Dec 13 14:27:14.236548 /usr/lib/systemd/system-generators/torcx-generator[917]: time="2024-12-13T14:27:14Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Dec 13 14:27:14.236588 /usr/lib/systemd/system-generators/torcx-generator[917]: time="2024-12-13T14:27:14Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6
Dec 13 14:27:14.236608 /usr/lib/systemd/system-generators/torcx-generator[917]: time="2024-12-13T14:27:14Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Dec 13 14:27:14.236635 /usr/lib/systemd/system-generators/torcx-generator[917]: time="2024-12-13T14:27:14Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6
Dec 13 14:27:14.236654 /usr/lib/systemd/system-generators/torcx-generator[917]: time="2024-12-13T14:27:14Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Dec 13 14:27:17.476378 /usr/lib/systemd/system-generators/torcx-generator[917]: time="2024-12-13T14:27:17Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 14:27:17.477067 /usr/lib/systemd/system-generators/torcx-generator[917]: time="2024-12-13T14:27:17Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 14:27:17.477218 /usr/lib/systemd/system-generators/torcx-generator[917]: time="2024-12-13T14:27:17Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 14:27:17.477396 /usr/lib/systemd/system-generators/torcx-generator[917]: time="2024-12-13T14:27:17Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 14:27:17.477445 /usr/lib/systemd/system-generators/torcx-generator[917]: time="2024-12-13T14:27:17Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Dec 13 14:27:17.477529 /usr/lib/systemd/system-generators/torcx-generator[917]: time="2024-12-13T14:27:17Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Dec 13 14:27:18.010994 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:27:18.014090 systemd[1]: Started systemd-journald.service.
Dec 13 14:27:18.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:18.014751 systemd[1]: Mounted dev-hugepages.mount.
Dec 13 14:27:18.015665 systemd[1]: Mounted dev-mqueue.mount.
Dec 13 14:27:18.016517 systemd[1]: Mounted media.mount.
Dec 13 14:27:18.017315 systemd[1]: Mounted sys-kernel-debug.mount.
Dec 13 14:27:18.018225 systemd[1]: Mounted sys-kernel-tracing.mount.
Dec 13 14:27:18.019170 systemd[1]: Mounted tmp.mount.
Dec 13 14:27:18.020107 systemd[1]: Finished flatcar-tmpfiles.service.
Dec 13 14:27:18.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:18.021206 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 14:27:18.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:18.022312 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 14:27:18.022431 systemd[1]: Finished modprobe@configfs.service.
Dec 13 14:27:18.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:18.022000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:18.023561 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:27:18.023689 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:27:18.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:18.023000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:18.024803 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 14:27:18.024928 systemd[1]: Finished modprobe@drm.service.
Dec 13 14:27:18.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:18.024000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:18.026123 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:27:18.026243 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:27:18.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:18.027000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:18.027340 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 14:27:18.027456 systemd[1]: Finished modprobe@fuse.service.
Dec 13 14:27:18.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:18.028000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:18.028804 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:27:18.028915 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:27:18.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:18.029000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:18.030045 systemd[1]: Finished systemd-modules-load.service.
Dec 13 14:27:18.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:18.031179 systemd[1]: Finished systemd-network-generator.service.
Dec 13 14:27:18.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:18.032429 systemd[1]: Finished systemd-remount-fs.service.
Dec 13 14:27:18.033000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:18.033741 systemd[1]: Reached target network-pre.target.
Dec 13 14:27:18.035660 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Dec 13 14:27:18.037570 systemd[1]: Mounting sys-kernel-config.mount...
Dec 13 14:27:18.038530 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 14:27:18.040248 systemd[1]: Starting systemd-hwdb-update.service...
Dec 13 14:27:18.042226 systemd[1]: Starting systemd-journal-flush.service...
Dec 13 14:27:18.043279 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:27:18.044211 systemd[1]: Starting systemd-random-seed.service...
Dec 13 14:27:18.045248 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:27:18.046095 systemd[1]: Starting systemd-sysctl.service...
Dec 13 14:27:18.048065 systemd[1]: Starting systemd-sysusers.service...
Dec 13 14:27:18.049278 systemd-journald[1006]: Time spent on flushing to /var/log/journal/d0490c01f81b40aab7b9fbe72514cd25 is 155.250ms for 1086 entries.
Dec 13 14:27:18.049278 systemd-journald[1006]: System Journal (/var/log/journal/d0490c01f81b40aab7b9fbe72514cd25) is 8.0M, max 195.6M, 187.6M free.
Dec 13 14:27:18.222886 systemd-journald[1006]: Received client request to flush runtime journal.
Dec 13 14:27:18.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:18.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:18.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:18.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:18.212000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:18.052521 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Dec 13 14:27:18.053908 systemd[1]: Mounted sys-kernel-config.mount.
Dec 13 14:27:18.223649 udevadm[1024]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Dec 13 14:27:18.056430 systemd[1]: Finished systemd-random-seed.service.
Dec 13 14:27:18.057770 systemd[1]: Reached target first-boot-complete.target.
Dec 13 14:27:18.067900 systemd[1]: Finished systemd-sysusers.service.
Dec 13 14:27:18.070433 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 14:27:18.073422 systemd[1]: Finished systemd-sysctl.service.
Dec 13 14:27:18.079168 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 14:27:18.209573 systemd[1]: Starting systemd-udev-settle.service...
Dec 13 14:27:18.211154 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 14:27:18.223981 systemd[1]: Finished systemd-journal-flush.service.
Dec 13 14:27:18.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:18.686583 systemd[1]: Finished systemd-hwdb-update.service.
Dec 13 14:27:18.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:18.688781 kernel: kauditd_printk_skb: 101 callbacks suppressed
Dec 13 14:27:18.688854 kernel: audit: type=1130 audit(1734100038.686:137): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:18.691000 audit: BPF prog-id=21 op=LOAD
Dec 13 14:27:18.693555 kernel: audit: type=1334 audit(1734100038.691:138): prog-id=21 op=LOAD
Dec 13 14:27:18.693623 kernel: audit: type=1334 audit(1734100038.692:139): prog-id=22 op=LOAD
Dec 13 14:27:18.692000 audit: BPF prog-id=22 op=LOAD
Dec 13 14:27:18.694492 systemd[1]: Starting systemd-udevd.service...
Dec 13 14:27:18.694619 kernel: audit: type=1334 audit(1734100038.692:140): prog-id=7 op=UNLOAD
Dec 13 14:27:18.694654 kernel: audit: type=1334 audit(1734100038.692:141): prog-id=8 op=UNLOAD
Dec 13 14:27:18.692000 audit: BPF prog-id=7 op=UNLOAD
Dec 13 14:27:18.692000 audit: BPF prog-id=8 op=UNLOAD
Dec 13 14:27:18.713562 systemd-udevd[1026]: Using default interface naming scheme 'v252'.
Dec 13 14:27:18.727556 systemd[1]: Started systemd-udevd.service.
Dec 13 14:27:18.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:18.732997 kernel: audit: type=1130 audit(1734100038.727:142): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:18.733000 audit: BPF prog-id=23 op=LOAD
Dec 13 14:27:18.734889 systemd[1]: Starting systemd-networkd.service...
Dec 13 14:27:18.736007 kernel: audit: type=1334 audit(1734100038.733:143): prog-id=23 op=LOAD
Dec 13 14:27:18.746705 kernel: audit: type=1334 audit(1734100038.742:144): prog-id=24 op=LOAD
Dec 13 14:27:18.746827 kernel: audit: type=1334 audit(1734100038.743:145): prog-id=25 op=LOAD
Dec 13 14:27:18.746856 kernel: audit: type=1334 audit(1734100038.744:146): prog-id=26 op=LOAD
Dec 13 14:27:18.742000 audit: BPF prog-id=24 op=LOAD
Dec 13 14:27:18.743000 audit: BPF prog-id=25 op=LOAD
Dec 13 14:27:18.744000 audit: BPF prog-id=26 op=LOAD
Dec 13 14:27:18.746671 systemd[1]: Starting systemd-userdbd.service...
Dec 13 14:27:18.757831 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Dec 13 14:27:18.783619 systemd[1]: Started systemd-userdbd.service.
Dec 13 14:27:18.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:18.791337 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 14:27:18.811000 audit[1027]: AVC avc: denied { confidentiality } for pid=1027 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Dec 13 14:27:18.817005 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Dec 13 14:27:18.811000 audit[1027]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5624c22c5af0 a1=337fc a2=7f26df060bc5 a3=5 items=110 ppid=1026 pid=1027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:27:18.811000 audit: CWD cwd="/"
Dec 13 14:27:18.811000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:18.811000 audit: PATH item=1 name=(null) inode=11977 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:18.811000 audit: PATH item=2 name=(null) inode=11977 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:18.811000 audit: PATH item=3 name=(null) inode=11978 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:18.811000 audit: PATH item=4 name=(null) inode=11977 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:18.811000 audit: PATH item=5 name=(null) inode=11979 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:18.811000 audit: PATH item=6 name=(null) inode=11977 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:18.811000 audit: PATH item=7 name=(null) inode=11980 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:18.811000 audit: PATH item=8 name=(null) inode=11980 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:18.811000 audit: PATH item=9 name=(null) inode=11981 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:18.811000 audit: PATH item=10 name=(null) inode=11980 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:18.811000 audit: PATH item=11 name=(null) inode=11982 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:18.811000 audit: PATH item=12 name=(null) inode=11980 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:18.811000 audit: PATH item=13 name=(null) inode=11983 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:18.811000 audit: PATH item=14 name=(null) inode=11980 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:18.811000 audit: PATH item=15 name=(null) inode=11984 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:18.811000 audit: PATH item=16 name=(null) inode=11980 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:18.811000 audit: PATH item=17 name=(null) inode=11985 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:18.811000 audit: PATH item=18 name=(null) inode=11977 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:18.811000 audit: PATH item=19 name=(null) inode=11986 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:18.811000 audit: PATH item=20 name=(null) inode=11986 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:18.811000 audit: PATH item=21 name=(null) inode=11987 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:18.811000 audit: PATH item=22 name=(null) inode=11986 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:18.811000 audit: PATH item=23 name=(null) inode=11988 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:18.811000 audit: PATH item=24 name=(null) inode=11986 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:18.811000 audit: PATH item=25 name=(null) inode=11989 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:18.811000 audit: PATH item=26 name=(null) inode=11986 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:18.811000 audit: PATH item=27 name=(null) inode=11990 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:18.811000 audit: PATH item=28 name=(null) inode=11986 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:18.811000 audit: PATH item=29 name=(null) inode=11991 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:18.811000 audit: PATH item=30 name=(null) inode=11977 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:18.811000 audit: PATH item=31 name=(null) inode=11992 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:18.811000 audit: PATH item=32 name=(null) inode=11992 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:18.811000 audit: PATH item=33 name=(null) inode=11993 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:18.811000 audit: PATH item=34 name=(null) inode=11992 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:18.811000 audit: PATH item=35 name=(null) inode=11994 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:18.811000 audit: PATH item=36 name=(null) inode=11992 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:18.811000 audit: PATH item=37 name=(null) inode=11995 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:18.811000 audit: PATH item=38 name=(null) inode=11992 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:18.811000 audit: PATH item=39 name=(null) inode=11996 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:18.811000 audit: PATH item=40 name=(null) inode=11992 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:18.811000 audit: PATH item=41 name=(null) inode=11997 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:18.811000 audit: PATH item=42 name=(null) inode=11977 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:18.811000 audit: PATH item=43 name=(null) inode=11998 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:18.811000 audit: PATH item=44 name=(null) inode=11998 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:18.811000 audit: PATH item=45 name=(null) inode=11999 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:18.811000 audit: PATH item=46 name=(null) inode=11998 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:18.811000 audit: PATH item=47 name=(null) inode=12000 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:18.811000 audit: PATH item=48 name=(null) inode=11998 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:18.811000 audit: PATH item=49 name=(null) inode=12001 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:18.811000 audit: PATH item=50 name=(null) inode=11998 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:18.811000 audit: PATH item=51 name=(null) inode=12002 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:18.811000 audit: PATH item=52 name=(null) inode=11998 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:18.811000 audit: PATH item=53 name=(null) inode=12003 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:18.811000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:18.811000 audit: PATH item=55 name=(null) inode=12004 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:18.811000 audit: PATH item=56 name=(null) inode=12004 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:18.811000 audit: PATH item=57 name=(null) inode=12005 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:18.811000 audit: PATH item=58 name=(null) inode=12004 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:18.811000 audit: PATH item=59 name=(null) inode=12006 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:18.811000 audit: PATH item=60 name=(null) inode=12004 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:18.811000 audit: PATH item=61 name=(null) inode=12007 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:18.811000 audit: PATH item=62 name=(null) inode=12007 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:18.811000 audit: PATH item=63 name=(null) inode=12008 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:18.811000 audit: PATH item=64 name=(null) inode=12007 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:18.811000 audit: PATH item=65 name=(null) inode=12009 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:18.811000 audit: PATH item=66 name=(null) inode=12007 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:18.811000 audit: PATH item=67 name=(null) inode=12010 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:18.811000 audit: PATH item=68 name=(null) inode=12007 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:18.811000 audit: PATH item=69 name=(null) inode=12011 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13
14:27:18.811000 audit: PATH item=70 name=(null) inode=12007 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:18.811000 audit: PATH item=71 name=(null) inode=12012 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:18.811000 audit: PATH item=72 name=(null) inode=12004 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:18.811000 audit: PATH item=73 name=(null) inode=12013 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:18.811000 audit: PATH item=74 name=(null) inode=12013 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:18.811000 audit: PATH item=75 name=(null) inode=12014 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:18.811000 audit: PATH item=76 name=(null) inode=12013 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:18.811000 audit: PATH item=77 name=(null) inode=12015 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:18.811000 audit: PATH item=78 name=(null) inode=12013 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:18.811000 audit: PATH item=79 
name=(null) inode=12016 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:18.811000 audit: PATH item=80 name=(null) inode=12013 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:18.811000 audit: PATH item=81 name=(null) inode=12017 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:18.811000 audit: PATH item=82 name=(null) inode=12013 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:18.811000 audit: PATH item=83 name=(null) inode=12018 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:18.811000 audit: PATH item=84 name=(null) inode=12004 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:18.811000 audit: PATH item=85 name=(null) inode=12019 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:18.811000 audit: PATH item=86 name=(null) inode=12019 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:18.811000 audit: PATH item=87 name=(null) inode=12020 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:18.811000 audit: PATH item=88 name=(null) inode=12019 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:18.811000 audit: PATH item=89 name=(null) inode=12021 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:18.811000 audit: PATH item=90 name=(null) inode=12019 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:18.811000 audit: PATH item=91 name=(null) inode=12022 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:18.811000 audit: PATH item=92 name=(null) inode=12019 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:18.811000 audit: PATH item=93 name=(null) inode=12023 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:18.811000 audit: PATH item=94 name=(null) inode=12019 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:18.811000 audit: PATH item=95 name=(null) inode=12024 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:18.811000 audit: PATH item=96 name=(null) inode=12004 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:18.811000 audit: PATH item=97 name=(null) inode=12025 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:18.811000 audit: PATH item=98 name=(null) inode=12025 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:18.811000 audit: PATH item=99 name=(null) inode=12026 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:18.811000 audit: PATH item=100 name=(null) inode=12025 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:18.811000 audit: PATH item=101 name=(null) inode=12027 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:18.811000 audit: PATH item=102 name=(null) inode=12025 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:18.811000 audit: PATH item=103 name=(null) inode=12028 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:18.811000 audit: PATH item=104 name=(null) inode=12025 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:18.811000 audit: PATH item=105 name=(null) inode=12029 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:18.811000 audit: PATH item=106 name=(null) inode=12025 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:18.811000 audit: PATH item=107 name=(null) inode=12030 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:18.811000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:18.811000 audit: PATH item=109 name=(null) inode=12031 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:18.811000 audit: PROCTITLE proctitle="(udev-worker)" Dec 13 14:27:18.836682 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Dec 13 14:27:18.836732 kernel: ACPI: button: Power Button [PWRF] Dec 13 14:27:18.857715 systemd-networkd[1043]: lo: Link UP Dec 13 14:27:18.858069 systemd-networkd[1043]: lo: Gained carrier Dec 13 14:27:18.858753 systemd-networkd[1043]: Enumeration completed Dec 13 14:27:18.858938 systemd[1]: Started systemd-networkd.service. Dec 13 14:27:18.859065 systemd-networkd[1043]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 14:27:18.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:27:18.895223 systemd-networkd[1043]: eth0: Link UP Dec 13 14:27:18.895230 systemd-networkd[1043]: eth0: Gained carrier Dec 13 14:27:18.900246 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 13 14:27:18.900567 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Dec 13 14:27:18.900720 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 13 14:27:18.913002 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 14:27:18.937134 systemd-networkd[1043]: eth0: DHCPv4 address 10.0.0.107/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 14:27:18.943020 kernel: kvm: Nested Virtualization enabled Dec 13 14:27:18.943137 kernel: SVM: kvm: Nested Paging enabled Dec 13 14:27:18.943173 kernel: SVM: Virtual VMLOAD VMSAVE supported Dec 13 14:27:18.943203 kernel: SVM: Virtual GIF supported Dec 13 14:27:18.967042 kernel: EDAC MC: Ver: 3.0.0 Dec 13 14:27:18.991368 systemd[1]: Finished systemd-udev-settle.service. Dec 13 14:27:18.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:18.993636 systemd[1]: Starting lvm2-activation-early.service... Dec 13 14:27:19.001688 lvm[1061]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 14:27:19.028289 systemd[1]: Finished lvm2-activation-early.service. Dec 13 14:27:19.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:19.029421 systemd[1]: Reached target cryptsetup.target. Dec 13 14:27:19.031284 systemd[1]: Starting lvm2-activation.service... Dec 13 14:27:19.035920 lvm[1062]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Dec 13 14:27:19.062063 systemd[1]: Finished lvm2-activation.service. Dec 13 14:27:19.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:19.063098 systemd[1]: Reached target local-fs-pre.target. Dec 13 14:27:19.064085 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 14:27:19.064107 systemd[1]: Reached target local-fs.target. Dec 13 14:27:19.064989 systemd[1]: Reached target machines.target. Dec 13 14:27:19.066827 systemd[1]: Starting ldconfig.service... Dec 13 14:27:19.067910 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:27:19.067947 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:27:19.068764 systemd[1]: Starting systemd-boot-update.service... Dec 13 14:27:19.070634 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 14:27:19.073368 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 14:27:19.076097 systemd[1]: Starting systemd-sysext.service... Dec 13 14:27:19.078368 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1064 (bootctl) Dec 13 14:27:19.079628 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 14:27:19.117635 kernel: loop0: detected capacity change from 0 to 210664 Dec 13 14:27:19.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:27:19.088826 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 14:27:19.093243 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 14:27:19.099086 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 14:27:19.099304 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 14:27:19.120818 systemd-fsck[1071]: fsck.fat 4.2 (2021-01-31) Dec 13 14:27:19.120818 systemd-fsck[1071]: /dev/vda1: 789 files, 119291/258078 clusters Dec 13 14:27:19.121838 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 14:27:19.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:19.125797 systemd[1]: Mounting boot.mount... Dec 13 14:27:19.279326 systemd[1]: Mounted boot.mount. Dec 13 14:27:19.290998 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 14:27:19.291601 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 14:27:19.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:19.294296 systemd[1]: Finished systemd-boot-update.service. Dec 13 14:27:19.295018 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 14:27:19.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:19.310037 kernel: loop1: detected capacity change from 0 to 210664 Dec 13 14:27:19.314137 (sd-sysext)[1077]: Using extensions 'kubernetes'. 
Dec 13 14:27:19.314591 (sd-sysext)[1077]: Merged extensions into '/usr'. Dec 13 14:27:19.330704 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:27:19.333860 systemd[1]: Mounting usr-share-oem.mount... Dec 13 14:27:19.335096 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:27:19.336357 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:27:19.338355 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:27:19.341059 systemd[1]: Starting modprobe@loop.service... Dec 13 14:27:19.342334 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:27:19.342505 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:27:19.342672 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:27:19.346185 systemd[1]: Mounted usr-share-oem.mount. Dec 13 14:27:19.347758 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:27:19.347910 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:27:19.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:19.348000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:19.349876 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:27:19.350058 systemd[1]: Finished modprobe@efi_pstore.service. 
Dec 13 14:27:19.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:19.351000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:19.352345 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:27:19.352492 systemd[1]: Finished modprobe@loop.service. Dec 13 14:27:19.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:19.353000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:19.354472 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:27:19.354620 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:27:19.355892 systemd[1]: Finished systemd-sysext.service. Dec 13 14:27:19.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:19.359217 systemd[1]: Starting ensure-sysext.service... Dec 13 14:27:19.368648 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 14:27:19.372068 systemd[1]: Reloading. 
Dec 13 14:27:19.380896 systemd-tmpfiles[1084]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 14:27:19.381842 systemd-tmpfiles[1084]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 14:27:19.384883 systemd-tmpfiles[1084]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 14:27:19.404804 ldconfig[1063]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 14:27:19.463633 /usr/lib/systemd/system-generators/torcx-generator[1103]: time="2024-12-13T14:27:19Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:27:19.463664 /usr/lib/systemd/system-generators/torcx-generator[1103]: time="2024-12-13T14:27:19Z" level=info msg="torcx already run" Dec 13 14:27:19.535924 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:27:19.535944 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:27:19.553715 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Dec 13 14:27:19.612000 audit: BPF prog-id=27 op=LOAD Dec 13 14:27:19.612000 audit: BPF prog-id=28 op=LOAD Dec 13 14:27:19.612000 audit: BPF prog-id=21 op=UNLOAD Dec 13 14:27:19.612000 audit: BPF prog-id=22 op=UNLOAD Dec 13 14:27:19.613000 audit: BPF prog-id=29 op=LOAD Dec 13 14:27:19.614000 audit: BPF prog-id=18 op=UNLOAD Dec 13 14:27:19.614000 audit: BPF prog-id=30 op=LOAD Dec 13 14:27:19.614000 audit: BPF prog-id=31 op=LOAD Dec 13 14:27:19.614000 audit: BPF prog-id=19 op=UNLOAD Dec 13 14:27:19.614000 audit: BPF prog-id=20 op=UNLOAD Dec 13 14:27:19.614000 audit: BPF prog-id=32 op=LOAD Dec 13 14:27:19.614000 audit: BPF prog-id=24 op=UNLOAD Dec 13 14:27:19.614000 audit: BPF prog-id=33 op=LOAD Dec 13 14:27:19.614000 audit: BPF prog-id=34 op=LOAD Dec 13 14:27:19.614000 audit: BPF prog-id=25 op=UNLOAD Dec 13 14:27:19.614000 audit: BPF prog-id=26 op=UNLOAD Dec 13 14:27:19.616000 audit: BPF prog-id=35 op=LOAD Dec 13 14:27:19.616000 audit: BPF prog-id=23 op=UNLOAD Dec 13 14:27:19.619611 systemd[1]: Finished ldconfig.service. Dec 13 14:27:19.620000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:19.621531 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 14:27:19.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:19.625105 systemd[1]: Starting audit-rules.service... Dec 13 14:27:19.626795 systemd[1]: Starting clean-ca-certificates.service... Dec 13 14:27:19.628817 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 14:27:19.629000 audit: BPF prog-id=36 op=LOAD Dec 13 14:27:19.631292 systemd[1]: Starting systemd-resolved.service... 
Dec 13 14:27:19.631000 audit: BPF prog-id=37 op=LOAD Dec 13 14:27:19.633360 systemd[1]: Starting systemd-timesyncd.service... Dec 13 14:27:19.635168 systemd[1]: Starting systemd-update-utmp.service... Dec 13 14:27:19.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:19.638369 systemd[1]: Finished clean-ca-certificates.service. Dec 13 14:27:19.639823 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:27:19.641964 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:27:19.642000 audit[1158]: SYSTEM_BOOT pid=1158 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 14:27:19.662000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 14:27:19.662000 audit[1168]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd379e6c20 a2=420 a3=0 items=0 ppid=1147 pid=1168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:27:19.662000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 14:27:19.677241 augenrules[1168]: No rules Dec 13 14:27:19.677154 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:27:19.680615 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:27:19.682561 systemd[1]: Starting modprobe@loop.service... 
Dec 13 14:27:19.683846 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:27:19.683954 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:27:19.684069 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:27:19.684967 systemd[1]: Finished audit-rules.service. Dec 13 14:27:19.686347 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 14:27:19.687881 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:27:19.688108 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:27:19.689517 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:27:19.689657 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:27:19.691135 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:27:19.691267 systemd[1]: Finished modprobe@loop.service. Dec 13 14:27:19.696198 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:27:19.697498 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:27:19.699338 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:27:19.701320 systemd[1]: Starting modprobe@loop.service... Dec 13 14:27:19.702219 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:27:19.702348 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:27:19.703651 systemd[1]: Starting systemd-update-done.service... 
Dec 13 14:27:19.704676 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:27:19.705640 systemd[1]: Finished systemd-update-utmp.service. Dec 13 14:27:19.707015 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:27:19.707141 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:27:19.708402 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:27:19.708522 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:27:19.709872 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:27:19.710079 systemd[1]: Finished modprobe@loop.service. Dec 13 14:27:19.711513 systemd[1]: Finished systemd-update-done.service. Dec 13 14:27:19.713900 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:27:19.714155 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:27:19.716621 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:27:19.718033 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:27:19.720352 systemd[1]: Starting modprobe@drm.service... Dec 13 14:27:19.722157 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:27:19.724008 systemd[1]: Starting modprobe@loop.service... Dec 13 14:27:19.725072 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:27:19.725196 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:27:19.726421 systemd[1]: Starting systemd-networkd-wait-online.service... 
Dec 13 14:27:19.727616 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 14:27:19.729488 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:27:19.729680 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:27:19.744729 systemd-resolved[1151]: Positive Trust Anchors:
Dec 13 14:27:19.744769 systemd-resolved[1151]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 14:27:19.744804 systemd-resolved[1151]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 14:27:19.745236 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 14:27:19.745592 systemd[1]: Finished modprobe@drm.service.
Dec 13 14:27:19.747897 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:27:19.748160 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:27:19.749609 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:27:19.749832 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:27:19.753833 systemd[1]: Finished ensure-sysext.service.
Dec 13 14:27:19.755474 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:27:19.755553 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:27:19.760603 systemd[1]: Started systemd-timesyncd.service.
Dec 13 14:27:19.760608 systemd-resolved[1151]: Defaulting to hostname 'linux'.
Dec 13 14:27:19.761735 systemd[1]: Reached target time-set.target.
Dec 13 14:27:20.325268 systemd-timesyncd[1157]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Dec 13 14:27:20.325314 systemd-timesyncd[1157]: Initial clock synchronization to Fri 2024-12-13 14:27:20.325186 UTC.
Dec 13 14:27:20.325599 systemd-resolved[1151]: Clock change detected. Flushing caches.
Dec 13 14:27:20.326086 systemd[1]: Started systemd-resolved.service.
Dec 13 14:27:20.327161 systemd[1]: Reached target network.target.
Dec 13 14:27:20.328053 systemd[1]: Reached target nss-lookup.target.
Dec 13 14:27:20.328910 systemd[1]: Reached target sysinit.target.
Dec 13 14:27:20.329816 systemd[1]: Started motdgen.path.
Dec 13 14:27:20.330596 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Dec 13 14:27:20.331965 systemd[1]: Started logrotate.timer.
Dec 13 14:27:20.332887 systemd[1]: Started mdadm.timer.
Dec 13 14:27:20.333583 systemd[1]: Started systemd-tmpfiles-clean.timer.
Dec 13 14:27:20.334470 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 14:27:20.334496 systemd[1]: Reached target paths.target.
Dec 13 14:27:20.335280 systemd[1]: Reached target timers.target.
Dec 13 14:27:20.336408 systemd[1]: Listening on dbus.socket.
Dec 13 14:27:20.347094 systemd[1]: Starting docker.socket...
Dec 13 14:27:20.350245 systemd[1]: Listening on sshd.socket.
Dec 13 14:27:20.351092 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:27:20.351468 systemd[1]: Listening on docker.socket.
Dec 13 14:27:20.352312 systemd[1]: Reached target sockets.target.
Dec 13 14:27:20.353147 systemd[1]: Reached target basic.target.
Dec 13 14:27:20.353960 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Dec 13 14:27:20.353991 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Dec 13 14:27:20.355113 systemd[1]: Starting containerd.service...
Dec 13 14:27:20.357132 systemd[1]: Starting dbus.service...
Dec 13 14:27:20.359206 systemd[1]: Starting enable-oem-cloudinit.service...
Dec 13 14:27:20.361447 systemd[1]: Starting extend-filesystems.service...
Dec 13 14:27:20.362672 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Dec 13 14:27:20.364197 systemd[1]: Starting motdgen.service...
Dec 13 14:27:20.368059 jq[1190]: false
Dec 13 14:27:20.379298 dbus-daemon[1189]: [system] SELinux support is enabled
Dec 13 14:27:20.382050 extend-filesystems[1191]: Found loop1
Dec 13 14:27:20.383324 extend-filesystems[1191]: Found sr0
Dec 13 14:27:20.384287 extend-filesystems[1191]: Found vda
Dec 13 14:27:20.385332 extend-filesystems[1191]: Found vda1
Dec 13 14:27:20.385332 extend-filesystems[1191]: Found vda2
Dec 13 14:27:20.385332 extend-filesystems[1191]: Found vda3
Dec 13 14:27:20.385332 extend-filesystems[1191]: Found usr
Dec 13 14:27:20.385332 extend-filesystems[1191]: Found vda4
Dec 13 14:27:20.385332 extend-filesystems[1191]: Found vda6
Dec 13 14:27:20.385332 extend-filesystems[1191]: Found vda7
Dec 13 14:27:20.385332 extend-filesystems[1191]: Found vda9
Dec 13 14:27:20.385332 extend-filesystems[1191]: Checking size of /dev/vda9
Dec 13 14:27:20.395588 extend-filesystems[1191]: Resized partition /dev/vda9
Dec 13 14:27:20.394445 systemd[1]: Starting ssh-key-proc-cmdline.service...
Dec 13 14:27:20.396095 extend-filesystems[1206]: resize2fs 1.46.5 (30-Dec-2021)
Dec 13 14:27:20.400403 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Dec 13 14:27:20.400517 systemd[1]: Starting sshd-keygen.service...
Dec 13 14:27:20.403758 systemd[1]: Starting systemd-logind.service...
Dec 13 14:27:20.405637 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:27:20.405760 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 14:27:20.406501 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 14:27:20.407598 systemd[1]: Starting update-engine.service...
Dec 13 14:27:20.411793 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Dec 13 14:27:20.413871 systemd[1]: Started dbus.service.
Dec 13 14:27:20.419410 jq[1212]: true
Dec 13 14:27:20.420645 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 14:27:20.420945 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Dec 13 14:27:20.421500 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 14:27:20.421727 systemd[1]: Finished motdgen.service.
Dec 13 14:27:20.423338 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 14:27:20.423517 systemd[1]: Finished ssh-key-proc-cmdline.service.
Dec 13 14:27:20.454487 jq[1215]: true
Dec 13 14:27:20.461384 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Dec 13 14:27:20.463747 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 14:27:20.463779 systemd[1]: Reached target system-config.target.
Dec 13 14:27:20.464992 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 14:27:20.465008 systemd[1]: Reached target user-config.target.
Dec 13 14:27:20.489416 extend-filesystems[1206]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Dec 13 14:27:20.489416 extend-filesystems[1206]: old_desc_blocks = 1, new_desc_blocks = 1
Dec 13 14:27:20.489416 extend-filesystems[1206]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Dec 13 14:27:20.506950 extend-filesystems[1191]: Resized filesystem in /dev/vda9
Dec 13 14:27:20.508628 bash[1232]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 14:27:20.490328 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 14:27:20.509224 env[1216]: time="2024-12-13T14:27:20.507103641Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Dec 13 14:27:20.490598 systemd[1]: Finished extend-filesystems.service.
Dec 13 14:27:20.509768 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Dec 13 14:27:20.516745 update_engine[1211]: I1213 14:27:20.516507 1211 main.cc:92] Flatcar Update Engine starting
Dec 13 14:27:20.517506 systemd-logind[1208]: Watching system buttons on /dev/input/event1 (Power Button)
Dec 13 14:27:20.517748 systemd-logind[1208]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 13 14:27:20.518113 systemd-logind[1208]: New seat seat0.
Dec 13 14:27:20.519619 systemd[1]: Started update-engine.service.
Dec 13 14:27:20.520431 update_engine[1211]: I1213 14:27:20.519748 1211 update_check_scheduler.cc:74] Next update check in 11m40s
Dec 13 14:27:20.523507 systemd[1]: Started locksmithd.service.
Dec 13 14:27:20.525127 systemd[1]: Started systemd-logind.service.
Dec 13 14:27:20.544750 env[1216]: time="2024-12-13T14:27:20.544575044Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 14:27:20.544750 env[1216]: time="2024-12-13T14:27:20.544744351Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:27:20.546513 env[1216]: time="2024-12-13T14:27:20.546437366Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:27:20.546585 env[1216]: time="2024-12-13T14:27:20.546518188Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:27:20.546917 env[1216]: time="2024-12-13T14:27:20.546877602Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:27:20.546973 env[1216]: time="2024-12-13T14:27:20.546917216Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 14:27:20.546973 env[1216]: time="2024-12-13T14:27:20.546942764Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Dec 13 14:27:20.546973 env[1216]: time="2024-12-13T14:27:20.546958063Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 14:27:20.547169 env[1216]: time="2024-12-13T14:27:20.547125737Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:27:20.547698 env[1216]: time="2024-12-13T14:27:20.547670238Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:27:20.547951 env[1216]: time="2024-12-13T14:27:20.547918343Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:27:20.548005 env[1216]: time="2024-12-13T14:27:20.547947819Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 14:27:20.548068 env[1216]: time="2024-12-13T14:27:20.548037657Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Dec 13 14:27:20.548141 env[1216]: time="2024-12-13T14:27:20.548068635Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 14:27:20.556001 env[1216]: time="2024-12-13T14:27:20.555948833Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 14:27:20.556001 env[1216]: time="2024-12-13T14:27:20.555986204Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 14:27:20.556001 env[1216]: time="2024-12-13T14:27:20.555999589Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 14:27:20.557013 env[1216]: time="2024-12-13T14:27:20.556059040Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 14:27:20.557013 env[1216]: time="2024-12-13T14:27:20.556075190Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 14:27:20.557013 env[1216]: time="2024-12-13T14:27:20.556134091Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 14:27:20.557013 env[1216]: time="2024-12-13T14:27:20.556147175Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 14:27:20.557013 env[1216]: time="2024-12-13T14:27:20.556160430Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 14:27:20.557013 env[1216]: time="2024-12-13T14:27:20.556173194Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Dec 13 14:27:20.557013 env[1216]: time="2024-12-13T14:27:20.556186038Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 14:27:20.557013 env[1216]: time="2024-12-13T14:27:20.556198041Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 14:27:20.557013 env[1216]: time="2024-12-13T14:27:20.556208921Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 14:27:20.557013 env[1216]: time="2024-12-13T14:27:20.556335990Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 14:27:20.557013 env[1216]: time="2024-12-13T14:27:20.556447088Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 14:27:20.557013 env[1216]: time="2024-12-13T14:27:20.556700413Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 14:27:20.557013 env[1216]: time="2024-12-13T14:27:20.556728395Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 14:27:20.557013 env[1216]: time="2024-12-13T14:27:20.556740338Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 14:27:20.557338 env[1216]: time="2024-12-13T14:27:20.556790161Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 14:27:20.557338 env[1216]: time="2024-12-13T14:27:20.556802945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 14:27:20.557338 env[1216]: time="2024-12-13T14:27:20.556814928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 14:27:20.557338 env[1216]: time="2024-12-13T14:27:20.556825748Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 14:27:20.557338 env[1216]: time="2024-12-13T14:27:20.556848651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 14:27:20.557338 env[1216]: time="2024-12-13T14:27:20.556860213Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 14:27:20.557338 env[1216]: time="2024-12-13T14:27:20.556872265Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 14:27:20.557338 env[1216]: time="2024-12-13T14:27:20.556882224Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 14:27:20.557338 env[1216]: time="2024-12-13T14:27:20.556894297Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 14:27:20.557338 env[1216]: time="2024-12-13T14:27:20.557021475Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 14:27:20.557338 env[1216]: time="2024-12-13T14:27:20.557036062Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 14:27:20.557338 env[1216]: time="2024-12-13T14:27:20.557050680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 14:27:20.557338 env[1216]: time="2024-12-13T14:27:20.557062221Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 14:27:20.557338 env[1216]: time="2024-12-13T14:27:20.557086457Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Dec 13 14:27:20.557794 env[1216]: time="2024-12-13T14:27:20.557096496Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 14:27:20.557794 env[1216]: time="2024-12-13T14:27:20.557117175Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Dec 13 14:27:20.557794 env[1216]: time="2024-12-13T14:27:20.557153994Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 14:27:20.557868 env[1216]: time="2024-12-13T14:27:20.557425914Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 13 14:27:20.557868 env[1216]: time="2024-12-13T14:27:20.557489753Z" level=info msg="Connect containerd service"
Dec 13 14:27:20.557868 env[1216]: time="2024-12-13T14:27:20.557541470Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 13 14:27:20.559394 env[1216]: time="2024-12-13T14:27:20.558273182Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 14:27:20.559394 env[1216]: time="2024-12-13T14:27:20.558407374Z" level=info msg="Start subscribing containerd event"
Dec 13 14:27:20.559394 env[1216]: time="2024-12-13T14:27:20.558506169Z" level=info msg="Start recovering state"
Dec 13 14:27:20.559394 env[1216]: time="2024-12-13T14:27:20.558583114Z" level=info msg="Start event monitor"
Dec 13 14:27:20.559394 env[1216]: time="2024-12-13T14:27:20.558602039Z" level=info msg="Start snapshots syncer"
Dec 13 14:27:20.559394 env[1216]: time="2024-12-13T14:27:20.558614663Z" level=info msg="Start cni network conf syncer for default"
Dec 13 14:27:20.559394 env[1216]: time="2024-12-13T14:27:20.558623710Z" level=info msg="Start streaming server"
Dec 13 14:27:20.559394 env[1216]: time="2024-12-13T14:27:20.558985819Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 14:27:20.559394 env[1216]: time="2024-12-13T14:27:20.559020774Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 14:27:20.559176 systemd[1]: Started containerd.service.
Dec 13 14:27:20.560531 env[1216]: time="2024-12-13T14:27:20.560362962Z" level=info msg="containerd successfully booted in 0.054064s"
Dec 13 14:27:20.577935 locksmithd[1242]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 14:27:20.663265 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:27:20.663328 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:27:21.135613 systemd-networkd[1043]: eth0: Gained IPv6LL
Dec 13 14:27:21.137493 systemd[1]: Finished systemd-networkd-wait-online.service.
Dec 13 14:27:21.142324 systemd[1]: Reached target network-online.target.
Dec 13 14:27:21.145093 systemd[1]: Starting kubelet.service...
Dec 13 14:27:21.518708 sshd_keygen[1213]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 14:27:21.578773 systemd[1]: Finished sshd-keygen.service.
Dec 13 14:27:21.581840 systemd[1]: Starting issuegen.service...
Dec 13 14:27:21.587335 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 14:27:21.587522 systemd[1]: Finished issuegen.service.
Dec 13 14:27:21.590221 systemd[1]: Starting systemd-user-sessions.service...
Dec 13 14:27:21.597951 systemd[1]: Finished systemd-user-sessions.service.
Dec 13 14:27:21.600899 systemd[1]: Started getty@tty1.service.
Dec 13 14:27:21.603186 systemd[1]: Started serial-getty@ttyS0.service.
Dec 13 14:27:21.604759 systemd[1]: Reached target getty.target.
Dec 13 14:27:22.760643 systemd[1]: Started kubelet.service.
Dec 13 14:27:22.762494 systemd[1]: Reached target multi-user.target.
Dec 13 14:27:22.766270 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Dec 13 14:27:22.773461 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec 13 14:27:22.773647 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Dec 13 14:27:22.775257 systemd[1]: Startup finished in 684ms (kernel) + 5.092s (initrd) + 8.346s (userspace) = 14.123s.
Dec 13 14:27:23.366580 kubelet[1266]: E1213 14:27:23.366511 1266 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 14:27:23.368311 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:27:23.368441 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:27:23.368657 systemd[1]: kubelet.service: Consumed 2.080s CPU time.
Dec 13 14:27:29.920111 systemd[1]: Created slice system-sshd.slice.
Dec 13 14:27:29.921173 systemd[1]: Started sshd@0-10.0.0.107:22-10.0.0.1:45048.service.
Dec 13 14:27:29.962366 sshd[1276]: Accepted publickey for core from 10.0.0.1 port 45048 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8
Dec 13 14:27:29.963814 sshd[1276]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:27:29.971485 systemd[1]: Created slice user-500.slice.
Dec 13 14:27:29.972456 systemd[1]: Starting user-runtime-dir@500.service...
Dec 13 14:27:29.973894 systemd-logind[1208]: New session 1 of user core.
Dec 13 14:27:29.979676 systemd[1]: Finished user-runtime-dir@500.service.
Dec 13 14:27:29.980968 systemd[1]: Starting user@500.service...
Dec 13 14:27:29.983540 (systemd)[1279]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:27:30.064246 systemd[1279]: Queued start job for default target default.target.
Dec 13 14:27:30.064716 systemd[1279]: Reached target paths.target.
Dec 13 14:27:30.064735 systemd[1279]: Reached target sockets.target.
Dec 13 14:27:30.064746 systemd[1279]: Reached target timers.target.
Dec 13 14:27:30.064756 systemd[1279]: Reached target basic.target.
Dec 13 14:27:30.064799 systemd[1279]: Reached target default.target.
Dec 13 14:27:30.064834 systemd[1279]: Startup finished in 76ms.
Dec 13 14:27:30.064893 systemd[1]: Started user@500.service.
Dec 13 14:27:30.065908 systemd[1]: Started session-1.scope.
Dec 13 14:27:30.117755 systemd[1]: Started sshd@1-10.0.0.107:22-10.0.0.1:45060.service.
Dec 13 14:27:30.156671 sshd[1288]: Accepted publickey for core from 10.0.0.1 port 45060 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8
Dec 13 14:27:30.158101 sshd[1288]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:27:30.162328 systemd-logind[1208]: New session 2 of user core.
Dec 13 14:27:30.163529 systemd[1]: Started session-2.scope.
Dec 13 14:27:30.217720 sshd[1288]: pam_unix(sshd:session): session closed for user core
Dec 13 14:27:30.221733 systemd[1]: Started sshd@2-10.0.0.107:22-10.0.0.1:45074.service.
Dec 13 14:27:30.222280 systemd[1]: sshd@1-10.0.0.107:22-10.0.0.1:45060.service: Deactivated successfully.
Dec 13 14:27:30.223023 systemd[1]: session-2.scope: Deactivated successfully.
Dec 13 14:27:30.223606 systemd-logind[1208]: Session 2 logged out. Waiting for processes to exit.
Dec 13 14:27:30.224445 systemd-logind[1208]: Removed session 2.
Dec 13 14:27:30.262734 sshd[1293]: Accepted publickey for core from 10.0.0.1 port 45074 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8
Dec 13 14:27:30.264214 sshd[1293]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:27:30.268426 systemd-logind[1208]: New session 3 of user core.
Dec 13 14:27:30.269515 systemd[1]: Started session-3.scope.
Dec 13 14:27:30.321687 sshd[1293]: pam_unix(sshd:session): session closed for user core
Dec 13 14:27:30.324490 systemd[1]: sshd@2-10.0.0.107:22-10.0.0.1:45074.service: Deactivated successfully.
Dec 13 14:27:30.324989 systemd[1]: session-3.scope: Deactivated successfully.
Dec 13 14:27:30.325586 systemd-logind[1208]: Session 3 logged out. Waiting for processes to exit.
Dec 13 14:27:30.326599 systemd[1]: Started sshd@3-10.0.0.107:22-10.0.0.1:45090.service.
Dec 13 14:27:30.327338 systemd-logind[1208]: Removed session 3.
Dec 13 14:27:30.368232 sshd[1300]: Accepted publickey for core from 10.0.0.1 port 45090 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8
Dec 13 14:27:30.369547 sshd[1300]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:27:30.374357 systemd-logind[1208]: New session 4 of user core.
Dec 13 14:27:30.375360 systemd[1]: Started session-4.scope.
Dec 13 14:27:30.433403 sshd[1300]: pam_unix(sshd:session): session closed for user core
Dec 13 14:27:30.436977 systemd[1]: sshd@3-10.0.0.107:22-10.0.0.1:45090.service: Deactivated successfully.
Dec 13 14:27:30.437682 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 14:27:30.438306 systemd-logind[1208]: Session 4 logged out. Waiting for processes to exit.
Dec 13 14:27:30.439325 systemd[1]: Started sshd@4-10.0.0.107:22-10.0.0.1:45092.service.
Dec 13 14:27:30.440320 systemd-logind[1208]: Removed session 4.
Dec 13 14:27:30.482001 sshd[1306]: Accepted publickey for core from 10.0.0.1 port 45092 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8
Dec 13 14:27:30.483275 sshd[1306]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:27:30.487293 systemd-logind[1208]: New session 5 of user core.
Dec 13 14:27:30.488171 systemd[1]: Started session-5.scope.
Dec 13 14:27:30.544933 sudo[1309]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 14:27:30.545142 sudo[1309]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Dec 13 14:27:30.558063 systemd[1]: Starting coreos-metadata.service...
Dec 13 14:27:30.565329 systemd[1]: coreos-metadata.service: Deactivated successfully.
Dec 13 14:27:30.565504 systemd[1]: Finished coreos-metadata.service.
Dec 13 14:27:31.864408 systemd[1]: Stopped kubelet.service.
Dec 13 14:27:31.864654 systemd[1]: kubelet.service: Consumed 2.080s CPU time.
Dec 13 14:27:31.867676 systemd[1]: Starting kubelet.service...
Dec 13 14:27:31.887831 systemd[1]: Reloading.
Dec 13 14:27:32.019715 /usr/lib/systemd/system-generators/torcx-generator[1373]: time="2024-12-13T14:27:32Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 14:27:32.019738 /usr/lib/systemd/system-generators/torcx-generator[1373]: time="2024-12-13T14:27:32Z" level=info msg="torcx already run"
Dec 13 14:27:32.318855 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 14:27:32.318870 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 14:27:32.337535 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 14:27:32.419867 systemd[1]: Started kubelet.service.
Dec 13 14:27:32.421904 systemd[1]: Stopping kubelet.service...
Dec 13 14:27:32.424192 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 14:27:32.424412 systemd[1]: Stopped kubelet.service.
Dec 13 14:27:32.426226 systemd[1]: Starting kubelet.service...
Dec 13 14:27:32.500164 systemd[1]: Started kubelet.service.
Dec 13 14:27:32.624523 kubelet[1422]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 14:27:32.624523 kubelet[1422]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 14:27:32.624523 kubelet[1422]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 14:27:32.736430 kubelet[1422]: I1213 14:27:32.736305 1422 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 14:27:33.037526 kubelet[1422]: I1213 14:27:33.037434 1422 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Dec 13 14:27:33.037526 kubelet[1422]: I1213 14:27:33.037463 1422 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 14:27:33.037710 kubelet[1422]: I1213 14:27:33.037697 1422 server.go:927] "Client rotation is on, will bootstrap in background"
Dec 13 14:27:33.082141 kubelet[1422]: I1213 14:27:33.082085 1422 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 14:27:33.145409 kubelet[1422]: I1213 14:27:33.145352 1422 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 14:27:33.150842 kubelet[1422]: I1213 14:27:33.150796 1422 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 14:27:33.151017 kubelet[1422]: I1213 14:27:33.150835 1422 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.107","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 14:27:33.157256 kubelet[1422]: I1213 14:27:33.157226 1422 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 14:27:33.157256 kubelet[1422]: I1213 14:27:33.157252 1422 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 14:27:33.157399 kubelet[1422]: I1213 14:27:33.157386 1422 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 14:27:33.160527 kubelet[1422]: I1213 14:27:33.160503 1422 kubelet.go:400] "Attempting to sync node with API server"
Dec 13 14:27:33.160527 kubelet[1422]: I1213 14:27:33.160527 1422 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 14:27:33.160600 kubelet[1422]: I1213 14:27:33.160552 1422 kubelet.go:312] "Adding apiserver pod source"
Dec 13 14:27:33.160600 kubelet[1422]: I1213 14:27:33.160581 1422 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 14:27:33.160874 kubelet[1422]: E1213 14:27:33.160787 1422 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:27:33.161028 kubelet[1422]: E1213 14:27:33.160907 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:27:33.183940 kubelet[1422]: W1213 14:27:33.183880 1422 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.107" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Dec 13 14:27:33.184146 kubelet[1422]: E1213 14:27:33.183985 1422 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.107" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Dec 13 14:27:33.184146 kubelet[1422]: W1213 14:27:33.184061 1422 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Dec 13 14:27:33.184146 kubelet[1422]: E1213 14:27:33.184110 1422 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Dec 13 14:27:33.190226 kubelet[1422]: I1213 14:27:33.190163 1422 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Dec 13 14:27:33.202608 kubelet[1422]: I1213 14:27:33.202539 1422 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 14:27:33.202608 kubelet[1422]: W1213 14:27:33.202618 1422 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 13 14:27:33.203214 kubelet[1422]: I1213 14:27:33.203190 1422 server.go:1264] "Started kubelet"
Dec 13 14:27:33.204031 kubelet[1422]: I1213 14:27:33.203893 1422 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 14:27:33.204398 kubelet[1422]: I1213 14:27:33.204356 1422 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 14:27:33.204475 kubelet[1422]: I1213 14:27:33.204422 1422 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 14:27:33.238869 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Dec 13 14:27:33.240027 kubelet[1422]: I1213 14:27:33.239981 1422 server.go:455] "Adding debug handlers to kubelet server"
Dec 13 14:27:33.241057 kubelet[1422]: I1213 14:27:33.241026 1422 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 14:27:33.281798 kubelet[1422]: I1213 14:27:33.281772 1422 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 14:27:33.281909 kubelet[1422]: I1213 14:27:33.281881 1422 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Dec 13 14:27:33.281950 kubelet[1422]: I1213 14:27:33.281932 1422 reconciler.go:26] "Reconciler: start to sync state"
Dec 13 14:27:33.285070 kubelet[1422]: E1213 14:27:33.285038 1422 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 14:27:33.285329 kubelet[1422]: I1213 14:27:33.285192 1422 factory.go:221] Registration of the systemd container factory successfully
Dec 13 14:27:33.285329 kubelet[1422]: I1213 14:27:33.285267 1422 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 14:27:33.286309 kubelet[1422]: I1213 14:27:33.286292 1422 factory.go:221] Registration of the containerd container factory successfully
Dec 13 14:27:33.286530 kubelet[1422]: W1213 14:27:33.286508 1422 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Dec 13 14:27:33.286577 kubelet[1422]: E1213 14:27:33.286542 1422 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Dec 13 14:27:33.286671 kubelet[1422]: E1213 14:27:33.286646 1422 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.107\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms"
Dec 13 14:27:33.356272 kubelet[1422]: E1213 14:27:33.356063 1422 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.107.1810c2cf8c291718 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.107,UID:10.0.0.107,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.107,},FirstTimestamp:2024-12-13 14:27:33.203162904 +0000 UTC m=+0.696245530,LastTimestamp:2024-12-13 14:27:33.203162904 +0000 UTC m=+0.696245530,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.107,}"
Dec 13 14:27:33.358439 kubelet[1422]: I1213 14:27:33.358346 1422 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 14:27:33.358439 kubelet[1422]: I1213 14:27:33.358360 1422 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 14:27:33.358439 kubelet[1422]: I1213 14:27:33.358402 1422 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 14:27:33.360436 kubelet[1422]: E1213 14:27:33.360328 1422 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.107.1810c2cf910a39d1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.107,UID:10.0.0.107,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.0.0.107,},FirstTimestamp:2024-12-13 14:27:33.285026257 +0000 UTC m=+0.778108893,LastTimestamp:2024-12-13 14:27:33.285026257 +0000 UTC m=+0.778108893,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.107,}"
Dec 13 14:27:33.363918 kubelet[1422]: E1213 14:27:33.363823 1422 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.107.1810c2cf955a887c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.107,UID:10.0.0.107,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.107 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.107,},FirstTimestamp:2024-12-13 14:27:33.35739814 +0000 UTC m=+0.850480787,LastTimestamp:2024-12-13 14:27:33.35739814 +0000 UTC m=+0.850480787,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.107,}"
Dec 13 14:27:33.367479 kubelet[1422]: E1213 14:27:33.367412 1422 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.107.1810c2cf955d315f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.107,UID:10.0.0.107,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 10.0.0.107 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:10.0.0.107,},FirstTimestamp:2024-12-13 14:27:33.357572447 +0000 UTC m=+0.850655083,LastTimestamp:2024-12-13 14:27:33.357572447 +0000 UTC m=+0.850655083,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.107,}"
Dec 13 14:27:33.370497 kubelet[1422]: E1213 14:27:33.370441 1422 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.107.1810c2cf955d42ac default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.107,UID:10.0.0.107,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 10.0.0.107 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:10.0.0.107,},FirstTimestamp:2024-12-13 14:27:33.357576876 +0000 UTC m=+0.850659512,LastTimestamp:2024-12-13 14:27:33.357576876 +0000 UTC m=+0.850659512,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.107,}"
Dec 13 14:27:33.383394 kubelet[1422]: I1213 14:27:33.383350 1422 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.107"
Dec 13 14:27:33.387294 kubelet[1422]: E1213 14:27:33.387244 1422 kubelet_node_status.go:96] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.107"
Dec 13 14:27:33.387294 kubelet[1422]: E1213 14:27:33.387197 1422 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.107.1810c2cf955a887c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.107.1810c2cf955a887c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.107,UID:10.0.0.107,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.107 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.107,},FirstTimestamp:2024-12-13 14:27:33.35739814 +0000 UTC m=+0.850480787,LastTimestamp:2024-12-13 14:27:33.383300256 +0000 UTC m=+0.876382892,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.107,}"
Dec 13 14:27:33.390469 kubelet[1422]: E1213 14:27:33.390345 1422 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.107.1810c2cf955d315f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.107.1810c2cf955d315f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.107,UID:10.0.0.107,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 10.0.0.107 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:10.0.0.107,},FirstTimestamp:2024-12-13 14:27:33.357572447 +0000 UTC m=+0.850655083,LastTimestamp:2024-12-13 14:27:33.383309564 +0000 UTC m=+0.876392190,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.107,}"
Dec 13 14:27:33.398580 kubelet[1422]: E1213 14:27:33.398442 1422 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.107.1810c2cf955d42ac\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.107.1810c2cf955d42ac default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.107,UID:10.0.0.107,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 10.0.0.107 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:10.0.0.107,},FirstTimestamp:2024-12-13 14:27:33.357576876 +0000 UTC m=+0.850659512,LastTimestamp:2024-12-13 14:27:33.383313691 +0000 UTC m=+0.876396327,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.107,}"
Dec 13 14:27:33.491238 kubelet[1422]: E1213 14:27:33.491183 1422 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.107\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms"
Dec 13 14:27:33.589225 kubelet[1422]: I1213 14:27:33.589180 1422 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.107"
Dec 13 14:27:34.039305 kubelet[1422]: I1213 14:27:34.039251 1422 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Dec 13 14:27:34.039685 kubelet[1422]: E1213 14:27:34.039578 1422 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.100:6443/api/v1/namespaces/default/events\": read tcp 10.0.0.107:41256->10.0.0.100:6443: use of closed network connection" event="&Event{ObjectMeta:{10.0.0.107.1810c2cf955a887c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.107,UID:10.0.0.107,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.107 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.107,},FirstTimestamp:2024-12-13 14:27:33.35739814 +0000 UTC m=+0.850480787,LastTimestamp:2024-12-13 14:27:33.589122697 +0000 UTC m=+1.082205333,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.107,}"
Dec 13 14:27:34.161743 kubelet[1422]: E1213 14:27:34.161666 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:27:34.632863 kubelet[1422]: I1213 14:27:34.632682 1422 kubelet_node_status.go:76] "Successfully registered node" node="10.0.0.107"
Dec 13 14:27:34.634388 kubelet[1422]: I1213 14:27:34.634301 1422 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Dec 13 14:27:34.635009 env[1216]: time="2024-12-13T14:27:34.634939627Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 13 14:27:34.635350 kubelet[1422]: I1213 14:27:34.635282 1422 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Dec 13 14:27:34.657656 kubelet[1422]: I1213 14:27:34.657594 1422 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 14:27:34.659243 kubelet[1422]: I1213 14:27:34.659188 1422 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 14:27:34.659243 kubelet[1422]: I1213 14:27:34.659246 1422 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 14:27:34.659440 kubelet[1422]: I1213 14:27:34.659277 1422 kubelet.go:2337] "Starting kubelet main sync loop"
Dec 13 14:27:34.659440 kubelet[1422]: E1213 14:27:34.659332 1422 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 14:27:34.659914 kubelet[1422]: I1213 14:27:34.659872 1422 policy_none.go:49] "None policy: Start"
Dec 13 14:27:34.660904 kubelet[1422]: I1213 14:27:34.660884 1422 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 14:27:34.661093 kubelet[1422]: I1213 14:27:34.661081 1422 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 14:27:34.694469 systemd[1]: Created slice kubepods.slice.
Dec 13 14:27:34.698693 systemd[1]: Created slice kubepods-besteffort.slice.
Dec 13 14:27:34.711587 systemd[1]: Created slice kubepods-burstable.slice.
Dec 13 14:27:34.713201 kubelet[1422]: I1213 14:27:34.713181 1422 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 14:27:34.713584 kubelet[1422]: I1213 14:27:34.713543 1422 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 13 14:27:34.713879 kubelet[1422]: I1213 14:27:34.713863 1422 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 14:27:34.715491 sudo[1309]: pam_unix(sudo:session): session closed for user root
Dec 13 14:27:34.718578 sshd[1306]: pam_unix(sshd:session): session closed for user core
Dec 13 14:27:34.721267 systemd[1]: sshd@4-10.0.0.107:22-10.0.0.1:45092.service: Deactivated successfully.
Dec 13 14:27:34.722160 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 14:27:34.723321 systemd-logind[1208]: Session 5 logged out. Waiting for processes to exit.
Dec 13 14:27:34.724081 systemd-logind[1208]: Removed session 5.
Dec 13 14:27:35.161955 kubelet[1422]: I1213 14:27:35.161895 1422 apiserver.go:52] "Watching apiserver"
Dec 13 14:27:35.161955 kubelet[1422]: E1213 14:27:35.161925 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:27:35.166380 kubelet[1422]: I1213 14:27:35.166312 1422 topology_manager.go:215] "Topology Admit Handler" podUID="2d5423a1-5f32-40d7-8edd-6c1c172668ff" podNamespace="kube-system" podName="cilium-tvdvv"
Dec 13 14:27:35.166540 kubelet[1422]: I1213 14:27:35.166518 1422 topology_manager.go:215] "Topology Admit Handler" podUID="3576b677-342f-4f55-9862-08f61c03bbed" podNamespace="kube-system" podName="kube-proxy-54dw8"
Dec 13 14:27:35.171032 systemd[1]: Created slice kubepods-burstable-pod2d5423a1_5f32_40d7_8edd_6c1c172668ff.slice.
Dec 13 14:27:35.177746 systemd[1]: Created slice kubepods-besteffort-pod3576b677_342f_4f55_9862_08f61c03bbed.slice.
Dec 13 14:27:35.183333 kubelet[1422]: I1213 14:27:35.183295 1422 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Dec 13 14:27:35.190309 kubelet[1422]: I1213 14:27:35.190274 1422 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2d5423a1-5f32-40d7-8edd-6c1c172668ff-host-proc-sys-net\") pod \"cilium-tvdvv\" (UID: \"2d5423a1-5f32-40d7-8edd-6c1c172668ff\") " pod="kube-system/cilium-tvdvv"
Dec 13 14:27:35.190397 kubelet[1422]: I1213 14:27:35.190310 1422 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2d5423a1-5f32-40d7-8edd-6c1c172668ff-host-proc-sys-kernel\") pod \"cilium-tvdvv\" (UID: \"2d5423a1-5f32-40d7-8edd-6c1c172668ff\") " pod="kube-system/cilium-tvdvv"
Dec 13 14:27:35.190397 kubelet[1422]: I1213 14:27:35.190329 1422 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2d5423a1-5f32-40d7-8edd-6c1c172668ff-hubble-tls\") pod \"cilium-tvdvv\" (UID: \"2d5423a1-5f32-40d7-8edd-6c1c172668ff\") " pod="kube-system/cilium-tvdvv"
Dec 13 14:27:35.190397 kubelet[1422]: I1213 14:27:35.190344 1422 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3576b677-342f-4f55-9862-08f61c03bbed-lib-modules\") pod \"kube-proxy-54dw8\" (UID: \"3576b677-342f-4f55-9862-08f61c03bbed\") " pod="kube-system/kube-proxy-54dw8"
Dec 13 14:27:35.190397 kubelet[1422]: I1213 14:27:35.190357 1422 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2d5423a1-5f32-40d7-8edd-6c1c172668ff-cilium-cgroup\") pod \"cilium-tvdvv\" (UID: \"2d5423a1-5f32-40d7-8edd-6c1c172668ff\") " pod="kube-system/cilium-tvdvv"
Dec 13 14:27:35.190397 kubelet[1422]: I1213 14:27:35.190388 1422 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2d5423a1-5f32-40d7-8edd-6c1c172668ff-cni-path\") pod \"cilium-tvdvv\" (UID: \"2d5423a1-5f32-40d7-8edd-6c1c172668ff\") " pod="kube-system/cilium-tvdvv"
Dec 13 14:27:35.190529 kubelet[1422]: I1213 14:27:35.190440 1422 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcb95\" (UniqueName: \"kubernetes.io/projected/2d5423a1-5f32-40d7-8edd-6c1c172668ff-kube-api-access-mcb95\") pod \"cilium-tvdvv\" (UID: \"2d5423a1-5f32-40d7-8edd-6c1c172668ff\") " pod="kube-system/cilium-tvdvv"
Dec 13 14:27:35.190529 kubelet[1422]: I1213 14:27:35.190485 1422 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2d5423a1-5f32-40d7-8edd-6c1c172668ff-etc-cni-netd\") pod \"cilium-tvdvv\" (UID: \"2d5423a1-5f32-40d7-8edd-6c1c172668ff\") " pod="kube-system/cilium-tvdvv"
Dec 13 14:27:35.190529 kubelet[1422]: I1213 14:27:35.190503 1422 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2d5423a1-5f32-40d7-8edd-6c1c172668ff-lib-modules\") pod \"cilium-tvdvv\" (UID: \"2d5423a1-5f32-40d7-8edd-6c1c172668ff\") " pod="kube-system/cilium-tvdvv"
Dec 13 14:27:35.190593 kubelet[1422]: I1213 14:27:35.190535 1422 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2d5423a1-5f32-40d7-8edd-6c1c172668ff-cilium-config-path\") pod \"cilium-tvdvv\" (UID: \"2d5423a1-5f32-40d7-8edd-6c1c172668ff\") " pod="kube-system/cilium-tvdvv"
Dec 13 14:27:35.190593 kubelet[1422]: I1213 14:27:35.190558 1422 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3576b677-342f-4f55-9862-08f61c03bbed-kube-proxy\") pod \"kube-proxy-54dw8\" (UID: \"3576b677-342f-4f55-9862-08f61c03bbed\") " pod="kube-system/kube-proxy-54dw8"
Dec 13 14:27:35.190640 kubelet[1422]: I1213 14:27:35.190609 1422 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhnzg\" (UniqueName: \"kubernetes.io/projected/3576b677-342f-4f55-9862-08f61c03bbed-kube-api-access-xhnzg\") pod \"kube-proxy-54dw8\" (UID: \"3576b677-342f-4f55-9862-08f61c03bbed\") " pod="kube-system/kube-proxy-54dw8"
Dec 13 14:27:35.190640 kubelet[1422]: I1213 14:27:35.190634 1422 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2d5423a1-5f32-40d7-8edd-6c1c172668ff-xtables-lock\") pod \"cilium-tvdvv\" (UID: \"2d5423a1-5f32-40d7-8edd-6c1c172668ff\") " pod="kube-system/cilium-tvdvv"
Dec 13 14:27:35.190683 kubelet[1422]: I1213 14:27:35.190649 1422 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2d5423a1-5f32-40d7-8edd-6c1c172668ff-clustermesh-secrets\") pod \"cilium-tvdvv\" (UID: \"2d5423a1-5f32-40d7-8edd-6c1c172668ff\") " pod="kube-system/cilium-tvdvv"
Dec 13 14:27:35.190683 kubelet[1422]: I1213 14:27:35.190666 1422 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2d5423a1-5f32-40d7-8edd-6c1c172668ff-hostproc\") pod \"cilium-tvdvv\" (UID: \"2d5423a1-5f32-40d7-8edd-6c1c172668ff\") " pod="kube-system/cilium-tvdvv"
Dec 13 14:27:35.190733 kubelet[1422]: I1213 14:27:35.190688 1422 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3576b677-342f-4f55-9862-08f61c03bbed-xtables-lock\") pod \"kube-proxy-54dw8\" (UID: \"3576b677-342f-4f55-9862-08f61c03bbed\") " pod="kube-system/kube-proxy-54dw8"
Dec 13 14:27:35.190733 kubelet[1422]: I1213 14:27:35.190704 1422 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2d5423a1-5f32-40d7-8edd-6c1c172668ff-cilium-run\") pod \"cilium-tvdvv\" (UID: \"2d5423a1-5f32-40d7-8edd-6c1c172668ff\") " pod="kube-system/cilium-tvdvv"
Dec 13 14:27:35.190733 kubelet[1422]: I1213 14:27:35.190717 1422 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2d5423a1-5f32-40d7-8edd-6c1c172668ff-bpf-maps\") pod \"cilium-tvdvv\" (UID: \"2d5423a1-5f32-40d7-8edd-6c1c172668ff\") " pod="kube-system/cilium-tvdvv"
Dec 13 14:27:35.476927 kubelet[1422]: E1213 14:27:35.476789 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:27:35.477970 env[1216]: time="2024-12-13T14:27:35.477719370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tvdvv,Uid:2d5423a1-5f32-40d7-8edd-6c1c172668ff,Namespace:kube-system,Attempt:0,}"
Dec 13 14:27:35.490093 kubelet[1422]: E1213 14:27:35.490050 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:27:35.490776 env[1216]: time="2024-12-13T14:27:35.490729759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-54dw8,Uid:3576b677-342f-4f55-9862-08f61c03bbed,Namespace:kube-system,Attempt:0,}"
Dec 13 14:27:36.162766 kubelet[1422]: E1213 14:27:36.162714 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:27:36.451304 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3741441095.mount: Deactivated successfully.
Dec 13 14:27:36.458900 env[1216]: time="2024-12-13T14:27:36.458844570Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:27:36.459775 env[1216]: time="2024-12-13T14:27:36.459734589Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:27:36.463168 env[1216]: time="2024-12-13T14:27:36.463114096Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:27:36.464191 env[1216]: time="2024-12-13T14:27:36.464154648Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:27:36.466010 env[1216]: time="2024-12-13T14:27:36.465966365Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:27:36.467205 env[1216]: time="2024-12-13T14:27:36.467176385Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:27:36.468476 env[1216]: time="2024-12-13T14:27:36.468438962Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:27:36.469845 env[1216]: time="2024-12-13T14:27:36.469811456Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:27:36.500446 env[1216]: time="2024-12-13T14:27:36.500356539Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:27:36.500619 env[1216]: time="2024-12-13T14:27:36.500460134Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:27:36.500619 env[1216]: time="2024-12-13T14:27:36.500510859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:27:36.500819 env[1216]: time="2024-12-13T14:27:36.500786355Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ac0c3b603a99b30c5f41f2bd73aebf0e362b161152dc3e0d554af24e47d999f0 pid=1485 runtime=io.containerd.runc.v2
Dec 13 14:27:36.501226 env[1216]: time="2024-12-13T14:27:36.501135200Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:27:36.501226 env[1216]: time="2024-12-13T14:27:36.501193489Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:27:36.501226 env[1216]: time="2024-12-13T14:27:36.501206994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:27:36.501455 env[1216]: time="2024-12-13T14:27:36.501346225Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e468131f7d6fa4f95f3440e8d2a4cebb283c87ff9829cd25c1ef04013bfe56ad pid=1481 runtime=io.containerd.runc.v2
Dec 13 14:27:36.534512 systemd[1]: Started cri-containerd-ac0c3b603a99b30c5f41f2bd73aebf0e362b161152dc3e0d554af24e47d999f0.scope.
Dec 13 14:27:36.536661 systemd[1]: Started cri-containerd-e468131f7d6fa4f95f3440e8d2a4cebb283c87ff9829cd25c1ef04013bfe56ad.scope.
Dec 13 14:27:36.595823 env[1216]: time="2024-12-13T14:27:36.595780273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tvdvv,Uid:2d5423a1-5f32-40d7-8edd-6c1c172668ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"e468131f7d6fa4f95f3440e8d2a4cebb283c87ff9829cd25c1ef04013bfe56ad\""
Dec 13 14:27:36.597286 kubelet[1422]: E1213 14:27:36.597252 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:27:36.598292 env[1216]: time="2024-12-13T14:27:36.598251427Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Dec 13 14:27:36.602765 env[1216]: time="2024-12-13T14:27:36.602707644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-54dw8,Uid:3576b677-342f-4f55-9862-08f61c03bbed,Namespace:kube-system,Attempt:0,} returns sandbox id \"ac0c3b603a99b30c5f41f2bd73aebf0e362b161152dc3e0d554af24e47d999f0\""
Dec 13 14:27:36.603300 kubelet[1422]: E1213 14:27:36.603276 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:27:37.163210 kubelet[1422]: E1213 14:27:37.163142 1422 file_linux.go:61] "Unable to
read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:27:38.163664 kubelet[1422]: E1213 14:27:38.163585 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:27:39.164168 kubelet[1422]: E1213 14:27:39.164102 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:27:40.182437 kubelet[1422]: E1213 14:27:40.182349 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:27:41.183535 kubelet[1422]: E1213 14:27:41.183469 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:27:42.184582 kubelet[1422]: E1213 14:27:42.184530 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:27:43.185188 kubelet[1422]: E1213 14:27:43.185145 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:27:43.991665 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2216209625.mount: Deactivated successfully. 
Dec 13 14:27:44.186172 kubelet[1422]: E1213 14:27:44.186110 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:27:45.187104 kubelet[1422]: E1213 14:27:45.187010 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:27:46.188112 kubelet[1422]: E1213 14:27:46.188064 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:27:47.188468 kubelet[1422]: E1213 14:27:47.188410 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:27:48.189107 kubelet[1422]: E1213 14:27:48.189026 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:27:48.818438 env[1216]: time="2024-12-13T14:27:48.818363950Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:27:48.820459 env[1216]: time="2024-12-13T14:27:48.820409386Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:27:48.822439 env[1216]: time="2024-12-13T14:27:48.822281226Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:27:48.822959 env[1216]: time="2024-12-13T14:27:48.822916137Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns 
image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 14:27:48.824164 env[1216]: time="2024-12-13T14:27:48.824139902Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Dec 13 14:27:48.825348 env[1216]: time="2024-12-13T14:27:48.825313042Z" level=info msg="CreateContainer within sandbox \"e468131f7d6fa4f95f3440e8d2a4cebb283c87ff9829cd25c1ef04013bfe56ad\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:27:48.842181 env[1216]: time="2024-12-13T14:27:48.842126834Z" level=info msg="CreateContainer within sandbox \"e468131f7d6fa4f95f3440e8d2a4cebb283c87ff9829cd25c1ef04013bfe56ad\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a4247cd706f502ec74d7a66eaf0d2b78abe4f00a816ef5c3ac6cc422ba433e42\"" Dec 13 14:27:48.842798 env[1216]: time="2024-12-13T14:27:48.842774158Z" level=info msg="StartContainer for \"a4247cd706f502ec74d7a66eaf0d2b78abe4f00a816ef5c3ac6cc422ba433e42\"" Dec 13 14:27:48.861544 systemd[1]: Started cri-containerd-a4247cd706f502ec74d7a66eaf0d2b78abe4f00a816ef5c3ac6cc422ba433e42.scope. Dec 13 14:27:48.891453 env[1216]: time="2024-12-13T14:27:48.891395520Z" level=info msg="StartContainer for \"a4247cd706f502ec74d7a66eaf0d2b78abe4f00a816ef5c3ac6cc422ba433e42\" returns successfully" Dec 13 14:27:48.898499 systemd[1]: cri-containerd-a4247cd706f502ec74d7a66eaf0d2b78abe4f00a816ef5c3ac6cc422ba433e42.scope: Deactivated successfully. 
Dec 13 14:27:49.189324 kubelet[1422]: E1213 14:27:49.189277 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:27:49.493773 env[1216]: time="2024-12-13T14:27:49.493642018Z" level=info msg="shim disconnected" id=a4247cd706f502ec74d7a66eaf0d2b78abe4f00a816ef5c3ac6cc422ba433e42 Dec 13 14:27:49.493773 env[1216]: time="2024-12-13T14:27:49.493701950Z" level=warning msg="cleaning up after shim disconnected" id=a4247cd706f502ec74d7a66eaf0d2b78abe4f00a816ef5c3ac6cc422ba433e42 namespace=k8s.io Dec 13 14:27:49.493773 env[1216]: time="2024-12-13T14:27:49.493716527Z" level=info msg="cleaning up dead shim" Dec 13 14:27:49.500530 env[1216]: time="2024-12-13T14:27:49.500477046Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:27:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1604 runtime=io.containerd.runc.v2\n" Dec 13 14:27:49.685665 kubelet[1422]: E1213 14:27:49.685631 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:27:49.687353 env[1216]: time="2024-12-13T14:27:49.687317939Z" level=info msg="CreateContainer within sandbox \"e468131f7d6fa4f95f3440e8d2a4cebb283c87ff9829cd25c1ef04013bfe56ad\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 14:27:49.704114 env[1216]: time="2024-12-13T14:27:49.704042123Z" level=info msg="CreateContainer within sandbox \"e468131f7d6fa4f95f3440e8d2a4cebb283c87ff9829cd25c1ef04013bfe56ad\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f3ea78208c83990c8cd873bc754d9707c9ad18edd9b6d1f6aaa7537d3731233a\"" Dec 13 14:27:49.704635 env[1216]: time="2024-12-13T14:27:49.704591633Z" level=info msg="StartContainer for \"f3ea78208c83990c8cd873bc754d9707c9ad18edd9b6d1f6aaa7537d3731233a\"" Dec 13 14:27:49.718544 systemd[1]: Started 
cri-containerd-f3ea78208c83990c8cd873bc754d9707c9ad18edd9b6d1f6aaa7537d3731233a.scope. Dec 13 14:27:49.744883 env[1216]: time="2024-12-13T14:27:49.744755735Z" level=info msg="StartContainer for \"f3ea78208c83990c8cd873bc754d9707c9ad18edd9b6d1f6aaa7537d3731233a\" returns successfully" Dec 13 14:27:49.753658 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 14:27:49.753851 systemd[1]: Stopped systemd-sysctl.service. Dec 13 14:27:49.754035 systemd[1]: Stopping systemd-sysctl.service... Dec 13 14:27:49.755362 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:27:49.755637 systemd[1]: cri-containerd-f3ea78208c83990c8cd873bc754d9707c9ad18edd9b6d1f6aaa7537d3731233a.scope: Deactivated successfully. Dec 13 14:27:49.761926 systemd[1]: Finished systemd-sysctl.service. Dec 13 14:27:49.780147 env[1216]: time="2024-12-13T14:27:49.780085361Z" level=info msg="shim disconnected" id=f3ea78208c83990c8cd873bc754d9707c9ad18edd9b6d1f6aaa7537d3731233a Dec 13 14:27:49.780147 env[1216]: time="2024-12-13T14:27:49.780143971Z" level=warning msg="cleaning up after shim disconnected" id=f3ea78208c83990c8cd873bc754d9707c9ad18edd9b6d1f6aaa7537d3731233a namespace=k8s.io Dec 13 14:27:49.780147 env[1216]: time="2024-12-13T14:27:49.780152196Z" level=info msg="cleaning up dead shim" Dec 13 14:27:49.787097 env[1216]: time="2024-12-13T14:27:49.787057396Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:27:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1668 runtime=io.containerd.runc.v2\n" Dec 13 14:27:49.836901 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a4247cd706f502ec74d7a66eaf0d2b78abe4f00a816ef5c3ac6cc422ba433e42-rootfs.mount: Deactivated successfully. 
Dec 13 14:27:50.189622 kubelet[1422]: E1213 14:27:50.189480 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:27:50.449835 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3325928923.mount: Deactivated successfully. Dec 13 14:27:50.689113 kubelet[1422]: E1213 14:27:50.689080 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:27:50.691071 env[1216]: time="2024-12-13T14:27:50.691012064Z" level=info msg="CreateContainer within sandbox \"e468131f7d6fa4f95f3440e8d2a4cebb283c87ff9829cd25c1ef04013bfe56ad\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 14:27:50.757045 env[1216]: time="2024-12-13T14:27:50.756897252Z" level=info msg="CreateContainer within sandbox \"e468131f7d6fa4f95f3440e8d2a4cebb283c87ff9829cd25c1ef04013bfe56ad\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0c07d106675422c0f8eec46812439b13f80cbff2ab60a9c427f584579706f727\"" Dec 13 14:27:50.757753 env[1216]: time="2024-12-13T14:27:50.757695289Z" level=info msg="StartContainer for \"0c07d106675422c0f8eec46812439b13f80cbff2ab60a9c427f584579706f727\"" Dec 13 14:27:50.773903 systemd[1]: Started cri-containerd-0c07d106675422c0f8eec46812439b13f80cbff2ab60a9c427f584579706f727.scope. Dec 13 14:27:50.800744 env[1216]: time="2024-12-13T14:27:50.800693656Z" level=info msg="StartContainer for \"0c07d106675422c0f8eec46812439b13f80cbff2ab60a9c427f584579706f727\" returns successfully" Dec 13 14:27:50.802080 systemd[1]: cri-containerd-0c07d106675422c0f8eec46812439b13f80cbff2ab60a9c427f584579706f727.scope: Deactivated successfully. 
Dec 13 14:27:51.107055 env[1216]: time="2024-12-13T14:27:51.106902477Z" level=info msg="shim disconnected" id=0c07d106675422c0f8eec46812439b13f80cbff2ab60a9c427f584579706f727 Dec 13 14:27:51.107055 env[1216]: time="2024-12-13T14:27:51.106959794Z" level=warning msg="cleaning up after shim disconnected" id=0c07d106675422c0f8eec46812439b13f80cbff2ab60a9c427f584579706f727 namespace=k8s.io Dec 13 14:27:51.107055 env[1216]: time="2024-12-13T14:27:51.106971186Z" level=info msg="cleaning up dead shim" Dec 13 14:27:51.113791 env[1216]: time="2024-12-13T14:27:51.113749147Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:27:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1722 runtime=io.containerd.runc.v2\n" Dec 13 14:27:51.190356 kubelet[1422]: E1213 14:27:51.190305 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:27:51.271417 env[1216]: time="2024-12-13T14:27:51.271362258Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:27:51.349537 env[1216]: time="2024-12-13T14:27:51.349475127Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:27:51.438643 env[1216]: time="2024-12-13T14:27:51.438472519Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:27:51.482005 env[1216]: time="2024-12-13T14:27:51.481927272Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 
14:27:51.482513 env[1216]: time="2024-12-13T14:27:51.482468427Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\"" Dec 13 14:27:51.485043 env[1216]: time="2024-12-13T14:27:51.485021695Z" level=info msg="CreateContainer within sandbox \"ac0c3b603a99b30c5f41f2bd73aebf0e362b161152dc3e0d554af24e47d999f0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 14:27:51.692213 kubelet[1422]: E1213 14:27:51.692193 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:27:51.693804 env[1216]: time="2024-12-13T14:27:51.693761477Z" level=info msg="CreateContainer within sandbox \"e468131f7d6fa4f95f3440e8d2a4cebb283c87ff9829cd25c1ef04013bfe56ad\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 14:27:51.719933 env[1216]: time="2024-12-13T14:27:51.719870531Z" level=info msg="CreateContainer within sandbox \"ac0c3b603a99b30c5f41f2bd73aebf0e362b161152dc3e0d554af24e47d999f0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"50941f28f7a3150793741d87aa5a8e20ea5332835d67210751c3e189b5e85c0c\"" Dec 13 14:27:51.720441 env[1216]: time="2024-12-13T14:27:51.720416134Z" level=info msg="StartContainer for \"50941f28f7a3150793741d87aa5a8e20ea5332835d67210751c3e189b5e85c0c\"" Dec 13 14:27:51.730712 env[1216]: time="2024-12-13T14:27:51.730667258Z" level=info msg="CreateContainer within sandbox \"e468131f7d6fa4f95f3440e8d2a4cebb283c87ff9829cd25c1ef04013bfe56ad\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"838a41d57b0fe973c8cf1f7c2196a050c04bf4cff10dbb268118eec0bbfcd601\"" Dec 13 14:27:51.731345 env[1216]: time="2024-12-13T14:27:51.731324431Z" level=info msg="StartContainer for \"838a41d57b0fe973c8cf1f7c2196a050c04bf4cff10dbb268118eec0bbfcd601\"" 
Dec 13 14:27:51.738572 systemd[1]: Started cri-containerd-50941f28f7a3150793741d87aa5a8e20ea5332835d67210751c3e189b5e85c0c.scope. Dec 13 14:27:51.744077 systemd[1]: Started cri-containerd-838a41d57b0fe973c8cf1f7c2196a050c04bf4cff10dbb268118eec0bbfcd601.scope. Dec 13 14:27:51.767875 systemd[1]: cri-containerd-838a41d57b0fe973c8cf1f7c2196a050c04bf4cff10dbb268118eec0bbfcd601.scope: Deactivated successfully. Dec 13 14:27:51.773569 env[1216]: time="2024-12-13T14:27:51.773518961Z" level=info msg="StartContainer for \"50941f28f7a3150793741d87aa5a8e20ea5332835d67210751c3e189b5e85c0c\" returns successfully" Dec 13 14:27:51.775259 env[1216]: time="2024-12-13T14:27:51.775213318Z" level=info msg="StartContainer for \"838a41d57b0fe973c8cf1f7c2196a050c04bf4cff10dbb268118eec0bbfcd601\" returns successfully" Dec 13 14:27:51.861633 env[1216]: time="2024-12-13T14:27:51.861572673Z" level=info msg="shim disconnected" id=838a41d57b0fe973c8cf1f7c2196a050c04bf4cff10dbb268118eec0bbfcd601 Dec 13 14:27:51.861633 env[1216]: time="2024-12-13T14:27:51.861628888Z" level=warning msg="cleaning up after shim disconnected" id=838a41d57b0fe973c8cf1f7c2196a050c04bf4cff10dbb268118eec0bbfcd601 namespace=k8s.io Dec 13 14:27:51.861633 env[1216]: time="2024-12-13T14:27:51.861644558Z" level=info msg="cleaning up dead shim" Dec 13 14:27:51.870750 env[1216]: time="2024-12-13T14:27:51.870700391Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:27:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1829 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T14:27:51Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" Dec 13 14:27:52.190958 kubelet[1422]: E1213 14:27:52.190886 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:27:52.694821 kubelet[1422]: E1213 14:27:52.694773 1422 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:27:52.700210 kubelet[1422]: E1213 14:27:52.700186 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:27:52.702157 env[1216]: time="2024-12-13T14:27:52.702100012Z" level=info msg="CreateContainer within sandbox \"e468131f7d6fa4f95f3440e8d2a4cebb283c87ff9829cd25c1ef04013bfe56ad\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 14:27:52.713621 kubelet[1422]: I1213 14:27:52.713565 1422 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-54dw8" podStartSLOduration=3.83388469 podStartE2EDuration="18.713550004s" podCreationTimestamp="2024-12-13 14:27:34 +0000 UTC" firstStartedPulling="2024-12-13 14:27:36.603829007 +0000 UTC m=+4.096911643" lastFinishedPulling="2024-12-13 14:27:51.483494321 +0000 UTC m=+18.976576957" observedRunningTime="2024-12-13 14:27:52.713388401 +0000 UTC m=+20.206471047" watchObservedRunningTime="2024-12-13 14:27:52.713550004 +0000 UTC m=+20.206632640" Dec 13 14:27:52.729248 env[1216]: time="2024-12-13T14:27:52.729195425Z" level=info msg="CreateContainer within sandbox \"e468131f7d6fa4f95f3440e8d2a4cebb283c87ff9829cd25c1ef04013bfe56ad\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"cfdc810a4e6f70c7ec174e935fb479ab6035f8abdfdac8b98e27d28ec409d1c4\"" Dec 13 14:27:52.729807 env[1216]: time="2024-12-13T14:27:52.729772738Z" level=info msg="StartContainer for \"cfdc810a4e6f70c7ec174e935fb479ab6035f8abdfdac8b98e27d28ec409d1c4\"" Dec 13 14:27:52.781821 systemd[1]: Started cri-containerd-cfdc810a4e6f70c7ec174e935fb479ab6035f8abdfdac8b98e27d28ec409d1c4.scope. 
Dec 13 14:27:52.822035 env[1216]: time="2024-12-13T14:27:52.821975781Z" level=info msg="StartContainer for \"cfdc810a4e6f70c7ec174e935fb479ab6035f8abdfdac8b98e27d28ec409d1c4\" returns successfully" Dec 13 14:27:52.836929 systemd[1]: run-containerd-runc-k8s.io-cfdc810a4e6f70c7ec174e935fb479ab6035f8abdfdac8b98e27d28ec409d1c4-runc.MvTYEQ.mount: Deactivated successfully. Dec 13 14:27:52.988473 kubelet[1422]: I1213 14:27:52.988350 1422 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 14:27:53.160699 kubelet[1422]: E1213 14:27:53.160652 1422 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:27:53.192063 kubelet[1422]: E1213 14:27:53.192023 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:27:53.228401 kernel: Initializing XFRM netlink socket Dec 13 14:27:53.703472 kubelet[1422]: E1213 14:27:53.703440 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:27:53.703798 kubelet[1422]: E1213 14:27:53.703777 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:27:54.192724 kubelet[1422]: E1213 14:27:54.192685 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:27:54.704436 kubelet[1422]: E1213 14:27:54.704407 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:27:55.005237 systemd-networkd[1043]: cilium_host: Link UP Dec 13 14:27:55.005414 systemd-networkd[1043]: cilium_net: Link UP Dec 13 14:27:55.008991 kernel: IPv6: 
ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Dec 13 14:27:55.009128 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Dec 13 14:27:55.008549 systemd-networkd[1043]: cilium_net: Gained carrier Dec 13 14:27:55.008833 systemd-networkd[1043]: cilium_host: Gained carrier Dec 13 14:27:55.081441 systemd-networkd[1043]: cilium_vxlan: Link UP Dec 13 14:27:55.081449 systemd-networkd[1043]: cilium_vxlan: Gained carrier Dec 13 14:27:55.193225 kubelet[1422]: E1213 14:27:55.193164 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:27:55.325407 kernel: NET: Registered PF_ALG protocol family Dec 13 14:27:55.455572 systemd-networkd[1043]: cilium_net: Gained IPv6LL Dec 13 14:27:55.631563 systemd-networkd[1043]: cilium_host: Gained IPv6LL Dec 13 14:27:55.707131 kubelet[1422]: E1213 14:27:55.706988 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:27:55.953140 systemd-networkd[1043]: lxc_health: Link UP Dec 13 14:27:55.966453 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 14:27:55.966310 systemd-networkd[1043]: lxc_health: Gained carrier Dec 13 14:27:55.980336 kubelet[1422]: I1213 14:27:55.980258 1422 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-tvdvv" podStartSLOduration=9.754203975 podStartE2EDuration="21.98023131s" podCreationTimestamp="2024-12-13 14:27:34 +0000 UTC" firstStartedPulling="2024-12-13 14:27:36.597849163 +0000 UTC m=+4.090931799" lastFinishedPulling="2024-12-13 14:27:48.823876498 +0000 UTC m=+16.316959134" observedRunningTime="2024-12-13 14:27:53.719049593 +0000 UTC m=+21.212132249" watchObservedRunningTime="2024-12-13 14:27:55.98023131 +0000 UTC m=+23.473313946" Dec 13 14:27:55.980713 kubelet[1422]: I1213 14:27:55.980665 1422 
topology_manager.go:215] "Topology Admit Handler" podUID="4c9fd3e6-9a40-4b5e-b2bc-9c0889c9ca7d" podNamespace="default" podName="nginx-deployment-85f456d6dd-79rz2" Dec 13 14:27:55.987049 systemd[1]: Created slice kubepods-besteffort-pod4c9fd3e6_9a40_4b5e_b2bc_9c0889c9ca7d.slice. Dec 13 14:27:56.032583 kubelet[1422]: I1213 14:27:56.032550 1422 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjnkn\" (UniqueName: \"kubernetes.io/projected/4c9fd3e6-9a40-4b5e-b2bc-9c0889c9ca7d-kube-api-access-gjnkn\") pod \"nginx-deployment-85f456d6dd-79rz2\" (UID: \"4c9fd3e6-9a40-4b5e-b2bc-9c0889c9ca7d\") " pod="default/nginx-deployment-85f456d6dd-79rz2" Dec 13 14:27:56.193615 kubelet[1422]: E1213 14:27:56.193535 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:27:56.293168 env[1216]: time="2024-12-13T14:27:56.293024017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-79rz2,Uid:4c9fd3e6-9a40-4b5e-b2bc-9c0889c9ca7d,Namespace:default,Attempt:0,}" Dec 13 14:27:56.336484 systemd-networkd[1043]: cilium_vxlan: Gained IPv6LL Dec 13 14:27:56.351055 systemd-networkd[1043]: lxc8e7f6147bcb0: Link UP Dec 13 14:27:56.360392 kernel: eth0: renamed from tmpc41bf Dec 13 14:27:56.366400 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:27:56.366466 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc8e7f6147bcb0: link becomes ready Dec 13 14:27:56.367652 systemd-networkd[1043]: lxc8e7f6147bcb0: Gained carrier Dec 13 14:27:57.194412 kubelet[1422]: E1213 14:27:57.194347 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:27:57.479141 kubelet[1422]: E1213 14:27:57.479030 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Dec 13 14:27:57.710872 kubelet[1422]: E1213 14:27:57.710829 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:27:58.010567 systemd-networkd[1043]: lxc_health: Gained IPv6LL Dec 13 14:27:58.127515 systemd-networkd[1043]: lxc8e7f6147bcb0: Gained IPv6LL Dec 13 14:27:58.195190 kubelet[1422]: E1213 14:27:58.195128 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:27:59.195697 kubelet[1422]: E1213 14:27:59.195618 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:00.116084 env[1216]: time="2024-12-13T14:28:00.116007680Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:28:00.116084 env[1216]: time="2024-12-13T14:28:00.116043929Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:28:00.116084 env[1216]: time="2024-12-13T14:28:00.116053768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:28:00.116499 env[1216]: time="2024-12-13T14:28:00.116170320Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c41bf3dddd9264ba584b547d0d26025909464e7cd3ede70b0d2389f4a90e1d77 pid=2482 runtime=io.containerd.runc.v2 Dec 13 14:28:00.134986 systemd[1]: Started cri-containerd-c41bf3dddd9264ba584b547d0d26025909464e7cd3ede70b0d2389f4a90e1d77.scope. 
Dec 13 14:28:00.145061 systemd-resolved[1151]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 14:28:00.172635 env[1216]: time="2024-12-13T14:28:00.172572207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-79rz2,Uid:4c9fd3e6-9a40-4b5e-b2bc-9c0889c9ca7d,Namespace:default,Attempt:0,} returns sandbox id \"c41bf3dddd9264ba584b547d0d26025909464e7cd3ede70b0d2389f4a90e1d77\"" Dec 13 14:28:00.174009 env[1216]: time="2024-12-13T14:28:00.173984728Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 14:28:00.196185 kubelet[1422]: E1213 14:28:00.196139 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:01.197197 kubelet[1422]: E1213 14:28:01.197135 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:02.197859 kubelet[1422]: E1213 14:28:02.197799 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:03.198972 kubelet[1422]: E1213 14:28:03.198904 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:03.907904 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3418441364.mount: Deactivated successfully. 
Dec 13 14:28:04.199273 kubelet[1422]: E1213 14:28:04.199198 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:28:05.199554 kubelet[1422]: E1213 14:28:05.199489 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:28:05.566171 env[1216]: time="2024-12-13T14:28:05.566025454Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:28:05.568026 env[1216]: time="2024-12-13T14:28:05.567965414Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:28:05.569736 env[1216]: time="2024-12-13T14:28:05.569713480Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:28:05.573458 env[1216]: time="2024-12-13T14:28:05.573400854Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\""
Dec 13 14:28:05.574403 env[1216]: time="2024-12-13T14:28:05.574346999Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:28:05.575977 env[1216]: time="2024-12-13T14:28:05.575935492Z" level=info msg="CreateContainer within sandbox \"c41bf3dddd9264ba584b547d0d26025909464e7cd3ede70b0d2389f4a90e1d77\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Dec 13 14:28:05.589554 env[1216]: time="2024-12-13T14:28:05.589514720Z" level=info msg="CreateContainer within sandbox \"c41bf3dddd9264ba584b547d0d26025909464e7cd3ede70b0d2389f4a90e1d77\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"2c400f360aee43a2dff1b6995400b2891bf3252f0619cc8b9ae116c0f4d0f96a\""
Dec 13 14:28:05.590050 env[1216]: time="2024-12-13T14:28:05.590008457Z" level=info msg="StartContainer for \"2c400f360aee43a2dff1b6995400b2891bf3252f0619cc8b9ae116c0f4d0f96a\""
Dec 13 14:28:05.606407 systemd[1]: Started cri-containerd-2c400f360aee43a2dff1b6995400b2891bf3252f0619cc8b9ae116c0f4d0f96a.scope.
Dec 13 14:28:05.626446 env[1216]: time="2024-12-13T14:28:05.626398032Z" level=info msg="StartContainer for \"2c400f360aee43a2dff1b6995400b2891bf3252f0619cc8b9ae116c0f4d0f96a\" returns successfully"
Dec 13 14:28:05.651973 update_engine[1211]: I1213 14:28:05.651913 1211 update_attempter.cc:509] Updating boot flags...
Dec 13 14:28:05.735772 kubelet[1422]: I1213 14:28:05.735468 1422 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-79rz2" podStartSLOduration=5.334600019 podStartE2EDuration="10.735451811s" podCreationTimestamp="2024-12-13 14:27:55 +0000 UTC" firstStartedPulling="2024-12-13 14:28:00.173744871 +0000 UTC m=+27.666827507" lastFinishedPulling="2024-12-13 14:28:05.574596663 +0000 UTC m=+33.067679299" observedRunningTime="2024-12-13 14:28:05.734646693 +0000 UTC m=+33.227729319" watchObservedRunningTime="2024-12-13 14:28:05.735451811 +0000 UTC m=+33.228534447"
Dec 13 14:28:06.199848 kubelet[1422]: E1213 14:28:06.199813 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:28:07.200615 kubelet[1422]: E1213 14:28:07.200564 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:28:08.201121 kubelet[1422]: E1213 14:28:08.201061 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:28:08.293536 kubelet[1422]: I1213 14:28:08.293499 1422 topology_manager.go:215] "Topology Admit Handler" podUID="b8decea6-4cce-4063-bdc1-4315ac300471" podNamespace="default" podName="nfs-server-provisioner-0"
Dec 13 14:28:08.298004 systemd[1]: Created slice kubepods-besteffort-podb8decea6_4cce_4063_bdc1_4315ac300471.slice.
Dec 13 14:28:08.399317 kubelet[1422]: I1213 14:28:08.399247 1422 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zj5j\" (UniqueName: \"kubernetes.io/projected/b8decea6-4cce-4063-bdc1-4315ac300471-kube-api-access-8zj5j\") pod \"nfs-server-provisioner-0\" (UID: \"b8decea6-4cce-4063-bdc1-4315ac300471\") " pod="default/nfs-server-provisioner-0"
Dec 13 14:28:08.399317 kubelet[1422]: I1213 14:28:08.399306 1422 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/b8decea6-4cce-4063-bdc1-4315ac300471-data\") pod \"nfs-server-provisioner-0\" (UID: \"b8decea6-4cce-4063-bdc1-4315ac300471\") " pod="default/nfs-server-provisioner-0"
Dec 13 14:28:08.601134 env[1216]: time="2024-12-13T14:28:08.601018756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:b8decea6-4cce-4063-bdc1-4315ac300471,Namespace:default,Attempt:0,}"
Dec 13 14:28:08.632400 systemd-networkd[1043]: lxcab46ef5c31dd: Link UP
Dec 13 14:28:08.640398 kernel: eth0: renamed from tmp15bbc
Dec 13 14:28:08.648871 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 14:28:08.648919 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcab46ef5c31dd: link becomes ready
Dec 13 14:28:08.648935 systemd-networkd[1043]: lxcab46ef5c31dd: Gained carrier
Dec 13 14:28:08.816986 env[1216]: time="2024-12-13T14:28:08.816904206Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:28:08.816986 env[1216]: time="2024-12-13T14:28:08.816938660Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:28:08.816986 env[1216]: time="2024-12-13T14:28:08.816948789Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:28:08.817238 env[1216]: time="2024-12-13T14:28:08.817057996Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/15bbc39b44a78b0ff20ef3b451408306a5eff9531ee548ac83c8cc15375a72d2 pid=2620 runtime=io.containerd.runc.v2
Dec 13 14:28:08.831212 systemd[1]: Started cri-containerd-15bbc39b44a78b0ff20ef3b451408306a5eff9531ee548ac83c8cc15375a72d2.scope.
Dec 13 14:28:08.840345 systemd-resolved[1151]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 13 14:28:08.858181 env[1216]: time="2024-12-13T14:28:08.858058814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:b8decea6-4cce-4063-bdc1-4315ac300471,Namespace:default,Attempt:0,} returns sandbox id \"15bbc39b44a78b0ff20ef3b451408306a5eff9531ee548ac83c8cc15375a72d2\""
Dec 13 14:28:08.859657 env[1216]: time="2024-12-13T14:28:08.859628866Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Dec 13 14:28:09.201847 kubelet[1422]: E1213 14:28:09.201776 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:28:10.095538 systemd-networkd[1043]: lxcab46ef5c31dd: Gained IPv6LL
Dec 13 14:28:10.202837 kubelet[1422]: E1213 14:28:10.202776 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:28:11.203073 kubelet[1422]: E1213 14:28:11.203004 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:28:11.744953 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1645120255.mount: Deactivated successfully.
Dec 13 14:28:12.203126 kubelet[1422]: E1213 14:28:12.203094 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:28:13.160933 kubelet[1422]: E1213 14:28:13.160874 1422 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:28:13.203615 kubelet[1422]: E1213 14:28:13.203561 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:28:13.983949 env[1216]: time="2024-12-13T14:28:13.983887383Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:28:13.985803 env[1216]: time="2024-12-13T14:28:13.985738929Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:28:13.987743 env[1216]: time="2024-12-13T14:28:13.987703058Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:28:13.989395 env[1216]: time="2024-12-13T14:28:13.989341461Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:28:13.990033 env[1216]: time="2024-12-13T14:28:13.989999784Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\""
Dec 13 14:28:13.993083 env[1216]: time="2024-12-13T14:28:13.993035576Z" level=info msg="CreateContainer within sandbox \"15bbc39b44a78b0ff20ef3b451408306a5eff9531ee548ac83c8cc15375a72d2\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Dec 13 14:28:14.005690 env[1216]: time="2024-12-13T14:28:14.005642247Z" level=info msg="CreateContainer within sandbox \"15bbc39b44a78b0ff20ef3b451408306a5eff9531ee548ac83c8cc15375a72d2\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"ff384dd9a1c671a5fa115e2bc6e0a1d02dc2b7112c2fb8205fde838641ab2786\""
Dec 13 14:28:14.006150 env[1216]: time="2024-12-13T14:28:14.006109950Z" level=info msg="StartContainer for \"ff384dd9a1c671a5fa115e2bc6e0a1d02dc2b7112c2fb8205fde838641ab2786\""
Dec 13 14:28:14.024275 systemd[1]: Started cri-containerd-ff384dd9a1c671a5fa115e2bc6e0a1d02dc2b7112c2fb8205fde838641ab2786.scope.
Dec 13 14:28:14.049287 env[1216]: time="2024-12-13T14:28:14.049239461Z" level=info msg="StartContainer for \"ff384dd9a1c671a5fa115e2bc6e0a1d02dc2b7112c2fb8205fde838641ab2786\" returns successfully"
Dec 13 14:28:14.204816 kubelet[1422]: E1213 14:28:14.204753 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:28:14.751159 kubelet[1422]: I1213 14:28:14.751109 1422 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.6191387879999999 podStartE2EDuration="6.751093738s" podCreationTimestamp="2024-12-13 14:28:08 +0000 UTC" firstStartedPulling="2024-12-13 14:28:08.859346462 +0000 UTC m=+36.352429088" lastFinishedPulling="2024-12-13 14:28:13.991301402 +0000 UTC m=+41.484384038" observedRunningTime="2024-12-13 14:28:14.751063491 +0000 UTC m=+42.244146127" watchObservedRunningTime="2024-12-13 14:28:14.751093738 +0000 UTC m=+42.244176364"
Dec 13 14:28:15.000626 systemd[1]: run-containerd-runc-k8s.io-ff384dd9a1c671a5fa115e2bc6e0a1d02dc2b7112c2fb8205fde838641ab2786-runc.3rMiXi.mount: Deactivated successfully.
Dec 13 14:28:15.205600 kubelet[1422]: E1213 14:28:15.205548 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:28:16.205736 kubelet[1422]: E1213 14:28:16.205671 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:28:17.206390 kubelet[1422]: E1213 14:28:17.206293 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:28:18.207336 kubelet[1422]: E1213 14:28:18.207254 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:28:19.208177 kubelet[1422]: E1213 14:28:19.208102 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:28:20.208958 kubelet[1422]: E1213 14:28:20.208903 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:28:21.209546 kubelet[1422]: E1213 14:28:21.209472 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:28:22.209653 kubelet[1422]: E1213 14:28:22.209587 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:28:23.210578 kubelet[1422]: E1213 14:28:23.210513 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:28:23.530932 kubelet[1422]: I1213 14:28:23.530788 1422 topology_manager.go:215] "Topology Admit Handler" podUID="c4479be5-3891-4194-a320-a79cd39d8ce9" podNamespace="default" podName="test-pod-1"
Dec 13 14:28:23.535730 systemd[1]: Created slice kubepods-besteffort-podc4479be5_3891_4194_a320_a79cd39d8ce9.slice.
Dec 13 14:28:23.685596 kubelet[1422]: I1213 14:28:23.685550 1422 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5ab23a2b-198d-4709-8946-381ee5dcac68\" (UniqueName: \"kubernetes.io/nfs/c4479be5-3891-4194-a320-a79cd39d8ce9-pvc-5ab23a2b-198d-4709-8946-381ee5dcac68\") pod \"test-pod-1\" (UID: \"c4479be5-3891-4194-a320-a79cd39d8ce9\") " pod="default/test-pod-1"
Dec 13 14:28:23.685596 kubelet[1422]: I1213 14:28:23.685596 1422 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhg4t\" (UniqueName: \"kubernetes.io/projected/c4479be5-3891-4194-a320-a79cd39d8ce9-kube-api-access-vhg4t\") pod \"test-pod-1\" (UID: \"c4479be5-3891-4194-a320-a79cd39d8ce9\") " pod="default/test-pod-1"
Dec 13 14:28:23.806402 kernel: FS-Cache: Loaded
Dec 13 14:28:23.845072 kernel: RPC: Registered named UNIX socket transport module.
Dec 13 14:28:23.845173 kernel: RPC: Registered udp transport module.
Dec 13 14:28:23.845194 kernel: RPC: Registered tcp transport module.
Dec 13 14:28:23.845832 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec 13 14:28:23.901400 kernel: FS-Cache: Netfs 'nfs' registered for caching
Dec 13 14:28:24.081869 kernel: NFS: Registering the id_resolver key type
Dec 13 14:28:24.082030 kernel: Key type id_resolver registered
Dec 13 14:28:24.082050 kernel: Key type id_legacy registered
Dec 13 14:28:24.110007 nfsidmap[2738]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Dec 13 14:28:24.113012 nfsidmap[2741]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Dec 13 14:28:24.138590 env[1216]: time="2024-12-13T14:28:24.138522052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:c4479be5-3891-4194-a320-a79cd39d8ce9,Namespace:default,Attempt:0,}"
Dec 13 14:28:24.165755 systemd-networkd[1043]: lxcb887e8f90984: Link UP
Dec 13 14:28:24.177413 kernel: eth0: renamed from tmp434b4
Dec 13 14:28:24.183908 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 14:28:24.184048 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcb887e8f90984: link becomes ready
Dec 13 14:28:24.184009 systemd-networkd[1043]: lxcb887e8f90984: Gained carrier
Dec 13 14:28:24.211656 kubelet[1422]: E1213 14:28:24.211614 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:28:24.336575 env[1216]: time="2024-12-13T14:28:24.336489420Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:28:24.336575 env[1216]: time="2024-12-13T14:28:24.336543231Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:28:24.336575 env[1216]: time="2024-12-13T14:28:24.336557337Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:28:24.336833 env[1216]: time="2024-12-13T14:28:24.336781830Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/434b471d97fdc03c54db8004299a53805986ba91a92015dbf53540ba0c57fe5b pid=2775 runtime=io.containerd.runc.v2
Dec 13 14:28:24.347855 systemd[1]: Started cri-containerd-434b471d97fdc03c54db8004299a53805986ba91a92015dbf53540ba0c57fe5b.scope.
Dec 13 14:28:24.363165 systemd-resolved[1151]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 13 14:28:24.388343 env[1216]: time="2024-12-13T14:28:24.388291664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:c4479be5-3891-4194-a320-a79cd39d8ce9,Namespace:default,Attempt:0,} returns sandbox id \"434b471d97fdc03c54db8004299a53805986ba91a92015dbf53540ba0c57fe5b\""
Dec 13 14:28:24.390334 env[1216]: time="2024-12-13T14:28:24.390291567Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Dec 13 14:28:25.034109 env[1216]: time="2024-12-13T14:28:25.034049039Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:28:25.036648 env[1216]: time="2024-12-13T14:28:25.036590892Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:28:25.039241 env[1216]: time="2024-12-13T14:28:25.039185973Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:28:25.041637 env[1216]: time="2024-12-13T14:28:25.041591238Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:28:25.042231 env[1216]: time="2024-12-13T14:28:25.042197709Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\""
Dec 13 14:28:25.044510 env[1216]: time="2024-12-13T14:28:25.044478540Z" level=info msg="CreateContainer within sandbox \"434b471d97fdc03c54db8004299a53805986ba91a92015dbf53540ba0c57fe5b\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Dec 13 14:28:25.058384 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount407243444.mount: Deactivated successfully.
Dec 13 14:28:25.060701 env[1216]: time="2024-12-13T14:28:25.060655754Z" level=info msg="CreateContainer within sandbox \"434b471d97fdc03c54db8004299a53805986ba91a92015dbf53540ba0c57fe5b\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"8a40526dcf19e43ae99c244ff657b7f554b471a08e6e4dbf99946c00622866b9\""
Dec 13 14:28:25.061236 env[1216]: time="2024-12-13T14:28:25.061203133Z" level=info msg="StartContainer for \"8a40526dcf19e43ae99c244ff657b7f554b471a08e6e4dbf99946c00622866b9\""
Dec 13 14:28:25.078649 systemd[1]: Started cri-containerd-8a40526dcf19e43ae99c244ff657b7f554b471a08e6e4dbf99946c00622866b9.scope.
Dec 13 14:28:25.212163 kubelet[1422]: E1213 14:28:25.212107 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:28:25.263578 systemd-networkd[1043]: lxcb887e8f90984: Gained IPv6LL
Dec 13 14:28:25.369748 env[1216]: time="2024-12-13T14:28:25.368686533Z" level=info msg="StartContainer for \"8a40526dcf19e43ae99c244ff657b7f554b471a08e6e4dbf99946c00622866b9\" returns successfully"
Dec 13 14:28:25.776212 kubelet[1422]: I1213 14:28:25.776146 1422 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=17.122492583 podStartE2EDuration="17.776126816s" podCreationTimestamp="2024-12-13 14:28:08 +0000 UTC" firstStartedPulling="2024-12-13 14:28:24.389665519 +0000 UTC m=+51.882748155" lastFinishedPulling="2024-12-13 14:28:25.043299742 +0000 UTC m=+52.536382388" observedRunningTime="2024-12-13 14:28:25.775830298 +0000 UTC m=+53.268912934" watchObservedRunningTime="2024-12-13 14:28:25.776126816 +0000 UTC m=+53.269209452"
Dec 13 14:28:25.796612 systemd[1]: run-containerd-runc-k8s.io-8a40526dcf19e43ae99c244ff657b7f554b471a08e6e4dbf99946c00622866b9-runc.qzgR14.mount: Deactivated successfully.
Dec 13 14:28:26.212625 kubelet[1422]: E1213 14:28:26.212552 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:28:27.213169 kubelet[1422]: E1213 14:28:27.213109 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:28:28.214231 kubelet[1422]: E1213 14:28:28.214163 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:28:29.214602 kubelet[1422]: E1213 14:28:29.214537 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:28:30.214977 kubelet[1422]: E1213 14:28:30.214884 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:28:31.105575 env[1216]: time="2024-12-13T14:28:31.105478631Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 14:28:31.114795 env[1216]: time="2024-12-13T14:28:31.114732583Z" level=info msg="StopContainer for \"cfdc810a4e6f70c7ec174e935fb479ab6035f8abdfdac8b98e27d28ec409d1c4\" with timeout 2 (s)"
Dec 13 14:28:31.118669 env[1216]: time="2024-12-13T14:28:31.118602686Z" level=info msg="Stop container \"cfdc810a4e6f70c7ec174e935fb479ab6035f8abdfdac8b98e27d28ec409d1c4\" with signal terminated"
Dec 13 14:28:31.125408 systemd-networkd[1043]: lxc_health: Link DOWN
Dec 13 14:28:31.125419 systemd-networkd[1043]: lxc_health: Lost carrier
Dec 13 14:28:31.167981 systemd[1]: cri-containerd-cfdc810a4e6f70c7ec174e935fb479ab6035f8abdfdac8b98e27d28ec409d1c4.scope: Deactivated successfully.
Dec 13 14:28:31.168440 systemd[1]: cri-containerd-cfdc810a4e6f70c7ec174e935fb479ab6035f8abdfdac8b98e27d28ec409d1c4.scope: Consumed 7.173s CPU time.
Dec 13 14:28:31.190105 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cfdc810a4e6f70c7ec174e935fb479ab6035f8abdfdac8b98e27d28ec409d1c4-rootfs.mount: Deactivated successfully.
Dec 13 14:28:31.206078 env[1216]: time="2024-12-13T14:28:31.206005579Z" level=info msg="shim disconnected" id=cfdc810a4e6f70c7ec174e935fb479ab6035f8abdfdac8b98e27d28ec409d1c4
Dec 13 14:28:31.206078 env[1216]: time="2024-12-13T14:28:31.206075460Z" level=warning msg="cleaning up after shim disconnected" id=cfdc810a4e6f70c7ec174e935fb479ab6035f8abdfdac8b98e27d28ec409d1c4 namespace=k8s.io
Dec 13 14:28:31.206078 env[1216]: time="2024-12-13T14:28:31.206089076Z" level=info msg="cleaning up dead shim"
Dec 13 14:28:31.215956 kubelet[1422]: E1213 14:28:31.215890 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:28:31.216418 env[1216]: time="2024-12-13T14:28:31.216003669Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:28:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2904 runtime=io.containerd.runc.v2\n"
Dec 13 14:28:31.221906 env[1216]: time="2024-12-13T14:28:31.221815802Z" level=info msg="StopContainer for \"cfdc810a4e6f70c7ec174e935fb479ab6035f8abdfdac8b98e27d28ec409d1c4\" returns successfully"
Dec 13 14:28:31.222827 env[1216]: time="2024-12-13T14:28:31.222761059Z" level=info msg="StopPodSandbox for \"e468131f7d6fa4f95f3440e8d2a4cebb283c87ff9829cd25c1ef04013bfe56ad\""
Dec 13 14:28:31.223039 env[1216]: time="2024-12-13T14:28:31.222859092Z" level=info msg="Container to stop \"0c07d106675422c0f8eec46812439b13f80cbff2ab60a9c427f584579706f727\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:28:31.223039 env[1216]: time="2024-12-13T14:28:31.222883438Z" level=info msg="Container to stop \"a4247cd706f502ec74d7a66eaf0d2b78abe4f00a816ef5c3ac6cc422ba433e42\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:28:31.223039 env[1216]: time="2024-12-13T14:28:31.222896854Z" level=info msg="Container to stop \"f3ea78208c83990c8cd873bc754d9707c9ad18edd9b6d1f6aaa7537d3731233a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:28:31.223039 env[1216]: time="2024-12-13T14:28:31.222909067Z" level=info msg="Container to stop \"838a41d57b0fe973c8cf1f7c2196a050c04bf4cff10dbb268118eec0bbfcd601\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:28:31.223039 env[1216]: time="2024-12-13T14:28:31.222922853Z" level=info msg="Container to stop \"cfdc810a4e6f70c7ec174e935fb479ab6035f8abdfdac8b98e27d28ec409d1c4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:28:31.224910 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e468131f7d6fa4f95f3440e8d2a4cebb283c87ff9829cd25c1ef04013bfe56ad-shm.mount: Deactivated successfully.
Dec 13 14:28:31.230796 systemd[1]: cri-containerd-e468131f7d6fa4f95f3440e8d2a4cebb283c87ff9829cd25c1ef04013bfe56ad.scope: Deactivated successfully.
Dec 13 14:28:31.251125 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e468131f7d6fa4f95f3440e8d2a4cebb283c87ff9829cd25c1ef04013bfe56ad-rootfs.mount: Deactivated successfully.
Dec 13 14:28:31.257858 env[1216]: time="2024-12-13T14:28:31.257782717Z" level=info msg="shim disconnected" id=e468131f7d6fa4f95f3440e8d2a4cebb283c87ff9829cd25c1ef04013bfe56ad
Dec 13 14:28:31.257858 env[1216]: time="2024-12-13T14:28:31.257850444Z" level=warning msg="cleaning up after shim disconnected" id=e468131f7d6fa4f95f3440e8d2a4cebb283c87ff9829cd25c1ef04013bfe56ad namespace=k8s.io
Dec 13 14:28:31.257858 env[1216]: time="2024-12-13T14:28:31.257862016Z" level=info msg="cleaning up dead shim"
Dec 13 14:28:31.266546 env[1216]: time="2024-12-13T14:28:31.266484571Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:28:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2934 runtime=io.containerd.runc.v2\n"
Dec 13 14:28:31.266945 env[1216]: time="2024-12-13T14:28:31.266896024Z" level=info msg="TearDown network for sandbox \"e468131f7d6fa4f95f3440e8d2a4cebb283c87ff9829cd25c1ef04013bfe56ad\" successfully"
Dec 13 14:28:31.266981 env[1216]: time="2024-12-13T14:28:31.266938946Z" level=info msg="StopPodSandbox for \"e468131f7d6fa4f95f3440e8d2a4cebb283c87ff9829cd25c1ef04013bfe56ad\" returns successfully"
Dec 13 14:28:31.432957 kubelet[1422]: I1213 14:28:31.432197 1422 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcb95\" (UniqueName: \"kubernetes.io/projected/2d5423a1-5f32-40d7-8edd-6c1c172668ff-kube-api-access-mcb95\") pod \"2d5423a1-5f32-40d7-8edd-6c1c172668ff\" (UID: \"2d5423a1-5f32-40d7-8edd-6c1c172668ff\") "
Dec 13 14:28:31.432957 kubelet[1422]: I1213 14:28:31.432252 1422 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2d5423a1-5f32-40d7-8edd-6c1c172668ff-cni-path\") pod \"2d5423a1-5f32-40d7-8edd-6c1c172668ff\" (UID: \"2d5423a1-5f32-40d7-8edd-6c1c172668ff\") "
Dec 13 14:28:31.432957 kubelet[1422]: I1213 14:28:31.432271 1422 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2d5423a1-5f32-40d7-8edd-6c1c172668ff-lib-modules\") pod \"2d5423a1-5f32-40d7-8edd-6c1c172668ff\" (UID: \"2d5423a1-5f32-40d7-8edd-6c1c172668ff\") "
Dec 13 14:28:31.432957 kubelet[1422]: I1213 14:28:31.432292 1422 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2d5423a1-5f32-40d7-8edd-6c1c172668ff-cilium-config-path\") pod \"2d5423a1-5f32-40d7-8edd-6c1c172668ff\" (UID: \"2d5423a1-5f32-40d7-8edd-6c1c172668ff\") "
Dec 13 14:28:31.432957 kubelet[1422]: I1213 14:28:31.432351 1422 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2d5423a1-5f32-40d7-8edd-6c1c172668ff-cilium-run\") pod \"2d5423a1-5f32-40d7-8edd-6c1c172668ff\" (UID: \"2d5423a1-5f32-40d7-8edd-6c1c172668ff\") "
Dec 13 14:28:31.432957 kubelet[1422]: I1213 14:28:31.432405 1422 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2d5423a1-5f32-40d7-8edd-6c1c172668ff-host-proc-sys-net\") pod \"2d5423a1-5f32-40d7-8edd-6c1c172668ff\" (UID: \"2d5423a1-5f32-40d7-8edd-6c1c172668ff\") "
Dec 13 14:28:31.433265 kubelet[1422]: I1213 14:28:31.432427 1422 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2d5423a1-5f32-40d7-8edd-6c1c172668ff-hubble-tls\") pod \"2d5423a1-5f32-40d7-8edd-6c1c172668ff\" (UID: \"2d5423a1-5f32-40d7-8edd-6c1c172668ff\") "
Dec 13 14:28:31.433265 kubelet[1422]: I1213 14:28:31.432445 1422 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2d5423a1-5f32-40d7-8edd-6c1c172668ff-host-proc-sys-kernel\") pod \"2d5423a1-5f32-40d7-8edd-6c1c172668ff\" (UID: \"2d5423a1-5f32-40d7-8edd-6c1c172668ff\") "
Dec 13 14:28:31.433265 kubelet[1422]: I1213 14:28:31.432440 1422 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d5423a1-5f32-40d7-8edd-6c1c172668ff-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2d5423a1-5f32-40d7-8edd-6c1c172668ff" (UID: "2d5423a1-5f32-40d7-8edd-6c1c172668ff"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:28:31.433265 kubelet[1422]: I1213 14:28:31.432495 1422 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d5423a1-5f32-40d7-8edd-6c1c172668ff-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "2d5423a1-5f32-40d7-8edd-6c1c172668ff" (UID: "2d5423a1-5f32-40d7-8edd-6c1c172668ff"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:28:31.433265 kubelet[1422]: I1213 14:28:31.432462 1422 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2d5423a1-5f32-40d7-8edd-6c1c172668ff-cilium-cgroup\") pod \"2d5423a1-5f32-40d7-8edd-6c1c172668ff\" (UID: \"2d5423a1-5f32-40d7-8edd-6c1c172668ff\") "
Dec 13 14:28:31.433413 kubelet[1422]: I1213 14:28:31.432551 1422 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2d5423a1-5f32-40d7-8edd-6c1c172668ff-xtables-lock\") pod \"2d5423a1-5f32-40d7-8edd-6c1c172668ff\" (UID: \"2d5423a1-5f32-40d7-8edd-6c1c172668ff\") "
Dec 13 14:28:31.433413 kubelet[1422]: I1213 14:28:31.432577 1422 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2d5423a1-5f32-40d7-8edd-6c1c172668ff-clustermesh-secrets\") pod \"2d5423a1-5f32-40d7-8edd-6c1c172668ff\" (UID: \"2d5423a1-5f32-40d7-8edd-6c1c172668ff\") "
Dec 13 14:28:31.433413 kubelet[1422]: I1213 14:28:31.432591 1422 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2d5423a1-5f32-40d7-8edd-6c1c172668ff-bpf-maps\") pod \"2d5423a1-5f32-40d7-8edd-6c1c172668ff\" (UID: \"2d5423a1-5f32-40d7-8edd-6c1c172668ff\") "
Dec 13 14:28:31.433413 kubelet[1422]: I1213 14:28:31.432610 1422 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2d5423a1-5f32-40d7-8edd-6c1c172668ff-etc-cni-netd\") pod \"2d5423a1-5f32-40d7-8edd-6c1c172668ff\" (UID: \"2d5423a1-5f32-40d7-8edd-6c1c172668ff\") "
Dec 13 14:28:31.433413 kubelet[1422]: I1213 14:28:31.432625 1422 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2d5423a1-5f32-40d7-8edd-6c1c172668ff-hostproc\") pod \"2d5423a1-5f32-40d7-8edd-6c1c172668ff\" (UID: \"2d5423a1-5f32-40d7-8edd-6c1c172668ff\") "
Dec 13 14:28:31.433413 kubelet[1422]: I1213 14:28:31.432660 1422 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2d5423a1-5f32-40d7-8edd-6c1c172668ff-cilium-cgroup\") on node \"10.0.0.107\" DevicePath \"\""
Dec 13 14:28:31.433413 kubelet[1422]: I1213 14:28:31.432667 1422 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2d5423a1-5f32-40d7-8edd-6c1c172668ff-lib-modules\") on node \"10.0.0.107\" DevicePath \"\""
Dec 13 14:28:31.433584 kubelet[1422]: I1213 14:28:31.432685 1422 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d5423a1-5f32-40d7-8edd-6c1c172668ff-hostproc" (OuterVolumeSpecName: "hostproc") pod "2d5423a1-5f32-40d7-8edd-6c1c172668ff" (UID: "2d5423a1-5f32-40d7-8edd-6c1c172668ff"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:28:31.433584 kubelet[1422]: I1213 14:28:31.432699 1422 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d5423a1-5f32-40d7-8edd-6c1c172668ff-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "2d5423a1-5f32-40d7-8edd-6c1c172668ff" (UID: "2d5423a1-5f32-40d7-8edd-6c1c172668ff"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:28:31.433584 kubelet[1422]: I1213 14:28:31.433013 1422 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d5423a1-5f32-40d7-8edd-6c1c172668ff-cni-path" (OuterVolumeSpecName: "cni-path") pod "2d5423a1-5f32-40d7-8edd-6c1c172668ff" (UID: "2d5423a1-5f32-40d7-8edd-6c1c172668ff"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:28:31.433584 kubelet[1422]: I1213 14:28:31.433041 1422 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d5423a1-5f32-40d7-8edd-6c1c172668ff-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "2d5423a1-5f32-40d7-8edd-6c1c172668ff" (UID: "2d5423a1-5f32-40d7-8edd-6c1c172668ff"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:28:31.433584 kubelet[1422]: I1213 14:28:31.433056 1422 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d5423a1-5f32-40d7-8edd-6c1c172668ff-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "2d5423a1-5f32-40d7-8edd-6c1c172668ff" (UID: "2d5423a1-5f32-40d7-8edd-6c1c172668ff"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:28:31.433721 kubelet[1422]: I1213 14:28:31.433099 1422 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d5423a1-5f32-40d7-8edd-6c1c172668ff-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "2d5423a1-5f32-40d7-8edd-6c1c172668ff" (UID: "2d5423a1-5f32-40d7-8edd-6c1c172668ff"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:28:31.433721 kubelet[1422]: I1213 14:28:31.433116 1422 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d5423a1-5f32-40d7-8edd-6c1c172668ff-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "2d5423a1-5f32-40d7-8edd-6c1c172668ff" (UID: "2d5423a1-5f32-40d7-8edd-6c1c172668ff"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:28:31.433721 kubelet[1422]: I1213 14:28:31.433132 1422 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d5423a1-5f32-40d7-8edd-6c1c172668ff-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "2d5423a1-5f32-40d7-8edd-6c1c172668ff" (UID: "2d5423a1-5f32-40d7-8edd-6c1c172668ff"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:28:31.434863 kubelet[1422]: I1213 14:28:31.434802 1422 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d5423a1-5f32-40d7-8edd-6c1c172668ff-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2d5423a1-5f32-40d7-8edd-6c1c172668ff" (UID: "2d5423a1-5f32-40d7-8edd-6c1c172668ff"). InnerVolumeSpecName "cilium-config-path".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:28:31.435590 kubelet[1422]: I1213 14:28:31.435550 1422 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d5423a1-5f32-40d7-8edd-6c1c172668ff-kube-api-access-mcb95" (OuterVolumeSpecName: "kube-api-access-mcb95") pod "2d5423a1-5f32-40d7-8edd-6c1c172668ff" (UID: "2d5423a1-5f32-40d7-8edd-6c1c172668ff"). InnerVolumeSpecName "kube-api-access-mcb95". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:28:31.435794 kubelet[1422]: I1213 14:28:31.435758 1422 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d5423a1-5f32-40d7-8edd-6c1c172668ff-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "2d5423a1-5f32-40d7-8edd-6c1c172668ff" (UID: "2d5423a1-5f32-40d7-8edd-6c1c172668ff"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:28:31.436867 kubelet[1422]: I1213 14:28:31.436835 1422 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d5423a1-5f32-40d7-8edd-6c1c172668ff-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "2d5423a1-5f32-40d7-8edd-6c1c172668ff" (UID: "2d5423a1-5f32-40d7-8edd-6c1c172668ff"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:28:31.437871 systemd[1]: var-lib-kubelet-pods-2d5423a1\x2d5f32\x2d40d7\x2d8edd\x2d6c1c172668ff-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmcb95.mount: Deactivated successfully. Dec 13 14:28:31.437985 systemd[1]: var-lib-kubelet-pods-2d5423a1\x2d5f32\x2d40d7\x2d8edd\x2d6c1c172668ff-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 14:28:31.438035 systemd[1]: var-lib-kubelet-pods-2d5423a1\x2d5f32\x2d40d7\x2d8edd\x2d6c1c172668ff-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Dec 13 14:28:31.533544 kubelet[1422]: I1213 14:28:31.533496 1422 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2d5423a1-5f32-40d7-8edd-6c1c172668ff-host-proc-sys-net\") on node \"10.0.0.107\" DevicePath \"\"" Dec 13 14:28:31.533544 kubelet[1422]: I1213 14:28:31.533537 1422 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2d5423a1-5f32-40d7-8edd-6c1c172668ff-hubble-tls\") on node \"10.0.0.107\" DevicePath \"\"" Dec 13 14:28:31.533544 kubelet[1422]: I1213 14:28:31.533547 1422 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2d5423a1-5f32-40d7-8edd-6c1c172668ff-bpf-maps\") on node \"10.0.0.107\" DevicePath \"\"" Dec 13 14:28:31.533544 kubelet[1422]: I1213 14:28:31.533558 1422 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2d5423a1-5f32-40d7-8edd-6c1c172668ff-host-proc-sys-kernel\") on node \"10.0.0.107\" DevicePath \"\"" Dec 13 14:28:31.533873 kubelet[1422]: I1213 14:28:31.533568 1422 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2d5423a1-5f32-40d7-8edd-6c1c172668ff-xtables-lock\") on node \"10.0.0.107\" DevicePath \"\"" Dec 13 14:28:31.533873 kubelet[1422]: I1213 14:28:31.533580 1422 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2d5423a1-5f32-40d7-8edd-6c1c172668ff-clustermesh-secrets\") on node \"10.0.0.107\" DevicePath \"\"" Dec 13 14:28:31.533873 kubelet[1422]: I1213 14:28:31.533588 1422 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2d5423a1-5f32-40d7-8edd-6c1c172668ff-etc-cni-netd\") on node \"10.0.0.107\" DevicePath \"\"" Dec 13 14:28:31.533873 kubelet[1422]: I1213 14:28:31.533598 1422 reconciler_common.go:289] "Volume 
detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2d5423a1-5f32-40d7-8edd-6c1c172668ff-hostproc\") on node \"10.0.0.107\" DevicePath \"\"" Dec 13 14:28:31.533873 kubelet[1422]: I1213 14:28:31.533608 1422 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2d5423a1-5f32-40d7-8edd-6c1c172668ff-cilium-run\") on node \"10.0.0.107\" DevicePath \"\"" Dec 13 14:28:31.533873 kubelet[1422]: I1213 14:28:31.533616 1422 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-mcb95\" (UniqueName: \"kubernetes.io/projected/2d5423a1-5f32-40d7-8edd-6c1c172668ff-kube-api-access-mcb95\") on node \"10.0.0.107\" DevicePath \"\"" Dec 13 14:28:31.533873 kubelet[1422]: I1213 14:28:31.533625 1422 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2d5423a1-5f32-40d7-8edd-6c1c172668ff-cni-path\") on node \"10.0.0.107\" DevicePath \"\"" Dec 13 14:28:31.533873 kubelet[1422]: I1213 14:28:31.533635 1422 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2d5423a1-5f32-40d7-8edd-6c1c172668ff-cilium-config-path\") on node \"10.0.0.107\" DevicePath \"\"" Dec 13 14:28:31.785292 kubelet[1422]: I1213 14:28:31.785256 1422 scope.go:117] "RemoveContainer" containerID="cfdc810a4e6f70c7ec174e935fb479ab6035f8abdfdac8b98e27d28ec409d1c4" Dec 13 14:28:31.786806 env[1216]: time="2024-12-13T14:28:31.786764028Z" level=info msg="RemoveContainer for \"cfdc810a4e6f70c7ec174e935fb479ab6035f8abdfdac8b98e27d28ec409d1c4\"" Dec 13 14:28:31.789006 systemd[1]: Removed slice kubepods-burstable-pod2d5423a1_5f32_40d7_8edd_6c1c172668ff.slice. Dec 13 14:28:31.789111 systemd[1]: kubepods-burstable-pod2d5423a1_5f32_40d7_8edd_6c1c172668ff.slice: Consumed 7.305s CPU time. 
Dec 13 14:28:31.795230 env[1216]: time="2024-12-13T14:28:31.795157473Z" level=info msg="RemoveContainer for \"cfdc810a4e6f70c7ec174e935fb479ab6035f8abdfdac8b98e27d28ec409d1c4\" returns successfully" Dec 13 14:28:31.795683 kubelet[1422]: I1213 14:28:31.795620 1422 scope.go:117] "RemoveContainer" containerID="838a41d57b0fe973c8cf1f7c2196a050c04bf4cff10dbb268118eec0bbfcd601" Dec 13 14:28:31.797473 env[1216]: time="2024-12-13T14:28:31.797429793Z" level=info msg="RemoveContainer for \"838a41d57b0fe973c8cf1f7c2196a050c04bf4cff10dbb268118eec0bbfcd601\"" Dec 13 14:28:31.801446 env[1216]: time="2024-12-13T14:28:31.801399695Z" level=info msg="RemoveContainer for \"838a41d57b0fe973c8cf1f7c2196a050c04bf4cff10dbb268118eec0bbfcd601\" returns successfully" Dec 13 14:28:31.801696 kubelet[1422]: I1213 14:28:31.801673 1422 scope.go:117] "RemoveContainer" containerID="0c07d106675422c0f8eec46812439b13f80cbff2ab60a9c427f584579706f727" Dec 13 14:28:31.802958 env[1216]: time="2024-12-13T14:28:31.802934800Z" level=info msg="RemoveContainer for \"0c07d106675422c0f8eec46812439b13f80cbff2ab60a9c427f584579706f727\"" Dec 13 14:28:31.808624 env[1216]: time="2024-12-13T14:28:31.808559000Z" level=info msg="RemoveContainer for \"0c07d106675422c0f8eec46812439b13f80cbff2ab60a9c427f584579706f727\" returns successfully" Dec 13 14:28:31.809044 kubelet[1422]: I1213 14:28:31.808982 1422 scope.go:117] "RemoveContainer" containerID="f3ea78208c83990c8cd873bc754d9707c9ad18edd9b6d1f6aaa7537d3731233a" Dec 13 14:28:31.810848 env[1216]: time="2024-12-13T14:28:31.810759255Z" level=info msg="RemoveContainer for \"f3ea78208c83990c8cd873bc754d9707c9ad18edd9b6d1f6aaa7537d3731233a\"" Dec 13 14:28:31.856889 env[1216]: time="2024-12-13T14:28:31.856829959Z" level=info msg="RemoveContainer for \"f3ea78208c83990c8cd873bc754d9707c9ad18edd9b6d1f6aaa7537d3731233a\" returns successfully" Dec 13 14:28:31.857178 kubelet[1422]: I1213 14:28:31.857141 1422 scope.go:117] "RemoveContainer" 
containerID="a4247cd706f502ec74d7a66eaf0d2b78abe4f00a816ef5c3ac6cc422ba433e42" Dec 13 14:28:31.858474 env[1216]: time="2024-12-13T14:28:31.858439143Z" level=info msg="RemoveContainer for \"a4247cd706f502ec74d7a66eaf0d2b78abe4f00a816ef5c3ac6cc422ba433e42\"" Dec 13 14:28:31.907742 env[1216]: time="2024-12-13T14:28:31.907651632Z" level=info msg="RemoveContainer for \"a4247cd706f502ec74d7a66eaf0d2b78abe4f00a816ef5c3ac6cc422ba433e42\" returns successfully" Dec 13 14:28:31.908068 kubelet[1422]: I1213 14:28:31.908026 1422 scope.go:117] "RemoveContainer" containerID="cfdc810a4e6f70c7ec174e935fb479ab6035f8abdfdac8b98e27d28ec409d1c4" Dec 13 14:28:31.908495 env[1216]: time="2024-12-13T14:28:31.908363430Z" level=error msg="ContainerStatus for \"cfdc810a4e6f70c7ec174e935fb479ab6035f8abdfdac8b98e27d28ec409d1c4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cfdc810a4e6f70c7ec174e935fb479ab6035f8abdfdac8b98e27d28ec409d1c4\": not found" Dec 13 14:28:31.908690 kubelet[1422]: E1213 14:28:31.908656 1422 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cfdc810a4e6f70c7ec174e935fb479ab6035f8abdfdac8b98e27d28ec409d1c4\": not found" containerID="cfdc810a4e6f70c7ec174e935fb479ab6035f8abdfdac8b98e27d28ec409d1c4" Dec 13 14:28:31.908857 kubelet[1422]: I1213 14:28:31.908705 1422 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cfdc810a4e6f70c7ec174e935fb479ab6035f8abdfdac8b98e27d28ec409d1c4"} err="failed to get container status \"cfdc810a4e6f70c7ec174e935fb479ab6035f8abdfdac8b98e27d28ec409d1c4\": rpc error: code = NotFound desc = an error occurred when try to find container \"cfdc810a4e6f70c7ec174e935fb479ab6035f8abdfdac8b98e27d28ec409d1c4\": not found" Dec 13 14:28:31.908857 kubelet[1422]: I1213 14:28:31.908853 1422 scope.go:117] "RemoveContainer" 
containerID="838a41d57b0fe973c8cf1f7c2196a050c04bf4cff10dbb268118eec0bbfcd601" Dec 13 14:28:31.909150 env[1216]: time="2024-12-13T14:28:31.909101066Z" level=error msg="ContainerStatus for \"838a41d57b0fe973c8cf1f7c2196a050c04bf4cff10dbb268118eec0bbfcd601\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"838a41d57b0fe973c8cf1f7c2196a050c04bf4cff10dbb268118eec0bbfcd601\": not found" Dec 13 14:28:31.909349 kubelet[1422]: E1213 14:28:31.909314 1422 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"838a41d57b0fe973c8cf1f7c2196a050c04bf4cff10dbb268118eec0bbfcd601\": not found" containerID="838a41d57b0fe973c8cf1f7c2196a050c04bf4cff10dbb268118eec0bbfcd601" Dec 13 14:28:31.909428 kubelet[1422]: I1213 14:28:31.909364 1422 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"838a41d57b0fe973c8cf1f7c2196a050c04bf4cff10dbb268118eec0bbfcd601"} err="failed to get container status \"838a41d57b0fe973c8cf1f7c2196a050c04bf4cff10dbb268118eec0bbfcd601\": rpc error: code = NotFound desc = an error occurred when try to find container \"838a41d57b0fe973c8cf1f7c2196a050c04bf4cff10dbb268118eec0bbfcd601\": not found" Dec 13 14:28:31.909428 kubelet[1422]: I1213 14:28:31.909421 1422 scope.go:117] "RemoveContainer" containerID="0c07d106675422c0f8eec46812439b13f80cbff2ab60a9c427f584579706f727" Dec 13 14:28:31.909864 env[1216]: time="2024-12-13T14:28:31.909777236Z" level=error msg="ContainerStatus for \"0c07d106675422c0f8eec46812439b13f80cbff2ab60a9c427f584579706f727\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0c07d106675422c0f8eec46812439b13f80cbff2ab60a9c427f584579706f727\": not found" Dec 13 14:28:31.910134 kubelet[1422]: E1213 14:28:31.910113 1422 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = an error occurred when try to find container \"0c07d106675422c0f8eec46812439b13f80cbff2ab60a9c427f584579706f727\": not found" containerID="0c07d106675422c0f8eec46812439b13f80cbff2ab60a9c427f584579706f727" Dec 13 14:28:31.910209 kubelet[1422]: I1213 14:28:31.910133 1422 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0c07d106675422c0f8eec46812439b13f80cbff2ab60a9c427f584579706f727"} err="failed to get container status \"0c07d106675422c0f8eec46812439b13f80cbff2ab60a9c427f584579706f727\": rpc error: code = NotFound desc = an error occurred when try to find container \"0c07d106675422c0f8eec46812439b13f80cbff2ab60a9c427f584579706f727\": not found" Dec 13 14:28:31.910209 kubelet[1422]: I1213 14:28:31.910147 1422 scope.go:117] "RemoveContainer" containerID="f3ea78208c83990c8cd873bc754d9707c9ad18edd9b6d1f6aaa7537d3731233a" Dec 13 14:28:31.910360 env[1216]: time="2024-12-13T14:28:31.910311661Z" level=error msg="ContainerStatus for \"f3ea78208c83990c8cd873bc754d9707c9ad18edd9b6d1f6aaa7537d3731233a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f3ea78208c83990c8cd873bc754d9707c9ad18edd9b6d1f6aaa7537d3731233a\": not found" Dec 13 14:28:31.910529 kubelet[1422]: E1213 14:28:31.910507 1422 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f3ea78208c83990c8cd873bc754d9707c9ad18edd9b6d1f6aaa7537d3731233a\": not found" containerID="f3ea78208c83990c8cd873bc754d9707c9ad18edd9b6d1f6aaa7537d3731233a" Dec 13 14:28:31.910529 kubelet[1422]: I1213 14:28:31.910526 1422 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f3ea78208c83990c8cd873bc754d9707c9ad18edd9b6d1f6aaa7537d3731233a"} err="failed to get container status \"f3ea78208c83990c8cd873bc754d9707c9ad18edd9b6d1f6aaa7537d3731233a\": rpc error: code = NotFound desc = 
an error occurred when try to find container \"f3ea78208c83990c8cd873bc754d9707c9ad18edd9b6d1f6aaa7537d3731233a\": not found" Dec 13 14:28:31.910623 kubelet[1422]: I1213 14:28:31.910539 1422 scope.go:117] "RemoveContainer" containerID="a4247cd706f502ec74d7a66eaf0d2b78abe4f00a816ef5c3ac6cc422ba433e42" Dec 13 14:28:31.910753 env[1216]: time="2024-12-13T14:28:31.910701634Z" level=error msg="ContainerStatus for \"a4247cd706f502ec74d7a66eaf0d2b78abe4f00a816ef5c3ac6cc422ba433e42\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a4247cd706f502ec74d7a66eaf0d2b78abe4f00a816ef5c3ac6cc422ba433e42\": not found" Dec 13 14:28:31.910950 kubelet[1422]: E1213 14:28:31.910908 1422 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a4247cd706f502ec74d7a66eaf0d2b78abe4f00a816ef5c3ac6cc422ba433e42\": not found" containerID="a4247cd706f502ec74d7a66eaf0d2b78abe4f00a816ef5c3ac6cc422ba433e42" Dec 13 14:28:31.911030 kubelet[1422]: I1213 14:28:31.910964 1422 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a4247cd706f502ec74d7a66eaf0d2b78abe4f00a816ef5c3ac6cc422ba433e42"} err="failed to get container status \"a4247cd706f502ec74d7a66eaf0d2b78abe4f00a816ef5c3ac6cc422ba433e42\": rpc error: code = NotFound desc = an error occurred when try to find container \"a4247cd706f502ec74d7a66eaf0d2b78abe4f00a816ef5c3ac6cc422ba433e42\": not found" Dec 13 14:28:32.216834 kubelet[1422]: E1213 14:28:32.216737 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:32.662581 kubelet[1422]: I1213 14:28:32.662440 1422 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d5423a1-5f32-40d7-8edd-6c1c172668ff" path="/var/lib/kubelet/pods/2d5423a1-5f32-40d7-8edd-6c1c172668ff/volumes" Dec 13 14:28:33.161129 kubelet[1422]: 
E1213 14:28:33.161049 1422 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:33.217540 kubelet[1422]: E1213 14:28:33.217481 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:33.284934 env[1216]: time="2024-12-13T14:28:33.284880359Z" level=info msg="StopPodSandbox for \"e468131f7d6fa4f95f3440e8d2a4cebb283c87ff9829cd25c1ef04013bfe56ad\"" Dec 13 14:28:33.285345 env[1216]: time="2024-12-13T14:28:33.284986387Z" level=info msg="TearDown network for sandbox \"e468131f7d6fa4f95f3440e8d2a4cebb283c87ff9829cd25c1ef04013bfe56ad\" successfully" Dec 13 14:28:33.285345 env[1216]: time="2024-12-13T14:28:33.285024148Z" level=info msg="StopPodSandbox for \"e468131f7d6fa4f95f3440e8d2a4cebb283c87ff9829cd25c1ef04013bfe56ad\" returns successfully" Dec 13 14:28:33.285442 env[1216]: time="2024-12-13T14:28:33.285413220Z" level=info msg="RemovePodSandbox for \"e468131f7d6fa4f95f3440e8d2a4cebb283c87ff9829cd25c1ef04013bfe56ad\"" Dec 13 14:28:33.285475 env[1216]: time="2024-12-13T14:28:33.285439389Z" level=info msg="Forcibly stopping sandbox \"e468131f7d6fa4f95f3440e8d2a4cebb283c87ff9829cd25c1ef04013bfe56ad\"" Dec 13 14:28:33.285534 env[1216]: time="2024-12-13T14:28:33.285512115Z" level=info msg="TearDown network for sandbox \"e468131f7d6fa4f95f3440e8d2a4cebb283c87ff9829cd25c1ef04013bfe56ad\" successfully" Dec 13 14:28:33.289315 env[1216]: time="2024-12-13T14:28:33.289275596Z" level=info msg="RemovePodSandbox \"e468131f7d6fa4f95f3440e8d2a4cebb283c87ff9829cd25c1ef04013bfe56ad\" returns successfully" Dec 13 14:28:33.546866 kubelet[1422]: I1213 14:28:33.546781 1422 topology_manager.go:215] "Topology Admit Handler" podUID="7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3" podNamespace="kube-system" podName="cilium-h2jzz" Dec 13 14:28:33.547067 kubelet[1422]: E1213 14:28:33.546898 1422 cpu_manager.go:395] "RemoveStaleState: removing container" 
podUID="2d5423a1-5f32-40d7-8edd-6c1c172668ff" containerName="apply-sysctl-overwrites" Dec 13 14:28:33.547067 kubelet[1422]: E1213 14:28:33.546918 1422 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2d5423a1-5f32-40d7-8edd-6c1c172668ff" containerName="mount-bpf-fs" Dec 13 14:28:33.547067 kubelet[1422]: E1213 14:28:33.546925 1422 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2d5423a1-5f32-40d7-8edd-6c1c172668ff" containerName="cilium-agent" Dec 13 14:28:33.547067 kubelet[1422]: E1213 14:28:33.546933 1422 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2d5423a1-5f32-40d7-8edd-6c1c172668ff" containerName="mount-cgroup" Dec 13 14:28:33.547067 kubelet[1422]: E1213 14:28:33.546940 1422 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2d5423a1-5f32-40d7-8edd-6c1c172668ff" containerName="clean-cilium-state" Dec 13 14:28:33.547067 kubelet[1422]: I1213 14:28:33.546971 1422 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d5423a1-5f32-40d7-8edd-6c1c172668ff" containerName="cilium-agent" Dec 13 14:28:33.547305 kubelet[1422]: I1213 14:28:33.547276 1422 topology_manager.go:215] "Topology Admit Handler" podUID="e540cc16-5f11-406e-b6aa-b4e4a2778d3c" podNamespace="kube-system" podName="cilium-operator-599987898-twk2c" Dec 13 14:28:33.553135 systemd[1]: Created slice kubepods-burstable-pod7c3c3677_ce71_4f16_8cb1_31a6b91ed4b3.slice. Dec 13 14:28:33.572073 systemd[1]: Created slice kubepods-besteffort-pode540cc16_5f11_406e_b6aa_b4e4a2778d3c.slice. 
Dec 13 14:28:33.648162 kubelet[1422]: I1213 14:28:33.648116 1422 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3-bpf-maps\") pod \"cilium-h2jzz\" (UID: \"7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3\") " pod="kube-system/cilium-h2jzz" Dec 13 14:28:33.648162 kubelet[1422]: I1213 14:28:33.648167 1422 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3-xtables-lock\") pod \"cilium-h2jzz\" (UID: \"7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3\") " pod="kube-system/cilium-h2jzz" Dec 13 14:28:33.648458 kubelet[1422]: I1213 14:28:33.648199 1422 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3-host-proc-sys-net\") pod \"cilium-h2jzz\" (UID: \"7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3\") " pod="kube-system/cilium-h2jzz" Dec 13 14:28:33.648458 kubelet[1422]: I1213 14:28:33.648225 1422 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3-cni-path\") pod \"cilium-h2jzz\" (UID: \"7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3\") " pod="kube-system/cilium-h2jzz" Dec 13 14:28:33.648458 kubelet[1422]: I1213 14:28:33.648242 1422 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3-clustermesh-secrets\") pod \"cilium-h2jzz\" (UID: \"7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3\") " pod="kube-system/cilium-h2jzz" Dec 13 14:28:33.648458 kubelet[1422]: I1213 14:28:33.648258 1422 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3-cilium-config-path\") pod \"cilium-h2jzz\" (UID: \"7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3\") " pod="kube-system/cilium-h2jzz" Dec 13 14:28:33.648458 kubelet[1422]: I1213 14:28:33.648276 1422 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3-cilium-ipsec-secrets\") pod \"cilium-h2jzz\" (UID: \"7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3\") " pod="kube-system/cilium-h2jzz" Dec 13 14:28:33.648623 kubelet[1422]: I1213 14:28:33.648296 1422 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3-host-proc-sys-kernel\") pod \"cilium-h2jzz\" (UID: \"7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3\") " pod="kube-system/cilium-h2jzz" Dec 13 14:28:33.648623 kubelet[1422]: I1213 14:28:33.648313 1422 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3-cilium-run\") pod \"cilium-h2jzz\" (UID: \"7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3\") " pod="kube-system/cilium-h2jzz" Dec 13 14:28:33.648623 kubelet[1422]: I1213 14:28:33.648330 1422 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3-hostproc\") pod \"cilium-h2jzz\" (UID: \"7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3\") " pod="kube-system/cilium-h2jzz" Dec 13 14:28:33.648623 kubelet[1422]: I1213 14:28:33.648351 1422 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3-etc-cni-netd\") pod \"cilium-h2jzz\" (UID: \"7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3\") " pod="kube-system/cilium-h2jzz" Dec 13 14:28:33.648623 kubelet[1422]: I1213 14:28:33.648389 1422 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3-lib-modules\") pod \"cilium-h2jzz\" (UID: \"7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3\") " pod="kube-system/cilium-h2jzz" Dec 13 14:28:33.648623 kubelet[1422]: I1213 14:28:33.648411 1422 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rm2nw\" (UniqueName: \"kubernetes.io/projected/7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3-kube-api-access-rm2nw\") pod \"cilium-h2jzz\" (UID: \"7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3\") " pod="kube-system/cilium-h2jzz" Dec 13 14:28:33.648769 kubelet[1422]: I1213 14:28:33.648432 1422 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e540cc16-5f11-406e-b6aa-b4e4a2778d3c-cilium-config-path\") pod \"cilium-operator-599987898-twk2c\" (UID: \"e540cc16-5f11-406e-b6aa-b4e4a2778d3c\") " pod="kube-system/cilium-operator-599987898-twk2c" Dec 13 14:28:33.648769 kubelet[1422]: I1213 14:28:33.648452 1422 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bks8\" (UniqueName: \"kubernetes.io/projected/e540cc16-5f11-406e-b6aa-b4e4a2778d3c-kube-api-access-4bks8\") pod \"cilium-operator-599987898-twk2c\" (UID: \"e540cc16-5f11-406e-b6aa-b4e4a2778d3c\") " pod="kube-system/cilium-operator-599987898-twk2c" Dec 13 14:28:33.648769 kubelet[1422]: I1213 14:28:33.648469 1422 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3-cilium-cgroup\") pod \"cilium-h2jzz\" (UID: \"7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3\") " pod="kube-system/cilium-h2jzz" Dec 13 14:28:33.648769 kubelet[1422]: I1213 14:28:33.648493 1422 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3-hubble-tls\") pod \"cilium-h2jzz\" (UID: \"7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3\") " pod="kube-system/cilium-h2jzz" Dec 13 14:28:33.705765 kubelet[1422]: E1213 14:28:33.705700 1422 pod_workers.go:1298] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-rm2nw lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-h2jzz" podUID="7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3" Dec 13 14:28:33.874467 kubelet[1422]: E1213 14:28:33.874308 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:28:33.874966 env[1216]: time="2024-12-13T14:28:33.874914865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-twk2c,Uid:e540cc16-5f11-406e-b6aa-b4e4a2778d3c,Namespace:kube-system,Attempt:0,}" Dec 13 14:28:33.888892 env[1216]: time="2024-12-13T14:28:33.888795786Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:28:33.888892 env[1216]: time="2024-12-13T14:28:33.888846390Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:28:33.888892 env[1216]: time="2024-12-13T14:28:33.888864915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:28:33.889480 env[1216]: time="2024-12-13T14:28:33.889342132Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8a67907bfa3ef713d53fc54d019b8e9753839b8432e8eda02e7d6c6bf498beef pid=2963 runtime=io.containerd.runc.v2 Dec 13 14:28:33.901869 systemd[1]: Started cri-containerd-8a67907bfa3ef713d53fc54d019b8e9753839b8432e8eda02e7d6c6bf498beef.scope. Dec 13 14:28:33.938402 env[1216]: time="2024-12-13T14:28:33.937407932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-twk2c,Uid:e540cc16-5f11-406e-b6aa-b4e4a2778d3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a67907bfa3ef713d53fc54d019b8e9753839b8432e8eda02e7d6c6bf498beef\"" Dec 13 14:28:33.938540 kubelet[1422]: E1213 14:28:33.938131 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:28:33.939260 env[1216]: time="2024-12-13T14:28:33.939235165Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 14:28:33.950023 kubelet[1422]: I1213 14:28:33.949991 1422 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3-etc-cni-netd\") pod \"7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3\" (UID: \"7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3\") " Dec 13 14:28:33.950137 kubelet[1422]: I1213 14:28:33.950029 1422 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3-cilium-cgroup\") pod \"7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3\" (UID: \"7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3\") " Dec 13 14:28:33.950137 kubelet[1422]: I1213 14:28:33.950064 1422 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3-hubble-tls\") pod \"7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3\" (UID: \"7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3\") " Dec 13 14:28:33.950137 kubelet[1422]: I1213 14:28:33.950083 1422 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3-bpf-maps\") pod \"7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3\" (UID: \"7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3\") " Dec 13 14:28:33.950137 kubelet[1422]: I1213 14:28:33.950077 1422 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3" (UID: "7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:28:33.950137 kubelet[1422]: I1213 14:28:33.950102 1422 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3-cni-path\") pod \"7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3\" (UID: \"7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3\") " Dec 13 14:28:33.950137 kubelet[1422]: I1213 14:28:33.950119 1422 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3-hostproc\") pod \"7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3\" (UID: \"7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3\") " Dec 13 14:28:33.950392 kubelet[1422]: I1213 14:28:33.950140 1422 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3-cilium-ipsec-secrets\") pod \"7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3\" (UID: \"7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3\") " Dec 13 14:28:33.950392 kubelet[1422]: I1213 14:28:33.950161 1422 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3-cilium-run\") pod \"7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3\" (UID: \"7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3\") " Dec 13 14:28:33.950392 kubelet[1422]: I1213 14:28:33.950179 1422 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3-lib-modules\") pod \"7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3\" (UID: \"7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3\") " Dec 13 14:28:33.950392 kubelet[1422]: I1213 14:28:33.950197 1422 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3-xtables-lock\") pod \"7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3\" (UID: \"7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3\") " Dec 13 14:28:33.950392 kubelet[1422]: I1213 14:28:33.950218 1422 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3-cilium-config-path\") pod \"7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3\" (UID: \"7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3\") " Dec 13 14:28:33.950392 kubelet[1422]: I1213 14:28:33.950238 1422 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rm2nw\" (UniqueName: \"kubernetes.io/projected/7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3-kube-api-access-rm2nw\") pod \"7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3\" (UID: \"7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3\") " Dec 13 14:28:33.950652 kubelet[1422]: I1213 14:28:33.950263 1422 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3-clustermesh-secrets\") pod \"7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3\" (UID: \"7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3\") " Dec 13 14:28:33.950652 kubelet[1422]: I1213 14:28:33.950281 1422 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3-host-proc-sys-net\") pod \"7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3\" (UID: \"7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3\") " Dec 13 14:28:33.950652 kubelet[1422]: I1213 14:28:33.950299 1422 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3-host-proc-sys-kernel\") pod \"7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3\" (UID: \"7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3\") " Dec 13 14:28:33.950652 
kubelet[1422]: I1213 14:28:33.950336 1422 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3-etc-cni-netd\") on node \"10.0.0.107\" DevicePath \"\"" Dec 13 14:28:33.950652 kubelet[1422]: I1213 14:28:33.950139 1422 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3" (UID: "7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:28:33.950652 kubelet[1422]: I1213 14:28:33.950155 1422 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3-cni-path" (OuterVolumeSpecName: "cni-path") pod "7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3" (UID: "7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:28:33.950848 kubelet[1422]: I1213 14:28:33.950395 1422 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3" (UID: "7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:28:33.950848 kubelet[1422]: I1213 14:28:33.950438 1422 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3" (UID: "7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:28:33.950848 kubelet[1422]: I1213 14:28:33.950459 1422 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3" (UID: "7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:28:33.950848 kubelet[1422]: I1213 14:28:33.950467 1422 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3" (UID: "7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:28:33.950848 kubelet[1422]: I1213 14:28:33.950480 1422 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3" (UID: "7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:28:33.952571 kubelet[1422]: I1213 14:28:33.951154 1422 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3-hostproc" (OuterVolumeSpecName: "hostproc") pod "7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3" (UID: "7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:28:33.952571 kubelet[1422]: I1213 14:28:33.951185 1422 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3" (UID: "7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:28:33.952824 kubelet[1422]: I1213 14:28:33.952800 1422 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3-kube-api-access-rm2nw" (OuterVolumeSpecName: "kube-api-access-rm2nw") pod "7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3" (UID: "7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3"). InnerVolumeSpecName "kube-api-access-rm2nw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:28:33.952955 kubelet[1422]: I1213 14:28:33.952929 1422 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3" (UID: "7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:28:33.953087 kubelet[1422]: I1213 14:28:33.953025 1422 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3" (UID: "7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:28:33.953220 kubelet[1422]: I1213 14:28:33.953198 1422 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3" (UID: "7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:28:33.954702 kubelet[1422]: I1213 14:28:33.954670 1422 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3" (UID: "7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:28:34.051350 kubelet[1422]: I1213 14:28:34.051303 1422 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3-host-proc-sys-kernel\") on node \"10.0.0.107\" DevicePath \"\"" Dec 13 14:28:34.051350 kubelet[1422]: I1213 14:28:34.051345 1422 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-rm2nw\" (UniqueName: \"kubernetes.io/projected/7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3-kube-api-access-rm2nw\") on node \"10.0.0.107\" DevicePath \"\"" Dec 13 14:28:34.051350 kubelet[1422]: I1213 14:28:34.051361 1422 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3-clustermesh-secrets\") on node \"10.0.0.107\" DevicePath \"\"" Dec 13 14:28:34.051595 kubelet[1422]: I1213 14:28:34.051386 1422 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3-host-proc-sys-net\") on node \"10.0.0.107\" DevicePath \"\"" Dec 13 14:28:34.051595 kubelet[1422]: I1213 14:28:34.051399 1422 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3-cilium-cgroup\") on node \"10.0.0.107\" DevicePath \"\"" Dec 13 14:28:34.051595 kubelet[1422]: I1213 14:28:34.051409 1422 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3-hubble-tls\") on node \"10.0.0.107\" DevicePath \"\"" Dec 13 14:28:34.051595 kubelet[1422]: I1213 14:28:34.051420 1422 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3-bpf-maps\") on node \"10.0.0.107\" DevicePath \"\"" Dec 13 14:28:34.051595 kubelet[1422]: I1213 14:28:34.051432 1422 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3-cni-path\") on node \"10.0.0.107\" DevicePath \"\"" Dec 13 14:28:34.051595 kubelet[1422]: I1213 14:28:34.051441 1422 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3-hostproc\") on node \"10.0.0.107\" DevicePath \"\"" Dec 13 14:28:34.051595 kubelet[1422]: I1213 14:28:34.051451 1422 reconciler_common.go:289] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3-cilium-ipsec-secrets\") on node \"10.0.0.107\" DevicePath \"\"" Dec 13 14:28:34.051595 kubelet[1422]: I1213 14:28:34.051461 1422 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3-cilium-run\") on node \"10.0.0.107\" DevicePath \"\"" Dec 13 
14:28:34.051755 kubelet[1422]: I1213 14:28:34.051471 1422 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3-lib-modules\") on node \"10.0.0.107\" DevicePath \"\"" Dec 13 14:28:34.051755 kubelet[1422]: I1213 14:28:34.051481 1422 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3-xtables-lock\") on node \"10.0.0.107\" DevicePath \"\"" Dec 13 14:28:34.051755 kubelet[1422]: I1213 14:28:34.051490 1422 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3-cilium-config-path\") on node \"10.0.0.107\" DevicePath \"\"" Dec 13 14:28:34.217960 kubelet[1422]: E1213 14:28:34.217897 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:34.664782 systemd[1]: Removed slice kubepods-burstable-pod7c3c3677_ce71_4f16_8cb1_31a6b91ed4b3.slice. Dec 13 14:28:34.728025 kubelet[1422]: E1213 14:28:34.727978 1422 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 14:28:34.755241 systemd[1]: var-lib-kubelet-pods-7c3c3677\x2dce71\x2d4f16\x2d8cb1\x2d31a6b91ed4b3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drm2nw.mount: Deactivated successfully. Dec 13 14:28:34.755320 systemd[1]: var-lib-kubelet-pods-7c3c3677\x2dce71\x2d4f16\x2d8cb1\x2d31a6b91ed4b3-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 14:28:34.755384 systemd[1]: var-lib-kubelet-pods-7c3c3677\x2dce71\x2d4f16\x2d8cb1\x2d31a6b91ed4b3-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Dec 13 14:28:34.755434 systemd[1]: var-lib-kubelet-pods-7c3c3677\x2dce71\x2d4f16\x2d8cb1\x2d31a6b91ed4b3-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Dec 13 14:28:34.924133 kubelet[1422]: I1213 14:28:34.923989 1422 topology_manager.go:215] "Topology Admit Handler" podUID="567f1faf-10ca-49d2-91be-23e166326d68" podNamespace="kube-system" podName="cilium-fksmj" Dec 13 14:28:34.930664 systemd[1]: Created slice kubepods-burstable-pod567f1faf_10ca_49d2_91be_23e166326d68.slice. Dec 13 14:28:35.056936 kubelet[1422]: I1213 14:28:35.056855 1422 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/567f1faf-10ca-49d2-91be-23e166326d68-hostproc\") pod \"cilium-fksmj\" (UID: \"567f1faf-10ca-49d2-91be-23e166326d68\") " pod="kube-system/cilium-fksmj" Dec 13 14:28:35.056936 kubelet[1422]: I1213 14:28:35.056923 1422 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/567f1faf-10ca-49d2-91be-23e166326d68-cni-path\") pod \"cilium-fksmj\" (UID: \"567f1faf-10ca-49d2-91be-23e166326d68\") " pod="kube-system/cilium-fksmj" Dec 13 14:28:35.057185 kubelet[1422]: I1213 14:28:35.056956 1422 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/567f1faf-10ca-49d2-91be-23e166326d68-host-proc-sys-net\") pod \"cilium-fksmj\" (UID: \"567f1faf-10ca-49d2-91be-23e166326d68\") " pod="kube-system/cilium-fksmj" Dec 13 14:28:35.057185 kubelet[1422]: I1213 14:28:35.056978 1422 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/567f1faf-10ca-49d2-91be-23e166326d68-cilium-run\") pod \"cilium-fksmj\" (UID: \"567f1faf-10ca-49d2-91be-23e166326d68\") " pod="kube-system/cilium-fksmj" 
Dec 13 14:28:35.057185 kubelet[1422]: I1213 14:28:35.056997 1422 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/567f1faf-10ca-49d2-91be-23e166326d68-cilium-config-path\") pod \"cilium-fksmj\" (UID: \"567f1faf-10ca-49d2-91be-23e166326d68\") " pod="kube-system/cilium-fksmj" Dec 13 14:28:35.057185 kubelet[1422]: I1213 14:28:35.057018 1422 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/567f1faf-10ca-49d2-91be-23e166326d68-hubble-tls\") pod \"cilium-fksmj\" (UID: \"567f1faf-10ca-49d2-91be-23e166326d68\") " pod="kube-system/cilium-fksmj" Dec 13 14:28:35.057185 kubelet[1422]: I1213 14:28:35.057037 1422 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/567f1faf-10ca-49d2-91be-23e166326d68-lib-modules\") pod \"cilium-fksmj\" (UID: \"567f1faf-10ca-49d2-91be-23e166326d68\") " pod="kube-system/cilium-fksmj" Dec 13 14:28:35.057185 kubelet[1422]: I1213 14:28:35.057053 1422 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/567f1faf-10ca-49d2-91be-23e166326d68-xtables-lock\") pod \"cilium-fksmj\" (UID: \"567f1faf-10ca-49d2-91be-23e166326d68\") " pod="kube-system/cilium-fksmj" Dec 13 14:28:35.057353 kubelet[1422]: I1213 14:28:35.057079 1422 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/567f1faf-10ca-49d2-91be-23e166326d68-host-proc-sys-kernel\") pod \"cilium-fksmj\" (UID: \"567f1faf-10ca-49d2-91be-23e166326d68\") " pod="kube-system/cilium-fksmj" Dec 13 14:28:35.057353 kubelet[1422]: I1213 14:28:35.057119 1422 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/567f1faf-10ca-49d2-91be-23e166326d68-bpf-maps\") pod \"cilium-fksmj\" (UID: \"567f1faf-10ca-49d2-91be-23e166326d68\") " pod="kube-system/cilium-fksmj" Dec 13 14:28:35.057353 kubelet[1422]: I1213 14:28:35.057139 1422 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/567f1faf-10ca-49d2-91be-23e166326d68-cilium-cgroup\") pod \"cilium-fksmj\" (UID: \"567f1faf-10ca-49d2-91be-23e166326d68\") " pod="kube-system/cilium-fksmj" Dec 13 14:28:35.057353 kubelet[1422]: I1213 14:28:35.057161 1422 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/567f1faf-10ca-49d2-91be-23e166326d68-etc-cni-netd\") pod \"cilium-fksmj\" (UID: \"567f1faf-10ca-49d2-91be-23e166326d68\") " pod="kube-system/cilium-fksmj" Dec 13 14:28:35.057353 kubelet[1422]: I1213 14:28:35.057181 1422 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/567f1faf-10ca-49d2-91be-23e166326d68-clustermesh-secrets\") pod \"cilium-fksmj\" (UID: \"567f1faf-10ca-49d2-91be-23e166326d68\") " pod="kube-system/cilium-fksmj" Dec 13 14:28:35.057353 kubelet[1422]: I1213 14:28:35.057200 1422 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/567f1faf-10ca-49d2-91be-23e166326d68-cilium-ipsec-secrets\") pod \"cilium-fksmj\" (UID: \"567f1faf-10ca-49d2-91be-23e166326d68\") " pod="kube-system/cilium-fksmj" Dec 13 14:28:35.057533 kubelet[1422]: I1213 14:28:35.057232 1422 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dlv9\" (UniqueName: 
\"kubernetes.io/projected/567f1faf-10ca-49d2-91be-23e166326d68-kube-api-access-7dlv9\") pod \"cilium-fksmj\" (UID: \"567f1faf-10ca-49d2-91be-23e166326d68\") " pod="kube-system/cilium-fksmj" Dec 13 14:28:35.218791 kubelet[1422]: E1213 14:28:35.218701 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:35.238017 kubelet[1422]: E1213 14:28:35.237959 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:28:35.238727 env[1216]: time="2024-12-13T14:28:35.238615677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fksmj,Uid:567f1faf-10ca-49d2-91be-23e166326d68,Namespace:kube-system,Attempt:0,}" Dec 13 14:28:35.254955 env[1216]: time="2024-12-13T14:28:35.254836447Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:28:35.254955 env[1216]: time="2024-12-13T14:28:35.254910105Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:28:35.254955 env[1216]: time="2024-12-13T14:28:35.254929802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:28:35.255245 env[1216]: time="2024-12-13T14:28:35.255168581Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fb93e253990ce6e0743cf8eac06456815dfbd5bcd6a31cc90c79fa76f4232c5f pid=3014 runtime=io.containerd.runc.v2 Dec 13 14:28:35.268425 systemd[1]: Started cri-containerd-fb93e253990ce6e0743cf8eac06456815dfbd5bcd6a31cc90c79fa76f4232c5f.scope. 
Dec 13 14:28:35.290808 env[1216]: time="2024-12-13T14:28:35.290758475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fksmj,Uid:567f1faf-10ca-49d2-91be-23e166326d68,Namespace:kube-system,Attempt:0,} returns sandbox id \"fb93e253990ce6e0743cf8eac06456815dfbd5bcd6a31cc90c79fa76f4232c5f\"" Dec 13 14:28:35.291769 kubelet[1422]: E1213 14:28:35.291734 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:28:35.293629 env[1216]: time="2024-12-13T14:28:35.293590224Z" level=info msg="CreateContainer within sandbox \"fb93e253990ce6e0743cf8eac06456815dfbd5bcd6a31cc90c79fa76f4232c5f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:28:35.310209 env[1216]: time="2024-12-13T14:28:35.310129974Z" level=info msg="CreateContainer within sandbox \"fb93e253990ce6e0743cf8eac06456815dfbd5bcd6a31cc90c79fa76f4232c5f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f9097767ea9046ff66ad9071d17044c9613ec90f91fde8257336a51b4a2f4ee2\"" Dec 13 14:28:35.311068 env[1216]: time="2024-12-13T14:28:35.311021939Z" level=info msg="StartContainer for \"f9097767ea9046ff66ad9071d17044c9613ec90f91fde8257336a51b4a2f4ee2\"" Dec 13 14:28:35.326972 systemd[1]: Started cri-containerd-f9097767ea9046ff66ad9071d17044c9613ec90f91fde8257336a51b4a2f4ee2.scope. Dec 13 14:28:35.351872 env[1216]: time="2024-12-13T14:28:35.351806030Z" level=info msg="StartContainer for \"f9097767ea9046ff66ad9071d17044c9613ec90f91fde8257336a51b4a2f4ee2\" returns successfully" Dec 13 14:28:35.364020 systemd[1]: cri-containerd-f9097767ea9046ff66ad9071d17044c9613ec90f91fde8257336a51b4a2f4ee2.scope: Deactivated successfully. 
Dec 13 14:28:35.396389 env[1216]: time="2024-12-13T14:28:35.396315219Z" level=info msg="shim disconnected" id=f9097767ea9046ff66ad9071d17044c9613ec90f91fde8257336a51b4a2f4ee2 Dec 13 14:28:35.396662 env[1216]: time="2024-12-13T14:28:35.396619761Z" level=warning msg="cleaning up after shim disconnected" id=f9097767ea9046ff66ad9071d17044c9613ec90f91fde8257336a51b4a2f4ee2 namespace=k8s.io Dec 13 14:28:35.396662 env[1216]: time="2024-12-13T14:28:35.396641212Z" level=info msg="cleaning up dead shim" Dec 13 14:28:35.404319 env[1216]: time="2024-12-13T14:28:35.404243712Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:28:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3100 runtime=io.containerd.runc.v2\n" Dec 13 14:28:35.796578 kubelet[1422]: E1213 14:28:35.796536 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:28:35.799196 env[1216]: time="2024-12-13T14:28:35.799135079Z" level=info msg="CreateContainer within sandbox \"fb93e253990ce6e0743cf8eac06456815dfbd5bcd6a31cc90c79fa76f4232c5f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 14:28:35.817547 env[1216]: time="2024-12-13T14:28:35.817483516Z" level=info msg="CreateContainer within sandbox \"fb93e253990ce6e0743cf8eac06456815dfbd5bcd6a31cc90c79fa76f4232c5f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3db175674fe98a05e9ad4c7fa552d90bbb79e00c48c017f15e7eda741cc8330d\"" Dec 13 14:28:35.818187 env[1216]: time="2024-12-13T14:28:35.818130040Z" level=info msg="StartContainer for \"3db175674fe98a05e9ad4c7fa552d90bbb79e00c48c017f15e7eda741cc8330d\"" Dec 13 14:28:35.836480 systemd[1]: Started cri-containerd-3db175674fe98a05e9ad4c7fa552d90bbb79e00c48c017f15e7eda741cc8330d.scope. 
Dec 13 14:28:35.862836 env[1216]: time="2024-12-13T14:28:35.862783901Z" level=info msg="StartContainer for \"3db175674fe98a05e9ad4c7fa552d90bbb79e00c48c017f15e7eda741cc8330d\" returns successfully" Dec 13 14:28:35.866620 systemd[1]: cri-containerd-3db175674fe98a05e9ad4c7fa552d90bbb79e00c48c017f15e7eda741cc8330d.scope: Deactivated successfully. Dec 13 14:28:35.912205 env[1216]: time="2024-12-13T14:28:35.912135465Z" level=info msg="shim disconnected" id=3db175674fe98a05e9ad4c7fa552d90bbb79e00c48c017f15e7eda741cc8330d Dec 13 14:28:35.912205 env[1216]: time="2024-12-13T14:28:35.912199405Z" level=warning msg="cleaning up after shim disconnected" id=3db175674fe98a05e9ad4c7fa552d90bbb79e00c48c017f15e7eda741cc8330d namespace=k8s.io Dec 13 14:28:35.912205 env[1216]: time="2024-12-13T14:28:35.912215756Z" level=info msg="cleaning up dead shim" Dec 13 14:28:35.919451 env[1216]: time="2024-12-13T14:28:35.919391505Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:28:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3162 runtime=io.containerd.runc.v2\n" Dec 13 14:28:36.171642 kubelet[1422]: I1213 14:28:36.171485 1422 setters.go:580] "Node became not ready" node="10.0.0.107" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T14:28:36Z","lastTransitionTime":"2024-12-13T14:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 13 14:28:36.219816 kubelet[1422]: E1213 14:28:36.219753 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:36.401533 env[1216]: time="2024-12-13T14:28:36.401462067Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}"
Dec 13 14:28:36.403378 env[1216]: time="2024-12-13T14:28:36.403332050Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:28:36.404936 env[1216]: time="2024-12-13T14:28:36.404905755Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:28:36.405339 env[1216]: time="2024-12-13T14:28:36.405311206Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Dec 13 14:28:36.407849 env[1216]: time="2024-12-13T14:28:36.407808528Z" level=info msg="CreateContainer within sandbox \"8a67907bfa3ef713d53fc54d019b8e9753839b8432e8eda02e7d6c6bf498beef\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Dec 13 14:28:36.419487 env[1216]: time="2024-12-13T14:28:36.419430577Z" level=info msg="CreateContainer within sandbox \"8a67907bfa3ef713d53fc54d019b8e9753839b8432e8eda02e7d6c6bf498beef\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"3b2f10f805d7a01ea9b5873d3286d4efa59cd3581c1961a57b304ac4257ded08\""
Dec 13 14:28:36.419952 env[1216]: time="2024-12-13T14:28:36.419919024Z" level=info msg="StartContainer for \"3b2f10f805d7a01ea9b5873d3286d4efa59cd3581c1961a57b304ac4257ded08\""
Dec 13 14:28:36.433392 systemd[1]: Started cri-containerd-3b2f10f805d7a01ea9b5873d3286d4efa59cd3581c1961a57b304ac4257ded08.scope.
Dec 13 14:28:36.458266 env[1216]: time="2024-12-13T14:28:36.458213241Z" level=info msg="StartContainer for \"3b2f10f805d7a01ea9b5873d3286d4efa59cd3581c1961a57b304ac4257ded08\" returns successfully"
Dec 13 14:28:36.662782 kubelet[1422]: I1213 14:28:36.662725 1422 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3" path="/var/lib/kubelet/pods/7c3c3677-ce71-4f16-8cb1-31a6b91ed4b3/volumes"
Dec 13 14:28:36.755925 systemd[1]: run-containerd-runc-k8s.io-3db175674fe98a05e9ad4c7fa552d90bbb79e00c48c017f15e7eda741cc8330d-runc.7MGlCt.mount: Deactivated successfully.
Dec 13 14:28:36.756032 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3db175674fe98a05e9ad4c7fa552d90bbb79e00c48c017f15e7eda741cc8330d-rootfs.mount: Deactivated successfully.
Dec 13 14:28:36.799601 kubelet[1422]: E1213 14:28:36.799567 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:28:36.801201 kubelet[1422]: E1213 14:28:36.801159 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:28:36.802925 env[1216]: time="2024-12-13T14:28:36.802883201Z" level=info msg="CreateContainer within sandbox \"fb93e253990ce6e0743cf8eac06456815dfbd5bcd6a31cc90c79fa76f4232c5f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 14:28:36.808411 kubelet[1422]: I1213 14:28:36.808343 1422 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-twk2c" podStartSLOduration=1.341043407 podStartE2EDuration="3.808327147s" podCreationTimestamp="2024-12-13 14:28:33 +0000 UTC" firstStartedPulling="2024-12-13 14:28:33.938992529 +0000 UTC m=+61.432075165" lastFinishedPulling="2024-12-13 14:28:36.406276269 +0000 UTC m=+63.899358905" observedRunningTime="2024-12-13 14:28:36.808251886 +0000 UTC m=+64.301334522" watchObservedRunningTime="2024-12-13 14:28:36.808327147 +0000 UTC m=+64.301409783"
Dec 13 14:28:36.903840 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3143865428.mount: Deactivated successfully.
Dec 13 14:28:36.906638 env[1216]: time="2024-12-13T14:28:36.906593316Z" level=info msg="CreateContainer within sandbox \"fb93e253990ce6e0743cf8eac06456815dfbd5bcd6a31cc90c79fa76f4232c5f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3887dfc5081d520e32b3db2665cef71365cd936699e7f82fc1f53203c1e41a4f\""
Dec 13 14:28:36.907184 env[1216]: time="2024-12-13T14:28:36.907152457Z" level=info msg="StartContainer for \"3887dfc5081d520e32b3db2665cef71365cd936699e7f82fc1f53203c1e41a4f\""
Dec 13 14:28:36.925101 systemd[1]: Started cri-containerd-3887dfc5081d520e32b3db2665cef71365cd936699e7f82fc1f53203c1e41a4f.scope.
Dec 13 14:28:36.955337 systemd[1]: cri-containerd-3887dfc5081d520e32b3db2665cef71365cd936699e7f82fc1f53203c1e41a4f.scope: Deactivated successfully.
Dec 13 14:28:37.001724 env[1216]: time="2024-12-13T14:28:37.001640531Z" level=info msg="StartContainer for \"3887dfc5081d520e32b3db2665cef71365cd936699e7f82fc1f53203c1e41a4f\" returns successfully"
Dec 13 14:28:37.024233 env[1216]: time="2024-12-13T14:28:37.024093629Z" level=info msg="shim disconnected" id=3887dfc5081d520e32b3db2665cef71365cd936699e7f82fc1f53203c1e41a4f
Dec 13 14:28:37.024233 env[1216]: time="2024-12-13T14:28:37.024162810Z" level=warning msg="cleaning up after shim disconnected" id=3887dfc5081d520e32b3db2665cef71365cd936699e7f82fc1f53203c1e41a4f namespace=k8s.io
Dec 13 14:28:37.024233 env[1216]: time="2024-12-13T14:28:37.024175012Z" level=info msg="cleaning up dead shim"
Dec 13 14:28:37.031256 env[1216]: time="2024-12-13T14:28:37.031213912Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:28:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3257 runtime=io.containerd.runc.v2\n"
Dec 13 14:28:37.220926 kubelet[1422]: E1213 14:28:37.220859 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:28:37.754462 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3887dfc5081d520e32b3db2665cef71365cd936699e7f82fc1f53203c1e41a4f-rootfs.mount: Deactivated successfully.
Dec 13 14:28:37.806558 kubelet[1422]: E1213 14:28:37.806516 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:28:37.806558 kubelet[1422]: E1213 14:28:37.806533 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:28:37.808565 env[1216]: time="2024-12-13T14:28:37.808514395Z" level=info msg="CreateContainer within sandbox \"fb93e253990ce6e0743cf8eac06456815dfbd5bcd6a31cc90c79fa76f4232c5f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 14:28:38.087955 env[1216]: time="2024-12-13T14:28:38.087848043Z" level=info msg="CreateContainer within sandbox \"fb93e253990ce6e0743cf8eac06456815dfbd5bcd6a31cc90c79fa76f4232c5f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"89df217e59c744e73cacbaaa4b773fed6b3cda13165d7d0cdd50b39aba70f879\""
Dec 13 14:28:38.088567 env[1216]: time="2024-12-13T14:28:38.088539481Z" level=info msg="StartContainer for \"89df217e59c744e73cacbaaa4b773fed6b3cda13165d7d0cdd50b39aba70f879\""
Dec 13 14:28:38.104070 systemd[1]: Started cri-containerd-89df217e59c744e73cacbaaa4b773fed6b3cda13165d7d0cdd50b39aba70f879.scope.
Dec 13 14:28:38.125814 systemd[1]: cri-containerd-89df217e59c744e73cacbaaa4b773fed6b3cda13165d7d0cdd50b39aba70f879.scope: Deactivated successfully.
Dec 13 14:28:38.127283 env[1216]: time="2024-12-13T14:28:38.127247451Z" level=info msg="StartContainer for \"89df217e59c744e73cacbaaa4b773fed6b3cda13165d7d0cdd50b39aba70f879\" returns successfully"
Dec 13 14:28:38.148080 env[1216]: time="2024-12-13T14:28:38.148027264Z" level=info msg="shim disconnected" id=89df217e59c744e73cacbaaa4b773fed6b3cda13165d7d0cdd50b39aba70f879
Dec 13 14:28:38.148080 env[1216]: time="2024-12-13T14:28:38.148081516Z" level=warning msg="cleaning up after shim disconnected" id=89df217e59c744e73cacbaaa4b773fed6b3cda13165d7d0cdd50b39aba70f879 namespace=k8s.io
Dec 13 14:28:38.148080 env[1216]: time="2024-12-13T14:28:38.148090022Z" level=info msg="cleaning up dead shim"
Dec 13 14:28:38.155744 env[1216]: time="2024-12-13T14:28:38.155688421Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:28:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3310 runtime=io.containerd.runc.v2\n"
Dec 13 14:28:38.221708 kubelet[1422]: E1213 14:28:38.221628 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:28:38.755189 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-89df217e59c744e73cacbaaa4b773fed6b3cda13165d7d0cdd50b39aba70f879-rootfs.mount: Deactivated successfully.
Dec 13 14:28:38.810301 kubelet[1422]: E1213 14:28:38.810258 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:28:38.812659 env[1216]: time="2024-12-13T14:28:38.812624690Z" level=info msg="CreateContainer within sandbox \"fb93e253990ce6e0743cf8eac06456815dfbd5bcd6a31cc90c79fa76f4232c5f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 14:28:38.831612 env[1216]: time="2024-12-13T14:28:38.831544940Z" level=info msg="CreateContainer within sandbox \"fb93e253990ce6e0743cf8eac06456815dfbd5bcd6a31cc90c79fa76f4232c5f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"01cced45139363499399e93c792f00aacf345987dd5456a4d0bd93c8d99a3b0c\""
Dec 13 14:28:38.832118 env[1216]: time="2024-12-13T14:28:38.832091556Z" level=info msg="StartContainer for \"01cced45139363499399e93c792f00aacf345987dd5456a4d0bd93c8d99a3b0c\""
Dec 13 14:28:38.853000 systemd[1]: Started cri-containerd-01cced45139363499399e93c792f00aacf345987dd5456a4d0bd93c8d99a3b0c.scope.
Dec 13 14:28:38.938015 env[1216]: time="2024-12-13T14:28:38.937944940Z" level=info msg="StartContainer for \"01cced45139363499399e93c792f00aacf345987dd5456a4d0bd93c8d99a3b0c\" returns successfully"
Dec 13 14:28:39.222530 kubelet[1422]: E1213 14:28:39.222454 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:28:39.413399 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Dec 13 14:28:39.814817 kubelet[1422]: E1213 14:28:39.814761 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:28:40.178072 systemd[1]: run-containerd-runc-k8s.io-01cced45139363499399e93c792f00aacf345987dd5456a4d0bd93c8d99a3b0c-runc.W1dwRY.mount: Deactivated successfully.
Dec 13 14:28:40.223496 kubelet[1422]: E1213 14:28:40.223458 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:28:41.223783 kubelet[1422]: E1213 14:28:41.223700 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:28:41.239106 kubelet[1422]: E1213 14:28:41.239067 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:28:42.224317 kubelet[1422]: E1213 14:28:42.224246 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:28:42.240889 systemd-networkd[1043]: lxc_health: Link UP
Dec 13 14:28:42.251448 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 14:28:42.251869 systemd-networkd[1043]: lxc_health: Gained carrier
Dec 13 14:28:42.318641 systemd[1]: run-containerd-runc-k8s.io-01cced45139363499399e93c792f00aacf345987dd5456a4d0bd93c8d99a3b0c-runc.UNrIen.mount: Deactivated successfully.
Dec 13 14:28:42.403917 kubelet[1422]: E1213 14:28:42.403867 1422 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 127.0.0.1:34054->127.0.0.1:44465: read tcp 127.0.0.1:34054->127.0.0.1:44465: read: connection reset by peer
Dec 13 14:28:43.224449 kubelet[1422]: E1213 14:28:43.224406 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:28:43.239925 kubelet[1422]: E1213 14:28:43.239877 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:28:43.256987 kubelet[1422]: I1213 14:28:43.256895 1422 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-fksmj" podStartSLOduration=9.256866214 podStartE2EDuration="9.256866214s" podCreationTimestamp="2024-12-13 14:28:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:28:39.996541562 +0000 UTC m=+67.489624228" watchObservedRunningTime="2024-12-13 14:28:43.256866214 +0000 UTC m=+70.749948850"
Dec 13 14:28:43.821533 kubelet[1422]: E1213 14:28:43.821478 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:28:43.823621 systemd-networkd[1043]: lxc_health: Gained IPv6LL
Dec 13 14:28:44.225632 kubelet[1422]: E1213 14:28:44.225548 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:28:44.488827 systemd[1]: run-containerd-runc-k8s.io-01cced45139363499399e93c792f00aacf345987dd5456a4d0bd93c8d99a3b0c-runc.kcSRPy.mount: Deactivated successfully.
Dec 13 14:28:44.823333 kubelet[1422]: E1213 14:28:44.823208 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:28:45.226177 kubelet[1422]: E1213 14:28:45.226093 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:28:46.227255 kubelet[1422]: E1213 14:28:46.227172 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:28:46.653190 systemd[1]: run-containerd-runc-k8s.io-01cced45139363499399e93c792f00aacf345987dd5456a4d0bd93c8d99a3b0c-runc.mM5DAJ.mount: Deactivated successfully.
Dec 13 14:28:47.228200 kubelet[1422]: E1213 14:28:47.228112 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:28:48.228761 kubelet[1422]: E1213 14:28:48.228696 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:28:49.229130 kubelet[1422]: E1213 14:28:49.229031 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:28:50.229738 kubelet[1422]: E1213 14:28:50.229672 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"