Dec 13 14:30:57.861085 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Dec 13 12:55:10 -00 2024
Dec 13 14:30:57.861115 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 14:30:57.861125 kernel: BIOS-provided physical RAM map:
Dec 13 14:30:57.861162 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 14:30:57.861169 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 14:30:57.861176 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 14:30:57.861185 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Dec 13 14:30:57.861193 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Dec 13 14:30:57.861202 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 13 14:30:57.861209 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Dec 13 14:30:57.861217 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 13 14:30:57.861224 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 14:30:57.861237 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Dec 13 14:30:57.861245 kernel: NX (Execute Disable) protection: active
Dec 13 14:30:57.861269 kernel: SMBIOS 2.8 present.
Dec 13 14:30:57.861278 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Dec 13 14:30:57.861285 kernel: Hypervisor detected: KVM
Dec 13 14:30:57.861293 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 14:30:57.861301 kernel: kvm-clock: cpu 0, msr 5019a001, primary cpu clock
Dec 13 14:30:57.861309 kernel: kvm-clock: using sched offset of 2459529566 cycles
Dec 13 14:30:57.861317 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 14:30:57.861325 kernel: tsc: Detected 2794.748 MHz processor
Dec 13 14:30:57.861334 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 14:30:57.861345 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 14:30:57.861353 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Dec 13 14:30:57.861361 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 14:30:57.861370 kernel: Using GB pages for direct mapping
Dec 13 14:30:57.861378 kernel: ACPI: Early table checksum verification disabled
Dec 13 14:30:57.861386 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Dec 13 14:30:57.861394 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:30:57.861402 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:30:57.861410 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:30:57.861420 kernel: ACPI: FACS 0x000000009CFE0000 000040
Dec 13 14:30:57.861428 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:30:57.861436 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:30:57.861445 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:30:57.861453 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:30:57.861461 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Dec 13 14:30:57.861469 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Dec 13 14:30:57.861477 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Dec 13 14:30:57.861490 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Dec 13 14:30:57.861499 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Dec 13 14:30:57.861508 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Dec 13 14:30:57.861516 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Dec 13 14:30:57.861525 kernel: No NUMA configuration found
Dec 13 14:30:57.861534 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Dec 13 14:30:57.861544 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Dec 13 14:30:57.861553 kernel: Zone ranges:
Dec 13 14:30:57.861561 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 14:30:57.861570 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Dec 13 14:30:57.861578 kernel: Normal empty
Dec 13 14:30:57.861587 kernel: Movable zone start for each node
Dec 13 14:30:57.861595 kernel: Early memory node ranges
Dec 13 14:30:57.861604 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 14:30:57.861613 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Dec 13 14:30:57.861621 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Dec 13 14:30:57.861632 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 14:30:57.861641 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 14:30:57.861649 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Dec 13 14:30:57.861658 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 14:30:57.861672 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 14:30:57.861681 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 14:30:57.861690 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 14:30:57.861698 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 14:30:57.861707 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 14:30:57.861718 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 14:30:57.861727 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 14:30:57.861738 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 14:30:57.861747 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 14:30:57.861756 kernel: TSC deadline timer available
Dec 13 14:30:57.861767 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Dec 13 14:30:57.861776 kernel: kvm-guest: KVM setup pv remote TLB flush
Dec 13 14:30:57.861793 kernel: kvm-guest: setup PV sched yield
Dec 13 14:30:57.861802 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Dec 13 14:30:57.861813 kernel: Booting paravirtualized kernel on KVM
Dec 13 14:30:57.861822 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 14:30:57.861831 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
Dec 13 14:30:57.861839 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288
Dec 13 14:30:57.861848 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152
Dec 13 14:30:57.861857 kernel: pcpu-alloc: [0] 0 1 2 3
Dec 13 14:30:57.861865 kernel: kvm-guest: setup async PF for cpu 0
Dec 13 14:30:57.861874 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0
Dec 13 14:30:57.861882 kernel: kvm-guest: PV spinlocks enabled
Dec 13 14:30:57.861893 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 14:30:57.861901 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Dec 13 14:30:57.861910 kernel: Policy zone: DMA32
Dec 13 14:30:57.861920 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 14:30:57.861929 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 14:30:57.861938 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 14:30:57.861947 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 14:30:57.861956 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 14:30:57.861967 kernel: Memory: 2436696K/2571752K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47472K init, 4112K bss, 134796K reserved, 0K cma-reserved)
Dec 13 14:30:57.861975 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 13 14:30:57.861984 kernel: ftrace: allocating 34549 entries in 135 pages
Dec 13 14:30:57.861993 kernel: ftrace: allocated 135 pages with 4 groups
Dec 13 14:30:57.862002 kernel: rcu: Hierarchical RCU implementation.
Dec 13 14:30:57.862011 kernel: rcu: RCU event tracing is enabled.
Dec 13 14:30:57.862020 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 13 14:30:57.862029 kernel: Rude variant of Tasks RCU enabled.
Dec 13 14:30:57.862038 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 14:30:57.862048 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 14:30:57.862057 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 13 14:30:57.862066 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Dec 13 14:30:57.862075 kernel: random: crng init done
Dec 13 14:30:57.862083 kernel: Console: colour VGA+ 80x25
Dec 13 14:30:57.862092 kernel: printk: console [ttyS0] enabled
Dec 13 14:30:57.862101 kernel: ACPI: Core revision 20210730
Dec 13 14:30:57.862109 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 13 14:30:57.862118 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 14:30:57.862128 kernel: x2apic enabled
Dec 13 14:30:57.862146 kernel: Switched APIC routing to physical x2apic.
Dec 13 14:30:57.862155 kernel: kvm-guest: setup PV IPIs
Dec 13 14:30:57.862163 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 13 14:30:57.862172 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 13 14:30:57.862181 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Dec 13 14:30:57.862190 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 13 14:30:57.862198 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 13 14:30:57.862207 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 13 14:30:57.862224 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 14:30:57.862233 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 14:30:57.862242 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 14:30:57.862252 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 14:30:57.862274 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 13 14:30:57.862283 kernel: RETBleed: Mitigation: untrained return thunk
Dec 13 14:30:57.862292 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 14:30:57.862301 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Dec 13 14:30:57.862311 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 14:30:57.862322 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 14:30:57.862331 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 14:30:57.862340 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 14:30:57.862350 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Dec 13 14:30:57.862359 kernel: Freeing SMP alternatives memory: 32K
Dec 13 14:30:57.862368 kernel: pid_max: default: 32768 minimum: 301
Dec 13 14:30:57.862377 kernel: LSM: Security Framework initializing
Dec 13 14:30:57.862386 kernel: SELinux: Initializing.
Dec 13 14:30:57.862396 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 14:30:57.862406 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 14:30:57.862415 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 13 14:30:57.862424 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 13 14:30:57.862433 kernel: ... version: 0
Dec 13 14:30:57.862442 kernel: ... bit width: 48
Dec 13 14:30:57.862451 kernel: ... generic registers: 6
Dec 13 14:30:57.862460 kernel: ... value mask: 0000ffffffffffff
Dec 13 14:30:57.862469 kernel: ... max period: 00007fffffffffff
Dec 13 14:30:57.862480 kernel: ... fixed-purpose events: 0
Dec 13 14:30:57.862489 kernel: ... event mask: 000000000000003f
Dec 13 14:30:57.862498 kernel: signal: max sigframe size: 1776
Dec 13 14:30:57.862507 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 14:30:57.862516 kernel: smp: Bringing up secondary CPUs ...
Dec 13 14:30:57.862525 kernel: x86: Booting SMP configuration:
Dec 13 14:30:57.862534 kernel: .... node #0, CPUs: #1
Dec 13 14:30:57.862543 kernel: kvm-clock: cpu 1, msr 5019a041, secondary cpu clock
Dec 13 14:30:57.862552 kernel: kvm-guest: setup async PF for cpu 1
Dec 13 14:30:57.862563 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0
Dec 13 14:30:57.862572 kernel: #2
Dec 13 14:30:57.862581 kernel: kvm-clock: cpu 2, msr 5019a081, secondary cpu clock
Dec 13 14:30:57.862590 kernel: kvm-guest: setup async PF for cpu 2
Dec 13 14:30:57.862600 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0
Dec 13 14:30:57.862608 kernel: #3
Dec 13 14:30:57.862617 kernel: kvm-clock: cpu 3, msr 5019a0c1, secondary cpu clock
Dec 13 14:30:57.862626 kernel: kvm-guest: setup async PF for cpu 3
Dec 13 14:30:57.862635 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0
Dec 13 14:30:57.862646 kernel: smp: Brought up 1 node, 4 CPUs
Dec 13 14:30:57.862655 kernel: smpboot: Max logical packages: 1
Dec 13 14:30:57.862665 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Dec 13 14:30:57.862674 kernel: devtmpfs: initialized
Dec 13 14:30:57.862684 kernel: x86/mm: Memory block size: 128MB
Dec 13 14:30:57.862695 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 14:30:57.862704 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 13 14:30:57.862713 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 14:30:57.862722 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 14:30:57.862731 kernel: audit: initializing netlink subsys (disabled)
Dec 13 14:30:57.862742 kernel: audit: type=2000 audit(1734100257.172:1): state=initialized audit_enabled=0 res=1
Dec 13 14:30:57.862750 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 14:30:57.862759 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 14:30:57.862768 kernel: cpuidle: using governor menu
Dec 13 14:30:57.862777 kernel: ACPI: bus type PCI registered
Dec 13 14:30:57.862786 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 14:30:57.862795 kernel: dca service started, version 1.12.1
Dec 13 14:30:57.862804 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Dec 13 14:30:57.862813 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820
Dec 13 14:30:57.862824 kernel: PCI: Using configuration type 1 for base access
Dec 13 14:30:57.862833 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 14:30:57.862842 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 14:30:57.862851 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 14:30:57.862860 kernel: ACPI: Added _OSI(Module Device)
Dec 13 14:30:57.862869 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 14:30:57.862884 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 14:30:57.862893 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 14:30:57.862902 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 14:30:57.862913 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 14:30:57.862922 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 14:30:57.862931 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 14:30:57.862940 kernel: ACPI: Interpreter enabled
Dec 13 14:30:57.862949 kernel: ACPI: PM: (supports S0 S3 S5)
Dec 13 14:30:57.862957 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 14:30:57.862966 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 14:30:57.862976 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 13 14:30:57.862985 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 14:30:57.863141 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 14:30:57.863242 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Dec 13 14:30:57.863359 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Dec 13 14:30:57.863373 kernel: PCI host bridge to bus 0000:00
Dec 13 14:30:57.863470 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 14:30:57.863555 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 14:30:57.863641 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 14:30:57.863723 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Dec 13 14:30:57.863853 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 13 14:30:57.863937 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Dec 13 14:30:57.864021 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 14:30:57.864128 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Dec 13 14:30:57.864243 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Dec 13 14:30:57.864406 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Dec 13 14:30:57.864507 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Dec 13 14:30:57.864602 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Dec 13 14:30:57.864698 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 14:30:57.864815 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Dec 13 14:30:57.864921 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Dec 13 14:30:57.865022 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Dec 13 14:30:57.865116 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Dec 13 14:30:57.865229 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Dec 13 14:30:57.865342 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Dec 13 14:30:57.865438 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Dec 13 14:30:57.865531 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Dec 13 14:30:57.865641 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Dec 13 14:30:57.865750 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Dec 13 14:30:57.865849 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Dec 13 14:30:57.865949 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Dec 13 14:30:57.866054 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Dec 13 14:30:57.866164 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Dec 13 14:30:57.866275 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 13 14:30:57.866380 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Dec 13 14:30:57.866531 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Dec 13 14:30:57.866666 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Dec 13 14:30:57.866777 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Dec 13 14:30:57.866881 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Dec 13 14:30:57.866895 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 14:30:57.866904 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 14:30:57.866914 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 14:30:57.866926 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 14:30:57.866935 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 13 14:30:57.866944 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 13 14:30:57.866953 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 13 14:30:57.866962 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 13 14:30:57.866971 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 13 14:30:57.866980 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 13 14:30:57.866989 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 13 14:30:57.866998 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 13 14:30:57.867009 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 13 14:30:57.867018 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 13 14:30:57.867027 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 13 14:30:57.867036 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 13 14:30:57.867045 kernel: iommu: Default domain type: Translated
Dec 13 14:30:57.867092 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 14:30:57.867207 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 13 14:30:57.867318 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 14:30:57.867492 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 13 14:30:57.867511 kernel: vgaarb: loaded
Dec 13 14:30:57.867520 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 14:30:57.867529 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 13 14:30:57.867539 kernel: PTP clock support registered
Dec 13 14:30:57.867548 kernel: PCI: Using ACPI for IRQ routing
Dec 13 14:30:57.867557 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 14:30:57.867566 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 14:30:57.867575 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Dec 13 14:30:57.867584 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 13 14:30:57.867595 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 13 14:30:57.867604 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 14:30:57.867621 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 14:30:57.867630 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 14:30:57.867648 kernel: pnp: PnP ACPI init
Dec 13 14:30:57.867778 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 13 14:30:57.867806 kernel: pnp: PnP ACPI: found 6 devices
Dec 13 14:30:57.867816 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 14:30:57.867828 kernel: NET: Registered PF_INET protocol family
Dec 13 14:30:57.867837 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 14:30:57.867858 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 14:30:57.867867 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 14:30:57.867877 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 14:30:57.867886 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Dec 13 14:30:57.867895 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 14:30:57.867916 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 14:30:57.867926 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 14:30:57.867937 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 14:30:57.867946 kernel: NET: Registered PF_XDP protocol family
Dec 13 14:30:57.868068 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 14:30:57.868194 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 14:30:57.868339 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 14:30:57.868446 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Dec 13 14:30:57.868554 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 13 14:30:57.868674 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Dec 13 14:30:57.868692 kernel: PCI: CLS 0 bytes, default 64
Dec 13 14:30:57.868701 kernel: Initialise system trusted keyrings
Dec 13 14:30:57.868710 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 14:30:57.868731 kernel: Key type asymmetric registered
Dec 13 14:30:57.868742 kernel: Asymmetric key parser 'x509' registered
Dec 13 14:30:57.868752 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 13 14:30:57.868762 kernel: io scheduler mq-deadline registered
Dec 13 14:30:57.868785 kernel: io scheduler kyber registered
Dec 13 14:30:57.868797 kernel: io scheduler bfq registered
Dec 13 14:30:57.868808 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 14:30:57.868818 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 13 14:30:57.868828 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Dec 13 14:30:57.868837 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Dec 13 14:30:57.868864 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 14:30:57.868874 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 14:30:57.868884 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 14:30:57.868893 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 14:30:57.868914 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 14:30:57.868926 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 14:30:57.869082 kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 13 14:30:57.869213 kernel: rtc_cmos 00:04: registered as rtc0
Dec 13 14:30:57.869350 kernel: rtc_cmos 00:04: setting system clock to 2024-12-13T14:30:57 UTC (1734100257)
Dec 13 14:30:57.874401 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Dec 13 14:30:57.874435 kernel: NET: Registered PF_INET6 protocol family
Dec 13 14:30:57.874444 kernel: Segment Routing with IPv6
Dec 13 14:30:57.874452 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 14:30:57.874463 kernel: NET: Registered PF_PACKET protocol family
Dec 13 14:30:57.874471 kernel: Key type dns_resolver registered
Dec 13 14:30:57.874479 kernel: IPI shorthand broadcast: enabled
Dec 13 14:30:57.874486 kernel: sched_clock: Marking stable (441095571, 104496479)->(561528897, -15936847)
Dec 13 14:30:57.874494 kernel: registered taskstats version 1
Dec 13 14:30:57.874501 kernel: Loading compiled-in X.509 certificates
Dec 13 14:30:57.874509 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: e1d88c9e01f5bb2adeb5b99325e46e5ca8dff115'
Dec 13 14:30:57.874517 kernel: Key type .fscrypt registered
Dec 13 14:30:57.874523 kernel: Key type fscrypt-provisioning registered
Dec 13 14:30:57.874532 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 14:30:57.874540 kernel: ima: Allocated hash algorithm: sha1
Dec 13 14:30:57.874547 kernel: ima: No architecture policies found
Dec 13 14:30:57.874555 kernel: clk: Disabling unused clocks
Dec 13 14:30:57.874562 kernel: Freeing unused kernel image (initmem) memory: 47472K
Dec 13 14:30:57.874569 kernel: Write protecting the kernel read-only data: 28672k
Dec 13 14:30:57.874577 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Dec 13 14:30:57.874584 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K
Dec 13 14:30:57.874592 kernel: Run /init as init process
Dec 13 14:30:57.874603 kernel: with arguments:
Dec 13 14:30:57.874619 kernel: /init
Dec 13 14:30:57.874628 kernel: with environment:
Dec 13 14:30:57.874637 kernel: HOME=/
Dec 13 14:30:57.874646 kernel: TERM=linux
Dec 13 14:30:57.874655 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 14:30:57.874668 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 14:30:57.874682 systemd[1]: Detected virtualization kvm.
Dec 13 14:30:57.874696 systemd[1]: Detected architecture x86-64.
Dec 13 14:30:57.874704 systemd[1]: Running in initrd.
Dec 13 14:30:57.874712 systemd[1]: No hostname configured, using default hostname.
Dec 13 14:30:57.874721 systemd[1]: Hostname set to .
Dec 13 14:30:57.874730 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 14:30:57.874740 systemd[1]: Queued start job for default target initrd.target.
Dec 13 14:30:57.874750 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 14:30:57.874759 systemd[1]: Reached target cryptsetup.target.
Dec 13 14:30:57.874771 systemd[1]: Reached target paths.target.
Dec 13 14:30:57.874790 systemd[1]: Reached target slices.target.
Dec 13 14:30:57.874800 systemd[1]: Reached target swap.target.
Dec 13 14:30:57.874810 systemd[1]: Reached target timers.target.
Dec 13 14:30:57.874820 systemd[1]: Listening on iscsid.socket.
Dec 13 14:30:57.874831 systemd[1]: Listening on iscsiuio.socket.
Dec 13 14:30:57.874841 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 14:30:57.874851 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 14:30:57.874861 systemd[1]: Listening on systemd-journald.socket.
Dec 13 14:30:57.874870 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 14:30:57.874880 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 14:30:57.874890 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 14:30:57.874900 systemd[1]: Reached target sockets.target.
Dec 13 14:30:57.874910 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 14:30:57.874921 systemd[1]: Finished network-cleanup.service.
Dec 13 14:30:57.874931 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 14:30:57.874940 systemd[1]: Starting systemd-journald.service...
Dec 13 14:30:57.874948 systemd[1]: Starting systemd-modules-load.service...
Dec 13 14:30:57.874956 systemd[1]: Starting systemd-resolved.service...
Dec 13 14:30:57.874964 systemd[1]: Starting systemd-vconsole-setup.service...
Dec 13 14:30:57.874972 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 14:30:57.874980 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 14:30:57.874989 kernel: audit: type=1130 audit(1734100257.866:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:57.874998 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 14:30:57.875011 systemd-journald[197]: Journal started
Dec 13 14:30:57.875059 systemd-journald[197]: Runtime Journal (/run/log/journal/056d794364d14707beb5296ef4c8f3ea) is 6.0M, max 48.5M, 42.5M free.
Dec 13 14:30:57.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:57.869532 systemd-modules-load[198]: Inserted module 'overlay'
Dec 13 14:30:57.878018 systemd-resolved[199]: Positive Trust Anchors:
Dec 13 14:30:57.878027 systemd-resolved[199]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 14:30:57.908562 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 14:30:57.908581 systemd[1]: Started systemd-journald.service.
Dec 13 14:30:57.878054 systemd-resolved[199]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 14:30:57.918389 kernel: audit: type=1130 audit(1734100257.909:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:57.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:57.880531 systemd-resolved[199]: Defaulting to hostname 'linux'.
Dec 13 14:30:57.922747 kernel: audit: type=1130 audit(1734100257.918:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:57.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:57.910040 systemd[1]: Started systemd-resolved.service.
Dec 13 14:30:57.927780 kernel: Bridge firewalling registered
Dec 13 14:30:57.927798 kernel: audit: type=1130 audit(1734100257.923:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:57.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:57.918776 systemd[1]: Finished systemd-vconsole-setup.service.
Dec 13 14:30:57.932398 kernel: audit: type=1130 audit(1734100257.927:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:57.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:57.924052 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 14:30:57.927767 systemd-modules-load[198]: Inserted module 'br_netfilter'
Dec 13 14:30:57.928323 systemd[1]: Reached target nss-lookup.target.
Dec 13 14:30:57.934027 systemd[1]: Starting dracut-cmdline-ask.service...
Dec 13 14:30:57.947958 systemd[1]: Finished dracut-cmdline-ask.service.
Dec 13 14:30:57.949626 systemd[1]: Starting dracut-cmdline.service...
Dec 13 14:30:57.953895 kernel: audit: type=1130 audit(1734100257.947:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:57.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:57.955272 kernel: SCSI subsystem initialized Dec 13 14:30:57.961463 dracut-cmdline[216]: dracut-dracut-053 Dec 13 14:30:57.963232 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e Dec 13 14:30:57.970681 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 14:30:57.970703 kernel: device-mapper: uevent: version 1.0.3 Dec 13 14:30:57.971933 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Dec 13 14:30:57.974598 systemd-modules-load[198]: Inserted module 'dm_multipath' Dec 13 14:30:57.975171 systemd[1]: Finished systemd-modules-load.service. Dec 13 14:30:57.980274 kernel: audit: type=1130 audit(1734100257.975:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:57.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:30:57.979193 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:30:57.986937 systemd[1]: Finished systemd-sysctl.service. Dec 13 14:30:57.991219 kernel: audit: type=1130 audit(1734100257.986:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:57.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:58.027281 kernel: Loading iSCSI transport class v2.0-870. Dec 13 14:30:58.044280 kernel: iscsi: registered transport (tcp) Dec 13 14:30:58.065290 kernel: iscsi: registered transport (qla4xxx) Dec 13 14:30:58.065311 kernel: QLogic iSCSI HBA Driver Dec 13 14:30:58.091272 systemd[1]: Finished dracut-cmdline.service. Dec 13 14:30:58.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:58.093722 systemd[1]: Starting dracut-pre-udev.service... Dec 13 14:30:58.096280 kernel: audit: type=1130 audit(1734100258.092:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:30:58.138279 kernel: raid6: avx2x4 gen() 30430 MB/s Dec 13 14:30:58.155276 kernel: raid6: avx2x4 xor() 8220 MB/s Dec 13 14:30:58.172276 kernel: raid6: avx2x2 gen() 32460 MB/s Dec 13 14:30:58.189274 kernel: raid6: avx2x2 xor() 18976 MB/s Dec 13 14:30:58.206273 kernel: raid6: avx2x1 gen() 26375 MB/s Dec 13 14:30:58.223280 kernel: raid6: avx2x1 xor() 15069 MB/s Dec 13 14:30:58.240280 kernel: raid6: sse2x4 gen() 14500 MB/s Dec 13 14:30:58.257278 kernel: raid6: sse2x4 xor() 7425 MB/s Dec 13 14:30:58.274271 kernel: raid6: sse2x2 gen() 16123 MB/s Dec 13 14:30:58.291272 kernel: raid6: sse2x2 xor() 9670 MB/s Dec 13 14:30:58.308277 kernel: raid6: sse2x1 gen() 12224 MB/s Dec 13 14:30:58.325701 kernel: raid6: sse2x1 xor() 7717 MB/s Dec 13 14:30:58.325721 kernel: raid6: using algorithm avx2x2 gen() 32460 MB/s Dec 13 14:30:58.325730 kernel: raid6: .... xor() 18976 MB/s, rmw enabled Dec 13 14:30:58.326437 kernel: raid6: using avx2x2 recovery algorithm Dec 13 14:30:58.338273 kernel: xor: automatically using best checksumming function avx Dec 13 14:30:58.428293 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Dec 13 14:30:58.436087 systemd[1]: Finished dracut-pre-udev.service. Dec 13 14:30:58.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:58.437000 audit: BPF prog-id=7 op=LOAD Dec 13 14:30:58.437000 audit: BPF prog-id=8 op=LOAD Dec 13 14:30:58.438500 systemd[1]: Starting systemd-udevd.service... Dec 13 14:30:58.450618 systemd-udevd[399]: Using default interface naming scheme 'v252'. Dec 13 14:30:58.454617 systemd[1]: Started systemd-udevd.service. Dec 13 14:30:58.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:30:58.455642 systemd[1]: Starting dracut-pre-trigger.service... Dec 13 14:30:58.465040 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation Dec 13 14:30:58.490690 systemd[1]: Finished dracut-pre-trigger.service. Dec 13 14:30:58.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:58.492507 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 14:30:58.529399 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 14:30:58.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:58.559696 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Dec 13 14:30:58.604171 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 14:30:58.604192 kernel: libata version 3.00 loaded. Dec 13 14:30:58.604209 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 14:30:58.604223 kernel: GPT:9289727 != 19775487 Dec 13 14:30:58.604244 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 14:30:58.604315 kernel: GPT:9289727 != 19775487 Dec 13 14:30:58.604330 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 14:30:58.604343 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 14:30:58.604357 kernel: AVX2 version of gcm_enc/dec engaged. 
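[Editor's note] The "GPT:9289727 != 19775487" warnings above are the classic sign of a disk image that was grown after partitioning: the primary GPT header still records the backup header at the old last LBA, while the virtio disk now ends at sector 19775487 (19775488 512-byte blocks per the virtio_blk line). A minimal sketch of the arithmetic the kernel is doing, using the field offsets from the UEFI GPT header layout (only the fields involved in this warning are filled in):

```python
import struct
import zlib

def primary_gpt_header(disk_sectors: int) -> bytes:
    """Build a minimal primary GPT header (UEFI layout); only the
    fields involved in the kernel's warning are populated."""
    hdr = bytearray(92)
    hdr[0:8] = b"EFI PART"                              # signature
    struct.pack_into("<I", hdr, 8, 0x00010000)          # revision 1.0
    struct.pack_into("<I", hdr, 12, 92)                 # header size
    struct.pack_into("<Q", hdr, 24, 1)                  # this header lives at LBA 1
    struct.pack_into("<Q", hdr, 32, disk_sectors - 1)   # backup header at last LBA
    # header CRC32, computed while the CRC field itself is still zero
    struct.pack_into("<I", hdr, 16, zlib.crc32(bytes(hdr)))
    return bytes(hdr)

# The image was partitioned at 9289728 sectors, then the virtio disk
# was resized to 19775488 sectors (10.1 GB, per the log above):
hdr = primary_gpt_header(9289728)
recorded_alt = struct.unpack_from("<Q", hdr, 32)[0]
actual_last = 19775488 - 1
print(recorded_alt, "!=", actual_last)  # 9289727 != 19775487
```

As the kernel message suggests, GNU Parted (or `sgdisk -e`, which moves the backup structures to the end of the disk) is the usual way to repair this; here the initramfs `disk-uuid.service` takes care of it a bit further down in the log.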
Dec 13 14:30:58.604370 kernel: AES CTR mode by8 optimization enabled Dec 13 14:30:58.604383 kernel: ahci 0000:00:1f.2: version 3.0 Dec 13 14:30:58.628792 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 13 14:30:58.628808 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Dec 13 14:30:58.628903 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 13 14:30:58.628979 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (453) Dec 13 14:30:58.628989 kernel: scsi host0: ahci Dec 13 14:30:58.629076 kernel: scsi host1: ahci Dec 13 14:30:58.629173 kernel: scsi host2: ahci Dec 13 14:30:58.629276 kernel: scsi host3: ahci Dec 13 14:30:58.629364 kernel: scsi host4: ahci Dec 13 14:30:58.629461 kernel: scsi host5: ahci Dec 13 14:30:58.629544 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Dec 13 14:30:58.629554 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Dec 13 14:30:58.629563 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Dec 13 14:30:58.629572 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Dec 13 14:30:58.629581 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Dec 13 14:30:58.629590 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Dec 13 14:30:58.625783 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Dec 13 14:30:58.661965 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Dec 13 14:30:58.681510 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Dec 13 14:30:58.684653 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Dec 13 14:30:58.692617 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 14:30:58.695748 systemd[1]: Starting disk-uuid.service... Dec 13 14:30:58.761380 disk-uuid[530]: Primary Header is updated. 
Dec 13 14:30:58.761380 disk-uuid[530]: Secondary Entries is updated. Dec 13 14:30:58.761380 disk-uuid[530]: Secondary Header is updated. Dec 13 14:30:58.765600 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 14:30:58.767276 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 14:30:58.939718 kernel: ata1: SATA link down (SStatus 0 SControl 300) Dec 13 14:30:58.939787 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Dec 13 14:30:58.939797 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 13 14:30:58.941281 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 13 14:30:58.942290 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 13 14:30:58.943293 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Dec 13 14:30:58.944281 kernel: ata3.00: applying bridge limits Dec 13 14:30:58.944296 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 13 14:30:58.945284 kernel: ata3.00: configured for UDMA/100 Dec 13 14:30:58.946283 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Dec 13 14:30:58.978310 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Dec 13 14:30:58.996124 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 13 14:30:58.996142 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Dec 13 14:30:59.768282 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 14:30:59.768580 disk-uuid[531]: The operation has completed successfully. Dec 13 14:30:59.783784 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 14:30:59.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:59.784000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:59.783861 systemd[1]: Finished disk-uuid.service. 
Dec 13 14:30:59.795724 systemd[1]: Starting verity-setup.service... Dec 13 14:30:59.808293 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Dec 13 14:30:59.827004 systemd[1]: Found device dev-mapper-usr.device. Dec 13 14:30:59.828603 systemd[1]: Mounting sysusr-usr.mount... Dec 13 14:30:59.830978 systemd[1]: Finished verity-setup.service. Dec 13 14:30:59.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:59.888287 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 14:30:59.888706 systemd[1]: Mounted sysusr-usr.mount. Dec 13 14:30:59.889625 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 14:30:59.890283 systemd[1]: Starting ignition-setup.service... Dec 13 14:30:59.892977 systemd[1]: Starting parse-ip-for-networkd.service... Dec 13 14:30:59.900165 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 14:30:59.900193 kernel: BTRFS info (device vda6): using free space tree Dec 13 14:30:59.900203 kernel: BTRFS info (device vda6): has skinny extents Dec 13 14:30:59.909173 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 14:30:59.917861 systemd[1]: Finished ignition-setup.service. Dec 13 14:30:59.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:59.919743 systemd[1]: Starting ignition-fetch-offline.service... 
Dec 13 14:30:59.955760 ignition[647]: Ignition 2.14.0 Dec 13 14:30:59.955771 ignition[647]: Stage: fetch-offline Dec 13 14:30:59.957000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:59.958000 audit: BPF prog-id=9 op=LOAD Dec 13 14:30:59.956512 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 14:30:59.955860 ignition[647]: no configs at "/usr/lib/ignition/base.d" Dec 13 14:30:59.959714 systemd[1]: Starting systemd-networkd.service... Dec 13 14:30:59.955871 ignition[647]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 14:30:59.955980 ignition[647]: parsed url from cmdline: "" Dec 13 14:30:59.955984 ignition[647]: no config URL provided Dec 13 14:30:59.955990 ignition[647]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 14:30:59.955998 ignition[647]: no config at "/usr/lib/ignition/user.ign" Dec 13 14:30:59.956018 ignition[647]: op(1): [started] loading QEMU firmware config module Dec 13 14:30:59.956028 ignition[647]: op(1): executing: "modprobe" "qemu_fw_cfg" Dec 13 14:30:59.959662 ignition[647]: op(1): [finished] loading QEMU firmware config module Dec 13 14:30:59.962602 ignition[647]: parsing config with SHA512: 5abba1575aa52d56ba60873e2533ab609a3540a77cbf19723f5cfcb50c48d378da98e865debd068e872ce2df2b6e046d962edf47d0124ede6bc90dcbb82a9e5c Dec 13 14:30:59.969289 unknown[647]: fetched base config from "system" Dec 13 14:30:59.970067 ignition[647]: fetch-offline: fetch-offline passed Dec 13 14:30:59.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:30:59.969304 unknown[647]: fetched user config from "qemu" Dec 13 14:30:59.970156 ignition[647]: Ignition finished successfully Dec 13 14:30:59.971206 systemd[1]: Finished ignition-fetch-offline.service. Dec 13 14:30:59.986643 systemd-networkd[720]: lo: Link UP Dec 13 14:30:59.986652 systemd-networkd[720]: lo: Gained carrier Dec 13 14:30:59.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:59.987035 systemd-networkd[720]: Enumeration completed Dec 13 14:30:59.987155 systemd[1]: Started systemd-networkd.service. Dec 13 14:30:59.987293 systemd-networkd[720]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 14:30:59.988148 systemd-networkd[720]: eth0: Link UP Dec 13 14:30:59.988153 systemd-networkd[720]: eth0: Gained carrier Dec 13 14:30:59.988875 systemd[1]: Reached target network.target. Dec 13 14:30:59.990540 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 13 14:30:59.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:59.999743 ignition[722]: Ignition 2.14.0 Dec 13 14:30:59.991158 systemd[1]: Starting ignition-kargs.service... Dec 13 14:30:59.999749 ignition[722]: Stage: kargs Dec 13 14:30:59.992615 systemd[1]: Starting iscsiuio.service... Dec 13 14:30:59.999830 ignition[722]: no configs at "/usr/lib/ignition/base.d" Dec 13 14:30:59.996586 systemd[1]: Started iscsiuio.service. Dec 13 14:30:59.999838 ignition[722]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 14:30:59.998884 systemd[1]: Starting iscsid.service... 
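[Editor's note] The "parsing config with SHA512: 5abba…" line above is Ignition logging the digest of the fully merged config before it applies it; that value is simply the SHA512 of the rendered config bytes. A sketch of reproducing such a digest for a local config (the config body below is illustrative, not the one from this boot):

```python
import hashlib

def config_digest(config_bytes: bytes) -> str:
    """SHA512 hex digest of a rendered Ignition config, matching the
    'parsing config with SHA512: ...' log line."""
    return hashlib.sha512(config_bytes).hexdigest()

# Illustrative config bytes; a real run would hash the merged config
# that Ignition actually rendered for this machine.
cfg = b'{"ignition": {}}'
print(config_digest(cfg))
```

Comparing this digest against the logged one is a quick way to confirm which config a given boot actually consumed.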
Dec 13 14:31:00.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:00.000564 ignition[722]: kargs: kargs passed Dec 13 14:31:00.001371 systemd-networkd[720]: eth0: DHCPv4 address 10.0.0.142/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 14:31:00.000599 ignition[722]: Ignition finished successfully Dec 13 14:31:00.005131 systemd[1]: Finished ignition-kargs.service. Dec 13 14:31:00.007375 systemd[1]: Starting ignition-disks.service... Dec 13 14:31:00.012884 ignition[733]: Ignition 2.14.0 Dec 13 14:31:00.012892 ignition[733]: Stage: disks Dec 13 14:31:00.012966 ignition[733]: no configs at "/usr/lib/ignition/base.d" Dec 13 14:31:00.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:00.014196 systemd[1]: Finished ignition-disks.service. Dec 13 14:31:00.012973 ignition[733]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 14:31:00.014657 systemd[1]: Reached target initrd-root-device.target. Dec 13 14:31:00.013687 ignition[733]: disks: disks passed Dec 13 14:31:00.016119 systemd[1]: Reached target local-fs-pre.target. Dec 13 14:31:00.013714 ignition[733]: Ignition finished successfully Dec 13 14:31:00.017982 systemd[1]: Reached target local-fs.target. Dec 13 14:31:00.018697 systemd[1]: Reached target sysinit.target. Dec 13 14:31:00.020811 systemd[1]: Reached target basic.target. Dec 13 14:31:00.026230 iscsid[732]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 14:31:00.026230 iscsid[732]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. 
If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Dec 13 14:31:00.026230 iscsid[732]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Dec 13 14:31:00.026230 iscsid[732]: If using hardware iscsi like qla4xxx this message can be ignored. Dec 13 14:31:00.026230 iscsid[732]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 14:31:00.036108 iscsid[732]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 14:31:00.038232 systemd[1]: Started iscsid.service. Dec 13 14:31:00.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:00.039326 systemd[1]: Starting dracut-initqueue.service... Dec 13 14:31:00.049571 systemd[1]: Finished dracut-initqueue.service. Dec 13 14:31:00.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:00.050134 systemd[1]: Reached target remote-fs-pre.target. Dec 13 14:31:00.051541 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 14:31:00.051857 systemd[1]: Reached target remote-fs.target. Dec 13 14:31:00.055285 systemd[1]: Starting dracut-pre-mount.service... Dec 13 14:31:00.061969 systemd[1]: Finished dracut-pre-mount.service. Dec 13 14:31:00.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:00.064338 systemd[1]: Starting systemd-fsck-root.service...
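[Editor's note] The iscsid warnings above are harmless here (no software iSCSI targets are used), and the message itself spells out the fix: a one-line /etc/iscsi/initiatorname.iscsi. A hedged sketch of writing such a file, using a temp directory instead of /etc/iscsi and a made-up IQN (date and domain are illustrative):

```python
import pathlib
import tempfile

def write_initiator_name(directory: pathlib.Path, iqn: str) -> pathlib.Path:
    """Write the single InitiatorName=<iqn> line that iscsid expects."""
    path = directory / "initiatorname.iscsi"
    path.write_text(f"InitiatorName={iqn}\n")
    return path

# Illustrative IQN in the iqn.yyyy-mm.<reversed domain>[:identifier]
# format; on a real host the file belongs at /etc/iscsi/ (root required).
tmp = pathlib.Path(tempfile.mkdtemp())
p = write_initiator_name(tmp, "iqn.2024-12.com.example:node1")
print(p.read_text().strip())
```

On a real deployment the IQN is usually generated once (e.g. by `iscsi-iname`) and kept stable, since targets key their ACLs on it.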
Dec 13 14:31:00.078531 systemd-fsck[754]: ROOT: clean, 621/553520 files, 56021/553472 blocks Dec 13 14:31:00.096245 systemd[1]: Finished systemd-fsck-root.service. Dec 13 14:31:00.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:00.099282 systemd[1]: Mounting sysroot.mount... Dec 13 14:31:00.106282 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 14:31:00.107023 systemd[1]: Mounted sysroot.mount. Dec 13 14:31:00.108513 systemd[1]: Reached target initrd-root-fs.target. Dec 13 14:31:00.111883 systemd[1]: Mounting sysroot-usr.mount... Dec 13 14:31:00.113875 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Dec 13 14:31:00.113924 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 14:31:00.115538 systemd[1]: Reached target ignition-diskful.target. Dec 13 14:31:00.119775 systemd[1]: Mounted sysroot-usr.mount. Dec 13 14:31:00.121927 systemd[1]: Starting initrd-setup-root.service... Dec 13 14:31:00.126737 initrd-setup-root[764]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 14:31:00.132162 initrd-setup-root[772]: cut: /sysroot/etc/group: No such file or directory Dec 13 14:31:00.135478 initrd-setup-root[780]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 14:31:00.139314 initrd-setup-root[788]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 14:31:00.165019 systemd[1]: Finished initrd-setup-root.service. Dec 13 14:31:00.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:31:00.167375 systemd[1]: Starting ignition-mount.service... Dec 13 14:31:00.169383 systemd[1]: Starting sysroot-boot.service... Dec 13 14:31:00.172226 bash[805]: umount: /sysroot/usr/share/oem: not mounted. Dec 13 14:31:00.181156 ignition[806]: INFO : Ignition 2.14.0 Dec 13 14:31:00.181156 ignition[806]: INFO : Stage: mount Dec 13 14:31:00.182892 ignition[806]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 14:31:00.182892 ignition[806]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 14:31:00.182892 ignition[806]: INFO : mount: mount passed Dec 13 14:31:00.182892 ignition[806]: INFO : Ignition finished successfully Dec 13 14:31:00.187464 systemd[1]: Finished ignition-mount.service. Dec 13 14:31:00.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:00.189844 systemd[1]: Finished sysroot-boot.service. Dec 13 14:31:00.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:00.837445 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 14:31:00.845687 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (815) Dec 13 14:31:00.845713 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 14:31:00.845723 kernel: BTRFS info (device vda6): using free space tree Dec 13 14:31:00.846508 kernel: BTRFS info (device vda6): has skinny extents Dec 13 14:31:00.850368 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 14:31:00.852566 systemd[1]: Starting ignition-files.service... 
Dec 13 14:31:00.865117 ignition[835]: INFO : Ignition 2.14.0 Dec 13 14:31:00.865117 ignition[835]: INFO : Stage: files Dec 13 14:31:00.866963 ignition[835]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 14:31:00.866963 ignition[835]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 14:31:00.866963 ignition[835]: DEBUG : files: compiled without relabeling support, skipping Dec 13 14:31:00.870567 ignition[835]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 14:31:00.872028 ignition[835]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 14:31:00.873840 ignition[835]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 14:31:00.875231 ignition[835]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 14:31:00.876973 unknown[835]: wrote ssh authorized keys file for user: core Dec 13 14:31:00.878077 ignition[835]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 14:31:00.879642 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 14:31:00.881379 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 14:31:00.883049 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 13 14:31:00.884848 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 14:31:00.886629 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 14:31:00.888448 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 
14:31:00.890521 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 14:31:00.890521 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 14:31:00.890521 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 14:31:00.890521 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Dec 13 14:31:01.244298 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Dec 13 14:31:01.651349 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 14:31:01.651349 ignition[835]: INFO : files: op(8): [started] processing unit "containerd.service" Dec 13 14:31:01.655666 ignition[835]: INFO : files: op(8): op(9): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 14:31:01.655666 ignition[835]: INFO : files: op(8): op(9): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 14:31:01.655666 ignition[835]: INFO : files: op(8): [finished] processing unit "containerd.service" Dec 13 14:31:01.655666 ignition[835]: INFO : files: op(a): [started] processing unit "coreos-metadata.service" Dec 13 14:31:01.655666 ignition[835]: INFO : files: op(a): op(b): [started] writing unit "coreos-metadata.service" at 
"/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 14:31:01.655666 ignition[835]: INFO : files: op(a): op(b): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 14:31:01.655666 ignition[835]: INFO : files: op(a): [finished] processing unit "coreos-metadata.service" Dec 13 14:31:01.655666 ignition[835]: INFO : files: op(c): [started] setting preset to disabled for "coreos-metadata.service" Dec 13 14:31:01.655666 ignition[835]: INFO : files: op(c): op(d): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 14:31:01.684130 ignition[835]: INFO : files: op(c): op(d): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 14:31:01.685802 ignition[835]: INFO : files: op(c): [finished] setting preset to disabled for "coreos-metadata.service" Dec 13 14:31:01.685802 ignition[835]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:31:01.685802 ignition[835]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:31:01.685802 ignition[835]: INFO : files: files passed Dec 13 14:31:01.685802 ignition[835]: INFO : Ignition finished successfully Dec 13 14:31:01.692669 systemd[1]: Finished ignition-files.service. Dec 13 14:31:01.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:01.694000 systemd[1]: Starting initrd-setup-root-after-ignition.service... Dec 13 14:31:01.694866 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Dec 13 14:31:01.695462 systemd[1]: Starting ignition-quench.service... 
Dec 13 14:31:01.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:01.699000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:01.697537 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 14:31:01.697597 systemd[1]: Finished ignition-quench.service. Dec 13 14:31:01.703607 initrd-setup-root-after-ignition[861]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Dec 13 14:31:01.706147 initrd-setup-root-after-ignition[863]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 14:31:01.706516 systemd[1]: Finished initrd-setup-root-after-ignition.service. Dec 13 14:31:01.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:01.708686 systemd[1]: Reached target ignition-complete.target. Dec 13 14:31:01.710806 systemd[1]: Starting initrd-parse-etc.service... Dec 13 14:31:01.720240 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 14:31:01.720320 systemd[1]: Finished initrd-parse-etc.service. Dec 13 14:31:01.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:01.721000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:01.722088 systemd[1]: Reached target initrd-fs.target. 
Dec 13 14:31:01.723585 systemd[1]: Reached target initrd.target. Dec 13 14:31:01.725093 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 14:31:01.725674 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 14:31:01.733765 systemd[1]: Finished dracut-pre-pivot.service. Dec 13 14:31:01.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:01.734565 systemd[1]: Starting initrd-cleanup.service... Dec 13 14:31:01.742826 systemd[1]: Stopped target nss-lookup.target. Dec 13 14:31:01.743625 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 14:31:01.745371 systemd[1]: Stopped target timers.target. Dec 13 14:31:01.746846 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 14:31:01.748000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:01.746948 systemd[1]: Stopped dracut-pre-pivot.service. Dec 13 14:31:01.748390 systemd[1]: Stopped target initrd.target. Dec 13 14:31:01.748832 systemd[1]: Stopped target basic.target. Dec 13 14:31:01.750975 systemd[1]: Stopped target ignition-complete.target. Dec 13 14:31:01.752221 systemd[1]: Stopped target ignition-diskful.target. Dec 13 14:31:01.753735 systemd[1]: Stopped target initrd-root-device.target. Dec 13 14:31:01.756710 systemd[1]: Stopped target remote-fs.target. Dec 13 14:31:01.757251 systemd[1]: Stopped target remote-fs-pre.target. Dec 13 14:31:01.759301 systemd[1]: Stopped target sysinit.target. Dec 13 14:31:01.760787 systemd[1]: Stopped target local-fs.target. Dec 13 14:31:01.762160 systemd[1]: Stopped target local-fs-pre.target. Dec 13 14:31:01.763556 systemd[1]: Stopped target swap.target. 
Dec 13 14:31:01.763846 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 14:31:01.766000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:01.763947 systemd[1]: Stopped dracut-pre-mount.service. Dec 13 14:31:01.766398 systemd[1]: Stopped target cryptsetup.target. Dec 13 14:31:01.769000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:01.767958 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 14:31:01.770000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:01.768071 systemd[1]: Stopped dracut-initqueue.service. Dec 13 14:31:01.769659 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 14:31:01.769761 systemd[1]: Stopped ignition-fetch-offline.service. Dec 13 14:31:01.770957 systemd[1]: Stopped target paths.target. Dec 13 14:31:01.772575 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 14:31:01.777291 systemd[1]: Stopped systemd-ask-password-console.path. Dec 13 14:31:01.777607 systemd[1]: Stopped target slices.target. Dec 13 14:31:01.779282 systemd[1]: Stopped target sockets.target. Dec 13 14:31:01.780652 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 14:31:01.781000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:01.780740 systemd[1]: Stopped initrd-setup-root-after-ignition.service. 
Dec 13 14:31:01.783000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:01.782072 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 14:31:01.782153 systemd[1]: Stopped ignition-files.service. Dec 13 14:31:01.784706 systemd[1]: Stopping ignition-mount.service... Dec 13 14:31:01.791868 iscsid[732]: iscsid shutting down. Dec 13 14:31:01.792697 ignition[876]: INFO : Ignition 2.14.0 Dec 13 14:31:01.792697 ignition[876]: INFO : Stage: umount Dec 13 14:31:01.792697 ignition[876]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 14:31:01.792697 ignition[876]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 14:31:01.792697 ignition[876]: INFO : umount: umount passed Dec 13 14:31:01.792697 ignition[876]: INFO : Ignition finished successfully Dec 13 14:31:01.798000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:01.799000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:01.785893 systemd[1]: Stopping iscsid.service... Dec 13 14:31:01.787974 systemd[1]: Stopping sysroot-boot.service... Dec 13 14:31:01.796162 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 14:31:01.803000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:01.796373 systemd[1]: Stopped systemd-udev-trigger.service. Dec 13 14:31:01.798484 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Dec 13 14:31:01.806000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:01.798599 systemd[1]: Stopped dracut-pre-trigger.service. Dec 13 14:31:01.802079 systemd[1]: iscsid.service: Deactivated successfully. Dec 13 14:31:01.809000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:01.811000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:01.802187 systemd[1]: Stopped iscsid.service. Dec 13 14:31:01.804623 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 14:31:01.813000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:01.805100 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 14:31:01.805165 systemd[1]: Stopped ignition-mount.service. Dec 13 14:31:01.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:01.816000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:01.806871 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 14:31:01.818000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:31:01.806951 systemd[1]: Closed iscsid.socket. Dec 13 14:31:01.808179 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 14:31:01.808217 systemd[1]: Stopped ignition-disks.service. Dec 13 14:31:01.809847 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 14:31:01.809877 systemd[1]: Stopped ignition-kargs.service. Dec 13 14:31:01.811620 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 14:31:01.811651 systemd[1]: Stopped ignition-setup.service. Dec 13 14:31:01.828000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:01.813369 systemd[1]: Stopping iscsiuio.service... Dec 13 14:31:01.815186 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 14:31:01.815312 systemd[1]: Finished initrd-cleanup.service. Dec 13 14:31:01.816902 systemd[1]: iscsiuio.service: Deactivated successfully. Dec 13 14:31:01.816966 systemd[1]: Stopped iscsiuio.service. Dec 13 14:31:01.834000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:01.836000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:01.818990 systemd[1]: Stopped target network.target. Dec 13 14:31:01.837000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:01.820007 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 14:31:01.820033 systemd[1]: Closed iscsiuio.socket. 
Dec 13 14:31:01.821481 systemd[1]: Stopping systemd-networkd.service... Dec 13 14:31:01.841000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:01.823241 systemd[1]: Stopping systemd-resolved.service... Dec 13 14:31:01.826293 systemd-networkd[720]: eth0: DHCPv6 lease lost Dec 13 14:31:01.843000 audit: BPF prog-id=9 op=UNLOAD Dec 13 14:31:01.827484 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 14:31:01.845000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:01.827552 systemd[1]: Stopped systemd-networkd.service. Dec 13 14:31:01.846000 audit: BPF prog-id=6 op=UNLOAD Dec 13 14:31:01.830370 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 14:31:01.830394 systemd[1]: Closed systemd-networkd.socket. Dec 13 14:31:01.832392 systemd[1]: Stopping network-cleanup.service... Dec 13 14:31:01.833353 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 14:31:01.850000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:01.833390 systemd[1]: Stopped parse-ip-for-networkd.service. Dec 13 14:31:01.835233 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 14:31:01.835275 systemd[1]: Stopped systemd-sysctl.service. Dec 13 14:31:01.855000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:31:01.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:01.836780 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 14:31:01.836810 systemd[1]: Stopped systemd-modules-load.service. Dec 13 14:31:01.859000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:01.837775 systemd[1]: Stopping systemd-udevd.service... Dec 13 14:31:01.861000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:01.863000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:01.864000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:01.839838 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 14:31:01.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:01.866000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:01.840189 systemd[1]: systemd-resolved.service: Deactivated successfully. 
Dec 13 14:31:01.840269 systemd[1]: Stopped systemd-resolved.service. Dec 13 14:31:01.844635 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 14:31:01.844700 systemd[1]: Stopped network-cleanup.service. Dec 13 14:31:01.848917 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 14:31:01.849011 systemd[1]: Stopped systemd-udevd.service. Dec 13 14:31:01.851798 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 14:31:01.851854 systemd[1]: Closed systemd-udevd-control.socket. Dec 13 14:31:01.852798 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 14:31:01.852821 systemd[1]: Closed systemd-udevd-kernel.socket. Dec 13 14:31:01.854549 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 14:31:01.854587 systemd[1]: Stopped dracut-pre-udev.service. Dec 13 14:31:01.856295 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 14:31:01.856323 systemd[1]: Stopped dracut-cmdline.service. Dec 13 14:31:01.857796 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 14:31:01.857825 systemd[1]: Stopped dracut-cmdline-ask.service. Dec 13 14:31:01.860194 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Dec 13 14:31:01.861294 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 14:31:01.861330 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Dec 13 14:31:01.862307 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 14:31:01.862337 systemd[1]: Stopped kmod-static-nodes.service. Dec 13 14:31:01.863809 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 14:31:01.863839 systemd[1]: Stopped systemd-vconsole-setup.service. Dec 13 14:31:01.887000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:31:01.865399 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Dec 13 14:31:01.865711 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 14:31:01.891000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:01.865767 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Dec 13 14:31:01.885970 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 14:31:01.886059 systemd[1]: Stopped sysroot-boot.service. Dec 13 14:31:01.887941 systemd[1]: Reached target initrd-switch-root.target. Dec 13 14:31:01.889573 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 14:31:01.897000 audit: BPF prog-id=8 op=UNLOAD Dec 13 14:31:01.897000 audit: BPF prog-id=7 op=UNLOAD Dec 13 14:31:01.889610 systemd[1]: Stopped initrd-setup-root.service. Dec 13 14:31:01.892420 systemd[1]: Starting initrd-switch-root.service... Dec 13 14:31:01.899000 audit: BPF prog-id=5 op=UNLOAD Dec 13 14:31:01.899000 audit: BPF prog-id=4 op=UNLOAD Dec 13 14:31:01.899000 audit: BPF prog-id=3 op=UNLOAD Dec 13 14:31:01.897442 systemd[1]: Switching root. Dec 13 14:31:01.918809 systemd-journald[197]: Journal stopped Dec 13 14:31:04.378006 systemd-journald[197]: Received SIGTERM from PID 1 (systemd). Dec 13 14:31:04.378069 kernel: SELinux: Class mctp_socket not defined in policy. Dec 13 14:31:04.378097 kernel: SELinux: Class anon_inode not defined in policy. 
Dec 13 14:31:04.378113 kernel: SELinux: the above unknown classes and permissions will be allowed Dec 13 14:31:04.378129 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 14:31:04.378142 kernel: SELinux: policy capability open_perms=1 Dec 13 14:31:04.378155 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 14:31:04.378168 kernel: SELinux: policy capability always_check_network=0 Dec 13 14:31:04.378181 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 14:31:04.378205 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 14:31:04.378218 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 14:31:04.378233 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 14:31:04.378246 systemd[1]: Successfully loaded SELinux policy in 42.064ms. Dec 13 14:31:04.378283 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.399ms. Dec 13 14:31:04.378300 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 14:31:04.378314 systemd[1]: Detected virtualization kvm. Dec 13 14:31:04.378329 systemd[1]: Detected architecture x86-64. Dec 13 14:31:04.378343 systemd[1]: Detected first boot. Dec 13 14:31:04.378357 systemd[1]: Initializing machine ID from VM UUID. Dec 13 14:31:04.378373 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). 
Dec 13 14:31:04.378386 kernel: kauditd_printk_skb: 73 callbacks suppressed Dec 13 14:31:04.378401 kernel: audit: type=1400 audit(1734100262.161:84): avc: denied { associate } for pid=926 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Dec 13 14:31:04.378417 kernel: audit: type=1300 audit(1734100262.161:84): arch=c000003e syscall=188 success=yes exit=0 a0=c0001916c2 a1=c00002cb40 a2=c00002aa40 a3=32 items=0 ppid=909 pid=926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:04.378432 kernel: audit: type=1327 audit(1734100262.161:84): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 14:31:04.378451 kernel: audit: type=1400 audit(1734100262.163:85): avc: denied { associate } for pid=926 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Dec 13 14:31:04.378466 kernel: audit: type=1300 audit(1734100262.163:85): arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c000191799 a2=1ed a3=0 items=2 ppid=909 pid=926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:04.378481 kernel: audit: type=1307 audit(1734100262.163:85): cwd="/" Dec 13 14:31:04.378495 kernel: audit: type=1302 audit(1734100262.163:85): 
item=0 name=(null) inode=2 dev=00:28 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:31:04.378508 kernel: audit: type=1302 audit(1734100262.163:85): item=1 name=(null) inode=3 dev=00:28 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:31:04.378522 kernel: audit: type=1327 audit(1734100262.163:85): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 14:31:04.378536 systemd[1]: Populated /etc with preset unit settings. Dec 13 14:31:04.378556 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:31:04.378573 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:31:04.378589 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:31:04.378604 systemd[1]: Queued start job for default target multi-user.target. Dec 13 14:31:04.378618 systemd[1]: Unnecessary job was removed for dev-vda6.device. Dec 13 14:31:04.378632 systemd[1]: Created slice system-addon\x2dconfig.slice. Dec 13 14:31:04.378649 systemd[1]: Created slice system-addon\x2drun.slice. Dec 13 14:31:04.378663 systemd[1]: Created slice system-getty.slice. Dec 13 14:31:04.378679 systemd[1]: Created slice system-modprobe.slice. Dec 13 14:31:04.378694 systemd[1]: Created slice system-serial\x2dgetty.slice. 
Dec 13 14:31:04.378714 systemd[1]: Created slice system-system\x2dcloudinit.slice. Dec 13 14:31:04.378728 systemd[1]: Created slice system-systemd\x2dfsck.slice. Dec 13 14:31:04.378745 systemd[1]: Created slice user.slice. Dec 13 14:31:04.378759 systemd[1]: Started systemd-ask-password-console.path. Dec 13 14:31:04.378773 systemd[1]: Started systemd-ask-password-wall.path. Dec 13 14:31:04.378787 systemd[1]: Set up automount boot.automount. Dec 13 14:31:04.378801 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Dec 13 14:31:04.378815 systemd[1]: Reached target integritysetup.target. Dec 13 14:31:04.378829 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 14:31:04.378842 systemd[1]: Reached target remote-fs.target. Dec 13 14:31:04.378856 systemd[1]: Reached target slices.target. Dec 13 14:31:04.378869 systemd[1]: Reached target swap.target. Dec 13 14:31:04.378886 systemd[1]: Reached target torcx.target. Dec 13 14:31:04.378899 systemd[1]: Reached target veritysetup.target. Dec 13 14:31:04.378912 systemd[1]: Listening on systemd-coredump.socket. Dec 13 14:31:04.378924 systemd[1]: Listening on systemd-initctl.socket. Dec 13 14:31:04.378937 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 14:31:04.378954 kernel: audit: type=1400 audit(1734100264.257:86): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:31:04.378968 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 14:31:04.378981 systemd[1]: Listening on systemd-journald.socket. Dec 13 14:31:04.379003 systemd[1]: Listening on systemd-networkd.socket. Dec 13 14:31:04.379020 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 14:31:04.379034 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 14:31:04.379049 systemd[1]: Listening on systemd-userdbd.socket. 
Dec 13 14:31:04.379063 systemd[1]: Mounting dev-hugepages.mount... Dec 13 14:31:04.379077 systemd[1]: Mounting dev-mqueue.mount... Dec 13 14:31:04.379090 systemd[1]: Mounting media.mount... Dec 13 14:31:04.379104 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:31:04.379118 systemd[1]: Mounting sys-kernel-debug.mount... Dec 13 14:31:04.379132 systemd[1]: Mounting sys-kernel-tracing.mount... Dec 13 14:31:04.379150 systemd[1]: Mounting tmp.mount... Dec 13 14:31:04.379165 systemd[1]: Starting flatcar-tmpfiles.service... Dec 13 14:31:04.379179 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:31:04.379193 systemd[1]: Starting kmod-static-nodes.service... Dec 13 14:31:04.379207 systemd[1]: Starting modprobe@configfs.service... Dec 13 14:31:04.379221 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:31:04.379234 systemd[1]: Starting modprobe@drm.service... Dec 13 14:31:04.379248 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:31:04.379279 systemd[1]: Starting modprobe@fuse.service... Dec 13 14:31:04.379296 systemd[1]: Starting modprobe@loop.service... Dec 13 14:31:04.379310 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 14:31:04.379324 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Dec 13 14:31:04.379338 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Dec 13 14:31:04.379351 systemd[1]: Starting systemd-journald.service... Dec 13 14:31:04.379364 kernel: loop: module loaded Dec 13 14:31:04.379377 kernel: fuse: init (API version 7.34) Dec 13 14:31:04.379390 systemd[1]: Starting systemd-modules-load.service... Dec 13 14:31:04.379404 systemd[1]: Starting systemd-network-generator.service... 
Dec 13 14:31:04.379419 systemd[1]: Starting systemd-remount-fs.service... Dec 13 14:31:04.379433 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 14:31:04.379447 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:31:04.379463 systemd-journald[1021]: Journal started Dec 13 14:31:04.379510 systemd-journald[1021]: Runtime Journal (/run/log/journal/056d794364d14707beb5296ef4c8f3ea) is 6.0M, max 48.5M, 42.5M free. Dec 13 14:31:04.257000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:31:04.257000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 13 14:31:04.376000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 14:31:04.376000 audit[1021]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7fff546d1fc0 a2=4000 a3=7fff546d205c items=0 ppid=1 pid=1021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:04.376000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 14:31:04.381294 systemd[1]: Started systemd-journald.service. Dec 13 14:31:04.381000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:04.383004 systemd[1]: Mounted dev-hugepages.mount. Dec 13 14:31:04.383941 systemd[1]: Mounted dev-mqueue.mount. 
Dec 13 14:31:04.384810 systemd[1]: Mounted media.mount.
Dec 13 14:31:04.385669 systemd[1]: Mounted sys-kernel-debug.mount.
Dec 13 14:31:04.386605 systemd[1]: Mounted sys-kernel-tracing.mount.
Dec 13 14:31:04.392521 systemd[1]: Mounted tmp.mount.
Dec 13 14:31:04.393733 systemd[1]: Finished flatcar-tmpfiles.service.
Dec 13 14:31:04.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:31:04.395042 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 14:31:04.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:31:04.396208 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 14:31:04.396454 systemd[1]: Finished modprobe@configfs.service.
Dec 13 14:31:04.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:31:04.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:31:04.397642 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:31:04.397860 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:31:04.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:31:04.398000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:31:04.399021 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 14:31:04.399233 systemd[1]: Finished modprobe@drm.service.
Dec 13 14:31:04.399000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:31:04.399000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:31:04.400474 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:31:04.400676 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:31:04.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:31:04.401000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:31:04.401871 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 14:31:04.402105 systemd[1]: Finished modprobe@fuse.service.
Dec 13 14:31:04.402000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:31:04.402000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:31:04.403236 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:31:04.403482 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:31:04.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:31:04.404000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:31:04.404739 systemd[1]: Finished systemd-modules-load.service.
Dec 13 14:31:04.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:31:04.406066 systemd[1]: Finished systemd-network-generator.service.
Dec 13 14:31:04.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:31:04.407580 systemd[1]: Finished systemd-remount-fs.service.
Dec 13 14:31:04.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:31:04.408958 systemd[1]: Reached target network-pre.target.
Dec 13 14:31:04.418034 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Dec 13 14:31:04.419780 systemd[1]: Mounting sys-kernel-config.mount...
Dec 13 14:31:04.420573 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 14:31:04.422323 systemd[1]: Starting systemd-hwdb-update.service...
Dec 13 14:31:04.424221 systemd[1]: Starting systemd-journal-flush.service...
Dec 13 14:31:04.425140 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:31:04.426729 systemd[1]: Starting systemd-random-seed.service...
Dec 13 14:31:04.427918 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:31:04.429418 systemd[1]: Starting systemd-sysctl.service...
Dec 13 14:31:04.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:31:04.441438 systemd-journald[1021]: Time spent on flushing to /var/log/journal/056d794364d14707beb5296ef4c8f3ea is 12.675ms for 1025 entries.
Dec 13 14:31:04.441438 systemd-journald[1021]: System Journal (/var/log/journal/056d794364d14707beb5296ef4c8f3ea) is 8.0M, max 195.6M, 187.6M free.
Dec 13 14:31:04.791452 systemd-journald[1021]: Received client request to flush runtime journal.
Dec 13 14:31:04.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:31:04.495000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:31:04.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:31:04.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:31:04.432873 systemd[1]: Starting systemd-sysusers.service...
Dec 13 14:31:04.437426 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 14:31:04.792114 udevadm[1060]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Dec 13 14:31:04.438526 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Dec 13 14:31:04.439507 systemd[1]: Mounted sys-kernel-config.mount.
Dec 13 14:31:04.441911 systemd[1]: Starting systemd-udev-settle.service...
Dec 13 14:31:04.491452 systemd[1]: Finished systemd-sysusers.service.
Dec 13 14:31:04.493646 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 14:31:04.494881 systemd[1]: Finished systemd-sysctl.service.
Dec 13 14:31:04.513215 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 14:31:04.692883 systemd[1]: Finished systemd-random-seed.service.
Dec 13 14:31:04.694077 systemd[1]: Reached target first-boot-complete.target.
Dec 13 14:31:04.792559 systemd[1]: Finished systemd-journal-flush.service.
Dec 13 14:31:04.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:31:05.026586 systemd[1]: Finished systemd-hwdb-update.service.
Dec 13 14:31:05.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:31:05.028820 systemd[1]: Starting systemd-udevd.service...
Dec 13 14:31:05.044451 systemd-udevd[1072]: Using default interface naming scheme 'v252'.
Dec 13 14:31:05.056347 systemd[1]: Started systemd-udevd.service.
Dec 13 14:31:05.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:31:05.059312 systemd[1]: Starting systemd-networkd.service...
Dec 13 14:31:05.065052 systemd[1]: Starting systemd-userdbd.service...
Dec 13 14:31:05.084303 systemd[1]: Found device dev-ttyS0.device.
Dec 13 14:31:05.103458 systemd[1]: Started systemd-userdbd.service.
Dec 13 14:31:05.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:31:05.108793 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 14:31:05.120000 audit[1076]: AVC avc: denied { confidentiality } for pid=1076 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Dec 13 14:31:05.136324 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Dec 13 14:31:05.141271 kernel: ACPI: button: Power Button [PWRF]
Dec 13 14:31:05.156839 systemd-networkd[1085]: lo: Link UP
Dec 13 14:31:05.156854 systemd-networkd[1085]: lo: Gained carrier
Dec 13 14:31:05.157223 systemd-networkd[1085]: Enumeration completed
Dec 13 14:31:05.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:31:05.157338 systemd[1]: Started systemd-networkd.service.
Dec 13 14:31:05.157343 systemd-networkd[1085]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 14:31:05.158717 systemd-networkd[1085]: eth0: Link UP
Dec 13 14:31:05.158729 systemd-networkd[1085]: eth0: Gained carrier
Dec 13 14:31:05.120000 audit[1076]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=555b54bd96f0 a1=337fc a2=7f30aa26cbc5 a3=5 items=110 ppid=1072 pid=1076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:31:05.120000 audit: CWD cwd="/"
Dec 13 14:31:05.120000 audit: PATH item=0 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=1 name=(null) inode=14495 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=2 name=(null) inode=14495 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=3 name=(null) inode=14496 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=4 name=(null) inode=14495 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=5 name=(null) inode=14497 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=6 name=(null) inode=14495 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=7 name=(null) inode=14498 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=8 name=(null) inode=14498 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=9 name=(null) inode=14499 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=10 name=(null) inode=14498 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=11 name=(null) inode=14500 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=12 name=(null) inode=14498 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=13 name=(null) inode=14501 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=14 name=(null) inode=14498 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=15 name=(null) inode=14502 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=16 name=(null) inode=14498 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=17 name=(null) inode=14503 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=18 name=(null) inode=14495 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=19 name=(null) inode=14504 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=20 name=(null) inode=14504 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=21 name=(null) inode=14505 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=22 name=(null) inode=14504 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=23 name=(null) inode=14506 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=24 name=(null) inode=14504 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=25 name=(null) inode=14507 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=26 name=(null) inode=14504 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=27 name=(null) inode=14508 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=28 name=(null) inode=14504 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=29 name=(null) inode=14509 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=30 name=(null) inode=14495 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=31 name=(null) inode=14510 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=32 name=(null) inode=14510 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=33 name=(null) inode=14511 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=34 name=(null) inode=14510 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=35 name=(null) inode=14512 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=36 name=(null) inode=14510 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=37 name=(null) inode=14513 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=38 name=(null) inode=14510 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=39 name=(null) inode=14514 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=40 name=(null) inode=14510 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=41 name=(null) inode=14515 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.175491 systemd-networkd[1085]: eth0: DHCPv4 address 10.0.0.142/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 14:31:05.120000 audit: PATH item=42 name=(null) inode=14495 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=43 name=(null) inode=14516 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=44 name=(null) inode=14516 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=45 name=(null) inode=14517 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=46 name=(null) inode=14516 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=47 name=(null) inode=14518 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=48 name=(null) inode=14516 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=49 name=(null) inode=14519 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=50 name=(null) inode=14516 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=51 name=(null) inode=14520 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=52 name=(null) inode=14516 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=53 name=(null) inode=14521 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=54 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=55 name=(null) inode=14522 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=56 name=(null) inode=14522 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=57 name=(null) inode=14523 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=58 name=(null) inode=14522 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=59 name=(null) inode=14524 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=60 name=(null) inode=14522 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=61 name=(null) inode=14525 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=62 name=(null) inode=14525 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=63 name=(null) inode=14526 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=64 name=(null) inode=14525 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=65 name=(null) inode=14527 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=66 name=(null) inode=14525 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=67 name=(null) inode=14528 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=68 name=(null) inode=14525 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=69 name=(null) inode=14529 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=70 name=(null) inode=14525 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=71 name=(null) inode=14530 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=72 name=(null) inode=14522 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=73 name=(null) inode=14531 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=74 name=(null) inode=14531 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=75 name=(null) inode=14532 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=76 name=(null) inode=14531 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=77 name=(null) inode=14533 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=78 name=(null) inode=14531 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=79 name=(null) inode=14534 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=80 name=(null) inode=14531 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=81 name=(null) inode=14535 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=82 name=(null) inode=14531 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=83 name=(null) inode=14536 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=84 name=(null) inode=14522 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=85 name=(null) inode=14537 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=86 name=(null) inode=14537 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=87 name=(null) inode=14538 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=88 name=(null) inode=14537 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=89 name=(null) inode=14539 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=90 name=(null) inode=14537 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=91 name=(null) inode=14540 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=92 name=(null) inode=14537 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=93 name=(null) inode=14541 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=94 name=(null) inode=14537 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=95 name=(null) inode=14542 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=96 name=(null) inode=14522 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=97 name=(null) inode=14543 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=98 name=(null) inode=14543 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=99 name=(null) inode=14544 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=100 name=(null) inode=14543 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=101 name=(null) inode=14545 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=102 name=(null) inode=14543 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=103 name=(null) inode=14546 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=104 name=(null) inode=14543 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=105 name=(null) inode=14547 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=106 name=(null) inode=14543 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=107 name=(null) inode=14548 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PATH item=109 name=(null) inode=14551 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:31:05.120000 audit: PROCTITLE proctitle="(udev-worker)"
Dec 13 14:31:05.179277 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Dec 13 14:31:05.184303 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 14:31:05.219289 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Dec 13 14:31:05.219839 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Dec 13 14:31:05.220008 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 13 14:31:05.262579 kernel: kvm: Nested Virtualization enabled
Dec 13 14:31:05.262631 kernel: SVM: kvm: Nested Paging enabled
Dec 13 14:31:05.262648 kernel: SVM: Virtual VMLOAD VMSAVE supported
Dec 13 14:31:05.263288 kernel: SVM: Virtual GIF supported
Dec 13 14:31:05.293292 kernel: EDAC MC: Ver: 3.0.0
Dec 13 14:31:05.318638 systemd[1]: Finished systemd-udev-settle.service.
Dec 13 14:31:05.319000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:31:05.320721 systemd[1]: Starting lvm2-activation-early.service...
Dec 13 14:31:05.327351 lvm[1108]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 14:31:05.355003 systemd[1]: Finished lvm2-activation-early.service. Dec 13 14:31:05.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:05.356093 systemd[1]: Reached target cryptsetup.target. Dec 13 14:31:05.358067 systemd[1]: Starting lvm2-activation.service... Dec 13 14:31:05.361180 lvm[1110]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 14:31:05.381977 systemd[1]: Finished lvm2-activation.service. Dec 13 14:31:05.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:05.413797 systemd[1]: Reached target local-fs-pre.target. Dec 13 14:31:05.414670 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 14:31:05.414687 systemd[1]: Reached target local-fs.target. Dec 13 14:31:05.415492 systemd[1]: Reached target machines.target. Dec 13 14:31:05.417491 systemd[1]: Starting ldconfig.service... Dec 13 14:31:05.418516 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:31:05.418563 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:31:05.419488 systemd[1]: Starting systemd-boot-update.service... Dec 13 14:31:05.421468 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 14:31:05.426115 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 14:31:05.428180 systemd[1]: Starting systemd-sysext.service... 
Dec 13 14:31:05.428643 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1113 (bootctl) Dec 13 14:31:05.429541 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 14:31:05.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:05.431543 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 14:31:05.436498 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 14:31:05.439586 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 14:31:05.439752 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 14:31:05.451280 kernel: loop0: detected capacity change from 0 to 211296 Dec 13 14:31:05.478223 systemd-fsck[1125]: fsck.fat 4.2 (2021-01-31) Dec 13 14:31:05.478223 systemd-fsck[1125]: /dev/vda1: 789 files, 119291/258078 clusters Dec 13 14:31:05.479596 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 14:31:05.508507 systemd[1]: Mounting boot.mount... Dec 13 14:31:05.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:05.542832 systemd[1]: Mounted boot.mount. Dec 13 14:31:06.475847 systemd[1]: Finished systemd-boot-update.service. Dec 13 14:31:06.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:31:06.484328 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 14:31:06.493993 ldconfig[1112]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 14:31:06.499292 kernel: loop1: detected capacity change from 0 to 211296 Dec 13 14:31:06.505800 (sd-sysext)[1133]: Using extensions 'kubernetes'. Dec 13 14:31:06.506101 (sd-sysext)[1133]: Merged extensions into '/usr'. Dec 13 14:31:06.576124 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:31:06.577555 systemd[1]: Mounting usr-share-oem.mount... Dec 13 14:31:06.578606 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:31:06.579762 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:31:06.583212 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:31:06.585673 systemd[1]: Starting modprobe@loop.service... Dec 13 14:31:06.586716 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:31:06.586929 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:31:06.587178 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:31:06.590106 systemd[1]: Mounted usr-share-oem.mount. Dec 13 14:31:06.591431 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:31:06.591556 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:31:06.592000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:31:06.592000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:06.592865 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:31:06.592993 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:31:06.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:06.593000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:06.594387 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:31:06.594506 systemd[1]: Finished modprobe@loop.service. Dec 13 14:31:06.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:06.595000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:06.595805 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:31:06.595893 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:31:06.596899 systemd[1]: Finished systemd-sysext.service. 
Dec 13 14:31:06.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:06.598995 systemd[1]: Starting ensure-sysext.service... Dec 13 14:31:06.600742 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 14:31:06.605448 systemd[1]: Reloading. Dec 13 14:31:06.609392 systemd-tmpfiles[1147]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 14:31:06.610060 systemd-tmpfiles[1147]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 14:31:06.611789 systemd-tmpfiles[1147]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 14:31:06.652381 /usr/lib/systemd/system-generators/torcx-generator[1166]: time="2024-12-13T14:31:06Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:31:06.652706 /usr/lib/systemd/system-generators/torcx-generator[1166]: time="2024-12-13T14:31:06Z" level=info msg="torcx already run" Dec 13 14:31:06.732451 systemd-networkd[1085]: eth0: Gained IPv6LL Dec 13 14:31:06.809579 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:31:06.809609 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 13 14:31:06.830632 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:31:06.882067 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 14:31:06.883216 systemd[1]: Finished ldconfig.service. Dec 13 14:31:06.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:06.890621 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 14:31:06.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:06.892740 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 14:31:06.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:06.895775 systemd[1]: Starting audit-rules.service... Dec 13 14:31:06.910934 systemd[1]: Starting clean-ca-certificates.service... Dec 13 14:31:06.912905 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 14:31:06.915472 systemd[1]: Starting systemd-resolved.service... Dec 13 14:31:06.918752 systemd[1]: Starting systemd-timesyncd.service... Dec 13 14:31:06.921563 systemd[1]: Starting systemd-update-utmp.service... Dec 13 14:31:06.923604 systemd[1]: Finished clean-ca-certificates.service. Dec 13 14:31:06.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Dec 13 14:31:06.923000 audit[1231]: SYSTEM_BOOT pid=1231 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 14:31:06.929585 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:31:06.931987 systemd[1]: Finished systemd-update-utmp.service. Dec 13 14:31:06.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:06.936300 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:31:06.936783 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:31:06.938870 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:31:06.941313 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:31:06.943683 systemd[1]: Starting modprobe@loop.service... Dec 13 14:31:06.944818 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:31:06.945172 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:31:06.945434 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:31:06.945667 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:31:06.946904 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Dec 13 14:31:06.947137 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:31:06.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:06.948000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:06.950728 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:31:06.951041 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:31:06.952623 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:31:06.953738 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:31:06.953885 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:31:06.954028 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:31:06.954121 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:31:06.954932 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:31:06.955105 systemd[1]: Finished modprobe@loop.service. Dec 13 14:31:06.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:31:06.956000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:06.957534 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:31:06.957693 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:31:06.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:06.958000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:06.959284 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:31:06.959483 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:31:06.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:06.960000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:06.961384 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:31:06.961554 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:31:06.964846 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Dec 13 14:31:06.965314 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:31:06.967805 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:31:06.969701 systemd[1]: Starting modprobe@drm.service... Dec 13 14:31:06.971456 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:31:06.973672 systemd[1]: Starting modprobe@loop.service... Dec 13 14:31:06.974560 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:31:06.974660 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:31:06.975995 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 14:31:06.978151 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:31:06.978270 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:31:06.979430 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 14:31:06.981178 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:31:06.981400 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:31:06.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:06.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:31:06.982000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:06.983075 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 14:31:06.983273 systemd[1]: Finished modprobe@drm.service. Dec 13 14:31:06.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:06.984000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:06.984825 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:31:06.985015 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:31:06.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:06.986000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:31:06.986000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 14:31:06.986000 audit[1260]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc6b547580 a2=420 a3=0 items=0 ppid=1219 pid=1260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:06.986000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 14:31:06.986744 augenrules[1260]: No rules Dec 13 14:31:06.986758 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:31:06.986953 systemd[1]: Finished modprobe@loop.service. Dec 13 14:31:06.988552 systemd[1]: Finished audit-rules.service. Dec 13 14:31:06.989975 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 14:31:06.991965 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:31:06.992091 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:31:06.994877 systemd[1]: Starting systemd-update-done.service... Dec 13 14:31:06.996817 systemd[1]: Finished ensure-sysext.service. Dec 13 14:31:06.999774 systemd[1]: Finished systemd-update-done.service. Dec 13 14:31:07.019127 systemd-resolved[1223]: Positive Trust Anchors: Dec 13 14:31:07.019151 systemd-resolved[1223]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 14:31:07.019178 systemd-resolved[1223]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 14:31:07.028157 systemd-resolved[1223]: Defaulting to hostname 'linux'. Dec 13 14:31:07.029575 systemd[1]: Started systemd-timesyncd.service. Dec 13 14:31:07.030651 systemd[1]: Started systemd-resolved.service. Dec 13 14:31:07.031028 systemd-timesyncd[1225]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 13 14:31:07.031064 systemd-timesyncd[1225]: Initial clock synchronization to Fri 2024-12-13 14:31:07.162842 UTC. Dec 13 14:31:07.031673 systemd[1]: Reached target network.target. Dec 13 14:31:07.032467 systemd[1]: Reached target network-online.target. Dec 13 14:31:07.033320 systemd[1]: Reached target nss-lookup.target. Dec 13 14:31:07.034139 systemd[1]: Reached target sysinit.target. Dec 13 14:31:07.035006 systemd[1]: Started motdgen.path. Dec 13 14:31:07.035725 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 14:31:07.036811 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 14:31:07.037657 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 14:31:07.037682 systemd[1]: Reached target paths.target. Dec 13 14:31:07.038431 systemd[1]: Reached target time-set.target. Dec 13 14:31:07.039352 systemd[1]: Started logrotate.timer. Dec 13 14:31:07.040130 systemd[1]: Started mdadm.timer. Dec 13 14:31:07.040795 systemd[1]: Reached target timers.target. 
Dec 13 14:31:07.041876 systemd[1]: Listening on dbus.socket. Dec 13 14:31:07.043681 systemd[1]: Starting docker.socket... Dec 13 14:31:07.045287 systemd[1]: Listening on sshd.socket. Dec 13 14:31:07.046151 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:31:07.046409 systemd[1]: Listening on docker.socket. Dec 13 14:31:07.047197 systemd[1]: Reached target sockets.target. Dec 13 14:31:07.047979 systemd[1]: Reached target basic.target. Dec 13 14:31:07.048816 systemd[1]: System is tainted: cgroupsv1 Dec 13 14:31:07.048857 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 14:31:07.048875 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 14:31:07.049853 systemd[1]: Starting containerd.service... Dec 13 14:31:07.051548 systemd[1]: Starting dbus.service... Dec 13 14:31:07.053140 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 14:31:07.055179 systemd[1]: Starting extend-filesystems.service... Dec 13 14:31:07.056115 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 14:31:07.057191 systemd[1]: Starting kubelet.service... Dec 13 14:31:07.058669 systemd[1]: Starting motdgen.service... Dec 13 14:31:07.060301 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 14:31:07.062424 systemd[1]: Starting sshd-keygen.service... Dec 13 14:31:07.065002 jq[1277]: false Dec 13 14:31:07.064585 systemd[1]: Starting systemd-logind.service... Dec 13 14:31:07.065344 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Dec 13 14:31:07.065394 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 14:31:07.066458 systemd[1]: Starting update-engine.service... Dec 13 14:31:07.068490 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 14:31:07.072400 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 14:31:07.072675 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Dec 13 14:31:07.073813 jq[1294]: true Dec 13 14:31:07.074201 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 14:31:07.074479 systemd[1]: Finished ssh-key-proc-cmdline.service. Dec 13 14:31:07.084849 jq[1302]: true Dec 13 14:31:07.086758 dbus-daemon[1276]: [system] SELinux support is enabled Dec 13 14:31:07.087149 systemd[1]: Started dbus.service. Dec 13 14:31:07.091560 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 14:31:07.091594 systemd[1]: Reached target system-config.target. Dec 13 14:31:07.092552 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 14:31:07.092570 systemd[1]: Reached target user-config.target. 
Dec 13 14:31:07.096064 extend-filesystems[1278]: Found loop1 Dec 13 14:31:07.096064 extend-filesystems[1278]: Found sr0 Dec 13 14:31:07.096064 extend-filesystems[1278]: Found vda Dec 13 14:31:07.096064 extend-filesystems[1278]: Found vda1 Dec 13 14:31:07.096064 extend-filesystems[1278]: Found vda2 Dec 13 14:31:07.096064 extend-filesystems[1278]: Found vda3 Dec 13 14:31:07.096064 extend-filesystems[1278]: Found usr Dec 13 14:31:07.096064 extend-filesystems[1278]: Found vda4 Dec 13 14:31:07.096064 extend-filesystems[1278]: Found vda6 Dec 13 14:31:07.096064 extend-filesystems[1278]: Found vda7 Dec 13 14:31:07.096064 extend-filesystems[1278]: Found vda9 Dec 13 14:31:07.096064 extend-filesystems[1278]: Checking size of /dev/vda9 Dec 13 14:31:07.094982 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 14:31:07.095194 systemd[1]: Finished motdgen.service. Dec 13 14:31:07.123155 extend-filesystems[1278]: Resized partition /dev/vda9 Dec 13 14:31:07.225927 update_engine[1292]: I1213 14:31:07.225714 1292 main.cc:92] Flatcar Update Engine starting Dec 13 14:31:07.229141 update_engine[1292]: I1213 14:31:07.229116 1292 update_check_scheduler.cc:74] Next update check in 7m7s Dec 13 14:31:07.229533 systemd[1]: Started update-engine.service. Dec 13 14:31:07.232358 systemd[1]: Started locksmithd.service. Dec 13 14:31:07.243043 extend-filesystems[1331]: resize2fs 1.46.5 (30-Dec-2021) Dec 13 14:31:07.247728 env[1304]: time="2024-12-13T14:31:07.245080163Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 14:31:07.250852 systemd-logind[1286]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 14:31:07.250873 systemd-logind[1286]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 14:31:07.251076 systemd-logind[1286]: New seat seat0. Dec 13 14:31:07.252883 systemd[1]: Started systemd-logind.service. 
Dec 13 14:31:07.272758 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Dec 13 14:31:07.273930 env[1304]: time="2024-12-13T14:31:07.273886205Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 14:31:07.274051 env[1304]: time="2024-12-13T14:31:07.274029003Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:31:07.275381 env[1304]: time="2024-12-13T14:31:07.275346363Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:31:07.275381 env[1304]: time="2024-12-13T14:31:07.275371781Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:31:07.275642 env[1304]: time="2024-12-13T14:31:07.275609146Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:31:07.275642 env[1304]: time="2024-12-13T14:31:07.275633051Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 14:31:07.275689 env[1304]: time="2024-12-13T14:31:07.275646617Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 14:31:07.275689 env[1304]: time="2024-12-13T14:31:07.275657266Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Dec 13 14:31:07.275742 env[1304]: time="2024-12-13T14:31:07.275728260Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:31:07.275964 env[1304]: time="2024-12-13T14:31:07.275940858Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:31:07.276094 env[1304]: time="2024-12-13T14:31:07.276075661Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:31:07.276118 env[1304]: time="2024-12-13T14:31:07.276094426Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 14:31:07.276156 env[1304]: time="2024-12-13T14:31:07.276141655Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 14:31:07.276184 env[1304]: time="2024-12-13T14:31:07.276155992Z" level=info msg="metadata content store policy set" policy=shared Dec 13 14:31:07.393280 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Dec 13 14:31:07.402701 locksmithd[1334]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 14:31:07.526569 extend-filesystems[1331]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 14:31:07.526569 extend-filesystems[1331]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 14:31:07.526569 extend-filesystems[1331]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Dec 13 14:31:07.532048 extend-filesystems[1278]: Resized filesystem in /dev/vda9 Dec 13 14:31:07.527329 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Dec 13 14:31:07.538118 bash[1328]: Updated "/home/core/.ssh/authorized_keys" Dec 13 14:31:07.527589 systemd[1]: Finished extend-filesystems.service. Dec 13 14:31:07.534118 systemd[1]: Finished update-ssh-keys-after-ignition.service. Dec 13 14:31:07.544373 env[1304]: time="2024-12-13T14:31:07.544331497Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 14:31:07.544449 env[1304]: time="2024-12-13T14:31:07.544392511Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 14:31:07.544449 env[1304]: time="2024-12-13T14:31:07.544404904Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 14:31:07.544607 env[1304]: time="2024-12-13T14:31:07.544530680Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 14:31:07.544650 env[1304]: time="2024-12-13T14:31:07.544619867Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 14:31:07.544650 env[1304]: time="2024-12-13T14:31:07.544643542Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 14:31:07.544689 env[1304]: time="2024-12-13T14:31:07.544662778Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 14:31:07.544689 env[1304]: time="2024-12-13T14:31:07.544682645Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 14:31:07.544733 env[1304]: time="2024-12-13T14:31:07.544699437Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Dec 13 14:31:07.544733 env[1304]: time="2024-12-13T14:31:07.544718432Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Dec 13 14:31:07.544772 env[1304]: time="2024-12-13T14:31:07.544733831Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 14:31:07.544772 env[1304]: time="2024-12-13T14:31:07.544751685Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 14:31:07.544958 env[1304]: time="2024-12-13T14:31:07.544926322Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 14:31:07.545051 env[1304]: time="2024-12-13T14:31:07.545026650Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 14:31:07.545467 env[1304]: time="2024-12-13T14:31:07.545446347Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 14:31:07.545509 env[1304]: time="2024-12-13T14:31:07.545485671Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 14:31:07.545509 env[1304]: time="2024-12-13T14:31:07.545498014Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 14:31:07.545574 env[1304]: time="2024-12-13T14:31:07.545555462Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 14:31:07.545574 env[1304]: time="2024-12-13T14:31:07.545570440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 14:31:07.545664 env[1304]: time="2024-12-13T14:31:07.545581771Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 14:31:07.545664 env[1304]: time="2024-12-13T14:31:07.545591229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Dec 13 14:31:07.545664 env[1304]: time="2024-12-13T14:31:07.545604314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 14:31:07.545664 env[1304]: time="2024-12-13T14:31:07.545619552Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 14:31:07.545664 env[1304]: time="2024-12-13T14:31:07.545632977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 14:31:07.545664 env[1304]: time="2024-12-13T14:31:07.545644799Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 14:31:07.545664 env[1304]: time="2024-12-13T14:31:07.545662362Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 14:31:07.545819 env[1304]: time="2024-12-13T14:31:07.545800872Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 14:31:07.545819 env[1304]: time="2024-12-13T14:31:07.545817573Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 14:31:07.545885 env[1304]: time="2024-12-13T14:31:07.545828324Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 14:31:07.545885 env[1304]: time="2024-12-13T14:31:07.545838162Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 14:31:07.545885 env[1304]: time="2024-12-13T14:31:07.545851227Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 14:31:07.545885 env[1304]: time="2024-12-13T14:31:07.545862468Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Dec 13 14:31:07.545885 env[1304]: time="2024-12-13T14:31:07.545879760Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 14:31:07.546031 env[1304]: time="2024-12-13T14:31:07.545923502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 14:31:07.546129 env[1304]: time="2024-12-13T14:31:07.546091848Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false 
MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 14:31:07.546787 env[1304]: time="2024-12-13T14:31:07.546139377Z" level=info msg="Connect containerd service" Dec 13 14:31:07.546787 env[1304]: time="2024-12-13T14:31:07.546167921Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 14:31:07.546787 env[1304]: time="2024-12-13T14:31:07.546682546Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 14:31:07.548365 env[1304]: time="2024-12-13T14:31:07.546857053Z" level=info msg="Start subscribing containerd event" Dec 13 14:31:07.548365 env[1304]: time="2024-12-13T14:31:07.546877151Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 14:31:07.548365 env[1304]: time="2024-12-13T14:31:07.546904552Z" level=info msg="Start recovering state" Dec 13 14:31:07.548365 env[1304]: time="2024-12-13T14:31:07.546926593Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 14:31:07.548365 env[1304]: time="2024-12-13T14:31:07.546964013Z" level=info msg="containerd successfully booted in 0.302960s" Dec 13 14:31:07.547051 systemd[1]: Started containerd.service. 
Dec 13 14:31:07.550152 env[1304]: time="2024-12-13T14:31:07.548945419Z" level=info msg="Start event monitor" Dec 13 14:31:07.550152 env[1304]: time="2024-12-13T14:31:07.548989352Z" level=info msg="Start snapshots syncer" Dec 13 14:31:07.550152 env[1304]: time="2024-12-13T14:31:07.549017545Z" level=info msg="Start cni network conf syncer for default" Dec 13 14:31:07.550152 env[1304]: time="2024-12-13T14:31:07.549031381Z" level=info msg="Start streaming server" Dec 13 14:31:07.616087 sshd_keygen[1305]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 14:31:07.634026 systemd[1]: Finished sshd-keygen.service. Dec 13 14:31:07.652229 systemd[1]: Starting issuegen.service... Dec 13 14:31:07.657303 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 14:31:07.657501 systemd[1]: Finished issuegen.service. Dec 13 14:31:07.659504 systemd[1]: Starting systemd-user-sessions.service... Dec 13 14:31:07.689801 systemd[1]: Finished systemd-user-sessions.service. Dec 13 14:31:07.692076 systemd[1]: Started getty@tty1.service. Dec 13 14:31:07.693996 systemd[1]: Started serial-getty@ttyS0.service. Dec 13 14:31:07.695034 systemd[1]: Reached target getty.target. Dec 13 14:31:08.094778 systemd[1]: Started kubelet.service. Dec 13 14:31:08.096783 systemd[1]: Reached target multi-user.target. Dec 13 14:31:08.099250 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 14:31:08.106831 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 14:31:08.107081 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 14:31:08.109347 systemd[1]: Startup finished in 4.891s (kernel) + 6.152s (userspace) = 11.044s. 
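The earlier "failed to load cni during init ... no network config found in /etc/cni/net.d" error, together with the "Start cni network conf syncer for default" message above, means the CRI plugin is polling `/etc/cni/net.d` for a network config that does not exist yet. A hedged sketch of a minimal bridge conflist that would satisfy the syncer follows; the file name is hypothetical, and the subnet is taken from the `192.168.1.0/24` pod CIDR the kubelet pushes later in this log:

```json
{
  "cniVersion": "0.3.1",
  "name": "example-bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "192.168.1.0/24"
      }
    }
  ]
}
```

Dropped in as, say, `/etc/cni/net.d/10-bridge.conflist` (hypothetical path), the conf syncer picks it up without a containerd restart. In practice a CNI provider (flannel, Calico, etc.) usually writes this file, which is why the error is expected and harmless on a node that has not yet joined a cluster.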
Dec 13 14:31:08.577728 kubelet[1369]: E1213 14:31:08.577591 1369 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:31:08.579568 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:31:08.579728 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:31:15.624392 systemd[1]: Created slice system-sshd.slice. Dec 13 14:31:15.625545 systemd[1]: Started sshd@0-10.0.0.142:22-10.0.0.1:40140.service. Dec 13 14:31:15.668752 sshd[1380]: Accepted publickey for core from 10.0.0.1 port 40140 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8 Dec 13 14:31:15.670202 sshd[1380]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:31:15.678534 systemd-logind[1286]: New session 1 of user core. Dec 13 14:31:15.679279 systemd[1]: Created slice user-500.slice. Dec 13 14:31:15.680077 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 14:31:15.687899 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 14:31:15.689028 systemd[1]: Starting user@500.service... Dec 13 14:31:15.691755 (systemd)[1385]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:31:15.758461 systemd[1385]: Queued start job for default target default.target. Dec 13 14:31:15.758653 systemd[1385]: Reached target paths.target. Dec 13 14:31:15.758667 systemd[1385]: Reached target sockets.target. Dec 13 14:31:15.758678 systemd[1385]: Reached target timers.target. Dec 13 14:31:15.758689 systemd[1385]: Reached target basic.target. Dec 13 14:31:15.758724 systemd[1385]: Reached target default.target. Dec 13 14:31:15.758745 systemd[1385]: Startup finished in 61ms. 
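The kubelet exit at the top of this block ("open /var/lib/kubelet/config.yaml: no such file or directory", followed by `status=1/FAILURE`) is the normal failure mode for a node that has not yet joined a cluster: that file is ordinarily written by `kubeadm init`/`kubeadm join`. A hedged sketch of a minimal `KubeletConfiguration` that would satisfy the load follows; the `staticPodPath` and `cgroupDriver` values are taken from the later, successful kubelet run in this log, everything else is illustrative:

```yaml
# Hypothetical minimal /var/lib/kubelet/config.yaml.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Matches the "Adding static pod path" message logged by the later kubelet run.
staticPodPath: /etc/kubernetes/manifests
# Matches "CgroupDriver":"cgroupfs" in the container manager config dump below.
cgroupDriver: cgroupfs
```

With `Restart=` semantics in the kubelet unit, the service simply retries until the config file appears, so the single failure recorded here does not need manual intervention.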
Dec 13 14:31:15.758932 systemd[1]: Started user@500.service. Dec 13 14:31:15.759939 systemd[1]: Started session-1.scope. Dec 13 14:31:15.810009 systemd[1]: Started sshd@1-10.0.0.142:22-10.0.0.1:40148.service. Dec 13 14:31:15.849066 sshd[1394]: Accepted publickey for core from 10.0.0.1 port 40148 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8 Dec 13 14:31:15.850068 sshd[1394]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:31:15.854417 systemd-logind[1286]: New session 2 of user core. Dec 13 14:31:15.855100 systemd[1]: Started session-2.scope. Dec 13 14:31:15.906968 sshd[1394]: pam_unix(sshd:session): session closed for user core Dec 13 14:31:15.909701 systemd[1]: Started sshd@2-10.0.0.142:22-10.0.0.1:40158.service. Dec 13 14:31:15.910192 systemd[1]: sshd@1-10.0.0.142:22-10.0.0.1:40148.service: Deactivated successfully. Dec 13 14:31:15.911040 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 14:31:15.911062 systemd-logind[1286]: Session 2 logged out. Waiting for processes to exit. Dec 13 14:31:15.911919 systemd-logind[1286]: Removed session 2. Dec 13 14:31:15.949351 sshd[1400]: Accepted publickey for core from 10.0.0.1 port 40158 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8 Dec 13 14:31:15.950381 sshd[1400]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:31:15.953838 systemd-logind[1286]: New session 3 of user core. Dec 13 14:31:15.954613 systemd[1]: Started session-3.scope. Dec 13 14:31:16.003576 sshd[1400]: pam_unix(sshd:session): session closed for user core Dec 13 14:31:16.005795 systemd[1]: Started sshd@3-10.0.0.142:22-10.0.0.1:40160.service. Dec 13 14:31:16.006331 systemd[1]: sshd@2-10.0.0.142:22-10.0.0.1:40158.service: Deactivated successfully. Dec 13 14:31:16.007107 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 14:31:16.007146 systemd-logind[1286]: Session 3 logged out. Waiting for processes to exit. 
Dec 13 14:31:16.007915 systemd-logind[1286]: Removed session 3. Dec 13 14:31:16.047242 sshd[1406]: Accepted publickey for core from 10.0.0.1 port 40160 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8 Dec 13 14:31:16.048307 sshd[1406]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:31:16.051685 systemd-logind[1286]: New session 4 of user core. Dec 13 14:31:16.052404 systemd[1]: Started session-4.scope. Dec 13 14:31:16.105044 sshd[1406]: pam_unix(sshd:session): session closed for user core Dec 13 14:31:16.107376 systemd[1]: Started sshd@4-10.0.0.142:22-10.0.0.1:40168.service. Dec 13 14:31:16.107751 systemd[1]: sshd@3-10.0.0.142:22-10.0.0.1:40160.service: Deactivated successfully. Dec 13 14:31:16.108507 systemd-logind[1286]: Session 4 logged out. Waiting for processes to exit. Dec 13 14:31:16.108535 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 14:31:16.109416 systemd-logind[1286]: Removed session 4. Dec 13 14:31:16.146565 sshd[1414]: Accepted publickey for core from 10.0.0.1 port 40168 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8 Dec 13 14:31:16.147482 sshd[1414]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:31:16.150678 systemd-logind[1286]: New session 5 of user core. Dec 13 14:31:16.151404 systemd[1]: Started session-5.scope. Dec 13 14:31:16.206664 sudo[1419]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 14:31:16.206842 sudo[1419]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 14:31:16.218117 systemd[1]: Starting coreos-metadata.service... Dec 13 14:31:16.224529 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 13 14:31:16.224714 systemd[1]: Finished coreos-metadata.service. Dec 13 14:31:16.643561 systemd[1]: Stopped kubelet.service. Dec 13 14:31:16.645494 systemd[1]: Starting kubelet.service... Dec 13 14:31:16.666146 systemd[1]: Reloading. 
Dec 13 14:31:16.729356 /usr/lib/systemd/system-generators/torcx-generator[1486]: time="2024-12-13T14:31:16Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:31:16.729380 /usr/lib/systemd/system-generators/torcx-generator[1486]: time="2024-12-13T14:31:16Z" level=info msg="torcx already run" Dec 13 14:31:16.903372 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:31:16.903392 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:31:16.922561 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:31:16.986514 systemd[1]: Started kubelet.service. Dec 13 14:31:16.989686 systemd[1]: Stopping kubelet.service... Dec 13 14:31:16.990441 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 14:31:16.990654 systemd[1]: Stopped kubelet.service. Dec 13 14:31:16.992019 systemd[1]: Starting kubelet.service... Dec 13 14:31:17.061467 systemd[1]: Started kubelet.service. Dec 13 14:31:17.103251 kubelet[1550]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:31:17.103251 kubelet[1550]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
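The `locksmithd.service` warnings above (`CPUShares=` and `MemoryLimit=` are deprecated cgroup-v1 directives) are resolved with a drop-in rather than by editing the vendor unit. A hedged sketch follows; the drop-in path and the numeric values are hypothetical, since the log does not show the original settings (note `CPUWeight=` uses a 1..10000 scale with default 100, versus `CPUShares=` with default 1024):

```ini
# Hypothetical drop-in: /etc/systemd/system/locksmithd.service.d/10-cgroup-v2.conf
[Service]
# Clear the deprecated directives, then set their replacements.
CPUShares=
CPUWeight=100
MemoryLimit=
MemoryMax=512M
```

After `systemctl daemon-reload` the warnings on lines 8 and 9 of the unit would disappear; the legacy-path notice for `docker.socket` (`/var/run/docker.sock` vs `/run/docker.sock`) is the same class of fix, updating `ListenStream=` in the socket unit.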
Dec 13 14:31:17.103251 kubelet[1550]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:31:17.104245 kubelet[1550]: I1213 14:31:17.104186 1550 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 14:31:17.349087 kubelet[1550]: I1213 14:31:17.348940 1550 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 14:31:17.349087 kubelet[1550]: I1213 14:31:17.348978 1550 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 14:31:17.349254 kubelet[1550]: I1213 14:31:17.349234 1550 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 14:31:17.373467 kubelet[1550]: I1213 14:31:17.373422 1550 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:31:17.400857 kubelet[1550]: I1213 14:31:17.400813 1550 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 14:31:17.401920 kubelet[1550]: I1213 14:31:17.401899 1550 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 14:31:17.402087 kubelet[1550]: I1213 14:31:17.402051 1550 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 14:31:17.402087 kubelet[1550]: I1213 14:31:17.402075 1550 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 14:31:17.402087 kubelet[1550]: I1213 14:31:17.402083 1550 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 14:31:17.402288 kubelet[1550]: 
I1213 14:31:17.402169 1550 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:31:17.402288 kubelet[1550]: I1213 14:31:17.402239 1550 kubelet.go:396] "Attempting to sync node with API server" Dec 13 14:31:17.402288 kubelet[1550]: I1213 14:31:17.402250 1550 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 14:31:17.402358 kubelet[1550]: I1213 14:31:17.402298 1550 kubelet.go:312] "Adding apiserver pod source" Dec 13 14:31:17.402358 kubelet[1550]: I1213 14:31:17.402308 1550 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 14:31:17.402474 kubelet[1550]: E1213 14:31:17.402415 1550 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:31:17.402474 kubelet[1550]: E1213 14:31:17.402477 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:31:17.405429 kubelet[1550]: I1213 14:31:17.405409 1550 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 14:31:17.408921 kubelet[1550]: W1213 14:31:17.408902 1550 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Dec 13 14:31:17.408921 kubelet[1550]: W1213 14:31:17.408912 1550 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "10.0.0.142" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Dec 13 14:31:17.408993 kubelet[1550]: E1213 14:31:17.408932 1550 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster 
scope Dec 13 14:31:17.408993 kubelet[1550]: E1213 14:31:17.408941 1550 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.142" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Dec 13 14:31:17.409528 kubelet[1550]: I1213 14:31:17.409506 1550 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 14:31:17.409593 kubelet[1550]: W1213 14:31:17.409550 1550 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 14:31:17.410072 kubelet[1550]: I1213 14:31:17.410047 1550 server.go:1256] "Started kubelet" Dec 13 14:31:17.410116 kubelet[1550]: I1213 14:31:17.410104 1550 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 14:31:17.410458 kubelet[1550]: I1213 14:31:17.410445 1550 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 14:31:17.410794 kubelet[1550]: I1213 14:31:17.410781 1550 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 14:31:17.411148 kubelet[1550]: I1213 14:31:17.411122 1550 server.go:461] "Adding debug handlers to kubelet server" Dec 13 14:31:17.412698 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Dec 13 14:31:17.412823 kubelet[1550]: I1213 14:31:17.412798 1550 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 14:31:17.415592 kubelet[1550]: E1213 14:31:17.415571 1550 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.142\" not found" Dec 13 14:31:17.415638 kubelet[1550]: I1213 14:31:17.415604 1550 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 14:31:17.415773 kubelet[1550]: I1213 14:31:17.415743 1550 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 14:31:17.415826 kubelet[1550]: I1213 14:31:17.415813 1550 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 14:31:17.421294 kubelet[1550]: E1213 14:31:17.421247 1550 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 14:31:17.423164 kubelet[1550]: I1213 14:31:17.423148 1550 factory.go:221] Registration of the containerd container factory successfully Dec 13 14:31:17.423247 kubelet[1550]: I1213 14:31:17.423233 1550 factory.go:221] Registration of the systemd container factory successfully Dec 13 14:31:17.423574 kubelet[1550]: I1213 14:31:17.423459 1550 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 14:31:17.430084 kubelet[1550]: E1213 14:31:17.430069 1550 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.142.1810c303bfee4001 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.142,UID:10.0.0.142,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.142,},FirstTimestamp:2024-12-13 14:31:17.410021377 +0000 UTC m=+0.343497325,LastTimestamp:2024-12-13 14:31:17.410021377 +0000 UTC m=+0.343497325,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.142,}" Dec 13 14:31:17.432343 kubelet[1550]: E1213 14:31:17.432331 1550 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.142\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Dec 13 14:31:17.435922 kubelet[1550]: W1213 14:31:17.435893 1550 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Dec 13 14:31:17.435967 kubelet[1550]: E1213 14:31:17.435928 1550 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Dec 13 14:31:17.436156 kubelet[1550]: E1213 14:31:17.436134 1550 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.142.1810c303c0993f74 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.142,UID:10.0.0.142,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.0.0.142,},FirstTimestamp:2024-12-13 
14:31:17.421227892 +0000 UTC m=+0.354703840,LastTimestamp:2024-12-13 14:31:17.421227892 +0000 UTC m=+0.354703840,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.142,}" Dec 13 14:31:17.444730 kubelet[1550]: I1213 14:31:17.444704 1550 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 14:31:17.444786 kubelet[1550]: I1213 14:31:17.444753 1550 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 14:31:17.444786 kubelet[1550]: I1213 14:31:17.444783 1550 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:31:17.445006 kubelet[1550]: E1213 14:31:17.444992 1550 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.142.1810c303c1f3a75c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.142,UID:10.0.0.142,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.142 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.142,},FirstTimestamp:2024-12-13 14:31:17.443929948 +0000 UTC m=+0.377405896,LastTimestamp:2024-12-13 14:31:17.443929948 +0000 UTC m=+0.377405896,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.142,}" Dec 13 14:31:17.449562 kubelet[1550]: E1213 14:31:17.449534 1550 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.142.1810c303c1f3c572 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.142,UID:10.0.0.142,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 10.0.0.142 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:10.0.0.142,},FirstTimestamp:2024-12-13 14:31:17.44393765 +0000 UTC m=+0.377413598,LastTimestamp:2024-12-13 14:31:17.44393765 +0000 UTC m=+0.377413598,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.142,}" Dec 13 14:31:17.453139 kubelet[1550]: E1213 14:31:17.453115 1550 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.142.1810c303c1f3daff default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.142,UID:10.0.0.142,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 10.0.0.142 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:10.0.0.142,},FirstTimestamp:2024-12-13 14:31:17.443943167 +0000 UTC m=+0.377419117,LastTimestamp:2024-12-13 14:31:17.443943167 +0000 UTC m=+0.377419117,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.142,}" Dec 13 14:31:17.517131 kubelet[1550]: I1213 14:31:17.517096 1550 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.142" Dec 13 14:31:18.087535 kubelet[1550]: I1213 14:31:18.087463 1550 kubelet_node_status.go:76] "Successfully registered node" node="10.0.0.142" Dec 13 14:31:18.088822 kubelet[1550]: I1213 14:31:18.088794 1550 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Dec 13 14:31:18.089310 env[1304]: 
time="2024-12-13T14:31:18.089245401Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 14:31:18.089817 kubelet[1550]: I1213 14:31:18.089793 1550 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Dec 13 14:31:18.128816 kubelet[1550]: I1213 14:31:18.128778 1550 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 14:31:18.128816 kubelet[1550]: I1213 14:31:18.128796 1550 policy_none.go:49] "None policy: Start" Dec 13 14:31:18.129672 kubelet[1550]: I1213 14:31:18.129656 1550 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 14:31:18.129758 kubelet[1550]: I1213 14:31:18.129737 1550 state_mem.go:35] "Initializing new in-memory state store" Dec 13 14:31:18.130295 kubelet[1550]: I1213 14:31:18.130278 1550 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 14:31:18.130340 kubelet[1550]: I1213 14:31:18.130309 1550 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 14:31:18.130366 kubelet[1550]: I1213 14:31:18.130356 1550 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 14:31:18.130429 kubelet[1550]: E1213 14:31:18.130415 1550 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 14:31:18.227368 kubelet[1550]: I1213 14:31:18.227320 1550 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 14:31:18.227581 kubelet[1550]: I1213 14:31:18.227565 1550 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 14:31:18.233405 kubelet[1550]: E1213 14:31:18.233391 1550 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.142\" not found" Dec 13 14:31:18.240772 kubelet[1550]: E1213 14:31:18.240718 1550 
kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.142\" not found" Dec 13 14:31:18.341305 kubelet[1550]: E1213 14:31:18.341144 1550 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.142\" not found" Dec 13 14:31:18.351352 kubelet[1550]: I1213 14:31:18.351321 1550 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 13 14:31:18.351500 kubelet[1550]: W1213 14:31:18.351470 1550 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.RuntimeClass ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Dec 13 14:31:18.351622 kubelet[1550]: E1213 14:31:18.351552 1550 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.135:6443/api/v1/namespaces/default/events\": read tcp 10.0.0.142:51562->10.0.0.135:6443: use of closed network connection" event="&Event{ObjectMeta:{10.0.0.142.1810c303c1f3daff default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.142,UID:10.0.0.142,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 10.0.0.142 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:10.0.0.142,},FirstTimestamp:2024-12-13 14:31:17.443943167 +0000 UTC m=+0.377419117,LastTimestamp:2024-12-13 14:31:17.517061425 +0000 UTC m=+0.450537373,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.142,}" Dec 13 14:31:18.402995 kubelet[1550]: I1213 14:31:18.402968 1550 apiserver.go:52] "Watching apiserver" Dec 13 14:31:18.402995 kubelet[1550]: E1213 14:31:18.402991 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Dec 13 14:31:18.405003 kubelet[1550]: I1213 14:31:18.404981 1550 topology_manager.go:215] "Topology Admit Handler" podUID="04273870-b99d-4e76-8c46-82ae0dfdfa26" podNamespace="kube-system" podName="cilium-7747k" Dec 13 14:31:18.405096 kubelet[1550]: I1213 14:31:18.405079 1550 topology_manager.go:215] "Topology Admit Handler" podUID="f81baa92-d67b-456b-aee4-f703991481d2" podNamespace="kube-system" podName="kube-proxy-fx52q" Dec 13 14:31:18.416735 kubelet[1550]: I1213 14:31:18.416705 1550 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 14:31:18.424566 kubelet[1550]: I1213 14:31:18.424547 1550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/04273870-b99d-4e76-8c46-82ae0dfdfa26-clustermesh-secrets\") pod \"cilium-7747k\" (UID: \"04273870-b99d-4e76-8c46-82ae0dfdfa26\") " pod="kube-system/cilium-7747k" Dec 13 14:31:18.424631 kubelet[1550]: I1213 14:31:18.424573 1550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f81baa92-d67b-456b-aee4-f703991481d2-lib-modules\") pod \"kube-proxy-fx52q\" (UID: \"f81baa92-d67b-456b-aee4-f703991481d2\") " pod="kube-system/kube-proxy-fx52q" Dec 13 14:31:18.424631 kubelet[1550]: I1213 14:31:18.424590 1550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/04273870-b99d-4e76-8c46-82ae0dfdfa26-cilium-run\") pod \"cilium-7747k\" (UID: \"04273870-b99d-4e76-8c46-82ae0dfdfa26\") " pod="kube-system/cilium-7747k" Dec 13 14:31:18.424631 kubelet[1550]: I1213 14:31:18.424607 1550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/04273870-b99d-4e76-8c46-82ae0dfdfa26-cilium-cgroup\") pod \"cilium-7747k\" (UID: \"04273870-b99d-4e76-8c46-82ae0dfdfa26\") " pod="kube-system/cilium-7747k" Dec 13 14:31:18.424631 kubelet[1550]: I1213 14:31:18.424625 1550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/04273870-b99d-4e76-8c46-82ae0dfdfa26-host-proc-sys-net\") pod \"cilium-7747k\" (UID: \"04273870-b99d-4e76-8c46-82ae0dfdfa26\") " pod="kube-system/cilium-7747k" Dec 13 14:31:18.424763 kubelet[1550]: I1213 14:31:18.424641 1550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/04273870-b99d-4e76-8c46-82ae0dfdfa26-hostproc\") pod \"cilium-7747k\" (UID: \"04273870-b99d-4e76-8c46-82ae0dfdfa26\") " pod="kube-system/cilium-7747k" Dec 13 14:31:18.424763 kubelet[1550]: I1213 14:31:18.424686 1550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/04273870-b99d-4e76-8c46-82ae0dfdfa26-lib-modules\") pod \"cilium-7747k\" (UID: \"04273870-b99d-4e76-8c46-82ae0dfdfa26\") " pod="kube-system/cilium-7747k" Dec 13 14:31:18.424763 kubelet[1550]: I1213 14:31:18.424720 1550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/04273870-b99d-4e76-8c46-82ae0dfdfa26-cilium-config-path\") pod \"cilium-7747k\" (UID: \"04273870-b99d-4e76-8c46-82ae0dfdfa26\") " pod="kube-system/cilium-7747k" Dec 13 14:31:18.424841 kubelet[1550]: I1213 14:31:18.424766 1550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/04273870-b99d-4e76-8c46-82ae0dfdfa26-host-proc-sys-kernel\") pod \"cilium-7747k\" (UID: 
\"04273870-b99d-4e76-8c46-82ae0dfdfa26\") " pod="kube-system/cilium-7747k" Dec 13 14:31:18.424841 kubelet[1550]: I1213 14:31:18.424789 1550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/04273870-b99d-4e76-8c46-82ae0dfdfa26-hubble-tls\") pod \"cilium-7747k\" (UID: \"04273870-b99d-4e76-8c46-82ae0dfdfa26\") " pod="kube-system/cilium-7747k" Dec 13 14:31:18.424841 kubelet[1550]: I1213 14:31:18.424818 1550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlw7n\" (UniqueName: \"kubernetes.io/projected/04273870-b99d-4e76-8c46-82ae0dfdfa26-kube-api-access-zlw7n\") pod \"cilium-7747k\" (UID: \"04273870-b99d-4e76-8c46-82ae0dfdfa26\") " pod="kube-system/cilium-7747k" Dec 13 14:31:18.424841 kubelet[1550]: I1213 14:31:18.424840 1550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f81baa92-d67b-456b-aee4-f703991481d2-xtables-lock\") pod \"kube-proxy-fx52q\" (UID: \"f81baa92-d67b-456b-aee4-f703991481d2\") " pod="kube-system/kube-proxy-fx52q" Dec 13 14:31:18.424964 kubelet[1550]: I1213 14:31:18.424860 1550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/04273870-b99d-4e76-8c46-82ae0dfdfa26-bpf-maps\") pod \"cilium-7747k\" (UID: \"04273870-b99d-4e76-8c46-82ae0dfdfa26\") " pod="kube-system/cilium-7747k" Dec 13 14:31:18.424964 kubelet[1550]: I1213 14:31:18.424900 1550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/04273870-b99d-4e76-8c46-82ae0dfdfa26-cni-path\") pod \"cilium-7747k\" (UID: \"04273870-b99d-4e76-8c46-82ae0dfdfa26\") " pod="kube-system/cilium-7747k" Dec 13 14:31:18.424964 kubelet[1550]: I1213 14:31:18.424929 
1550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f81baa92-d67b-456b-aee4-f703991481d2-kube-proxy\") pod \"kube-proxy-fx52q\" (UID: \"f81baa92-d67b-456b-aee4-f703991481d2\") " pod="kube-system/kube-proxy-fx52q" Dec 13 14:31:18.425202 kubelet[1550]: I1213 14:31:18.424968 1550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktvj9\" (UniqueName: \"kubernetes.io/projected/f81baa92-d67b-456b-aee4-f703991481d2-kube-api-access-ktvj9\") pod \"kube-proxy-fx52q\" (UID: \"f81baa92-d67b-456b-aee4-f703991481d2\") " pod="kube-system/kube-proxy-fx52q" Dec 13 14:31:18.425350 kubelet[1550]: I1213 14:31:18.425329 1550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/04273870-b99d-4e76-8c46-82ae0dfdfa26-etc-cni-netd\") pod \"cilium-7747k\" (UID: \"04273870-b99d-4e76-8c46-82ae0dfdfa26\") " pod="kube-system/cilium-7747k" Dec 13 14:31:18.425405 kubelet[1550]: I1213 14:31:18.425396 1550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/04273870-b99d-4e76-8c46-82ae0dfdfa26-xtables-lock\") pod \"cilium-7747k\" (UID: \"04273870-b99d-4e76-8c46-82ae0dfdfa26\") " pod="kube-system/cilium-7747k" Dec 13 14:31:18.678569 sudo[1419]: pam_unix(sudo:session): session closed for user root Dec 13 14:31:18.679955 sshd[1414]: pam_unix(sshd:session): session closed for user core Dec 13 14:31:18.681972 systemd[1]: sshd@4-10.0.0.142:22-10.0.0.1:40168.service: Deactivated successfully. Dec 13 14:31:18.682888 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 14:31:18.683342 systemd-logind[1286]: Session 5 logged out. Waiting for processes to exit. Dec 13 14:31:18.683953 systemd-logind[1286]: Removed session 5. 
Dec 13 14:31:18.708215 kubelet[1550]: E1213 14:31:18.708187 1550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:31:18.708304 kubelet[1550]: E1213 14:31:18.708221 1550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:31:18.708861 env[1304]: time="2024-12-13T14:31:18.708828874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7747k,Uid:04273870-b99d-4e76-8c46-82ae0dfdfa26,Namespace:kube-system,Attempt:0,}" Dec 13 14:31:18.708924 env[1304]: time="2024-12-13T14:31:18.708889096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fx52q,Uid:f81baa92-d67b-456b-aee4-f703991481d2,Namespace:kube-system,Attempt:0,}" Dec 13 14:31:19.403958 kubelet[1550]: E1213 14:31:19.403918 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:31:19.938606 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1559631861.mount: Deactivated successfully. 
Dec 13 14:31:19.944393 env[1304]: time="2024-12-13T14:31:19.944349252Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:19.947040 env[1304]: time="2024-12-13T14:31:19.946997711Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:19.948539 env[1304]: time="2024-12-13T14:31:19.948503174Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:19.949881 env[1304]: time="2024-12-13T14:31:19.949846713Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:19.951099 env[1304]: time="2024-12-13T14:31:19.951051387Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:19.952340 env[1304]: time="2024-12-13T14:31:19.952310458Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:19.953678 env[1304]: time="2024-12-13T14:31:19.953639404Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:19.954989 env[1304]: time="2024-12-13T14:31:19.954957821Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:19.979920 env[1304]: time="2024-12-13T14:31:19.979850389Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:31:19.979920 env[1304]: time="2024-12-13T14:31:19.979903650Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:31:19.979920 env[1304]: time="2024-12-13T14:31:19.979918514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:31:19.980144 env[1304]: time="2024-12-13T14:31:19.980103177Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/75017e27c1495938c4d1be36dec7821f90245cb17f922094adb10d421adb173a pid=1608 runtime=io.containerd.runc.v2 Dec 13 14:31:19.985343 env[1304]: time="2024-12-13T14:31:19.985283607Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:31:19.985343 env[1304]: time="2024-12-13T14:31:19.985319580Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:31:19.985343 env[1304]: time="2024-12-13T14:31:19.985330280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:31:19.985523 env[1304]: time="2024-12-13T14:31:19.985491009Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a89693f4eb910abb11c7a264ad8185ef5cebcf9a0489d143fa772919c9f03120 pid=1626 runtime=io.containerd.runc.v2 Dec 13 14:31:20.104706 env[1304]: time="2024-12-13T14:31:20.104660923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fx52q,Uid:f81baa92-d67b-456b-aee4-f703991481d2,Namespace:kube-system,Attempt:0,} returns sandbox id \"75017e27c1495938c4d1be36dec7821f90245cb17f922094adb10d421adb173a\"" Dec 13 14:31:20.105587 kubelet[1550]: E1213 14:31:20.105566 1550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:31:20.106531 env[1304]: time="2024-12-13T14:31:20.106505162Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 14:31:20.113130 env[1304]: time="2024-12-13T14:31:20.113095609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7747k,Uid:04273870-b99d-4e76-8c46-82ae0dfdfa26,Namespace:kube-system,Attempt:0,} returns sandbox id \"a89693f4eb910abb11c7a264ad8185ef5cebcf9a0489d143fa772919c9f03120\"" Dec 13 14:31:20.113850 kubelet[1550]: E1213 14:31:20.113824 1550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:31:20.404333 kubelet[1550]: E1213 14:31:20.404198 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:31:21.332276 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3719412892.mount: Deactivated successfully. 
Dec 13 14:31:21.404442 kubelet[1550]: E1213 14:31:21.404384 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:31:22.112740 env[1304]: time="2024-12-13T14:31:22.112678811Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:22.114334 env[1304]: time="2024-12-13T14:31:22.114281221Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:22.115605 env[1304]: time="2024-12-13T14:31:22.115555516Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:22.117341 env[1304]: time="2024-12-13T14:31:22.117307957Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:22.117809 env[1304]: time="2024-12-13T14:31:22.117767351Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Dec 13 14:31:22.118499 env[1304]: time="2024-12-13T14:31:22.118439493Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 14:31:22.120049 env[1304]: time="2024-12-13T14:31:22.120011509Z" level=info msg="CreateContainer within sandbox \"75017e27c1495938c4d1be36dec7821f90245cb17f922094adb10d421adb173a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 14:31:22.133226 env[1304]: 
time="2024-12-13T14:31:22.133168589Z" level=info msg="CreateContainer within sandbox \"75017e27c1495938c4d1be36dec7821f90245cb17f922094adb10d421adb173a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"19218d96f6309a8dfe3c783ca1b2f54e683621af7ad1718106bed66c14753b03\"" Dec 13 14:31:22.133789 env[1304]: time="2024-12-13T14:31:22.133736241Z" level=info msg="StartContainer for \"19218d96f6309a8dfe3c783ca1b2f54e683621af7ad1718106bed66c14753b03\"" Dec 13 14:31:22.268775 env[1304]: time="2024-12-13T14:31:22.268719409Z" level=info msg="StartContainer for \"19218d96f6309a8dfe3c783ca1b2f54e683621af7ad1718106bed66c14753b03\" returns successfully" Dec 13 14:31:22.405278 kubelet[1550]: E1213 14:31:22.405128 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:31:23.142122 kubelet[1550]: E1213 14:31:23.142095 1550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:31:23.251967 kubelet[1550]: I1213 14:31:23.251930 1550 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-fx52q" podStartSLOduration=4.239835006 podStartE2EDuration="6.251878109s" podCreationTimestamp="2024-12-13 14:31:17 +0000 UTC" firstStartedPulling="2024-12-13 14:31:20.106123849 +0000 UTC m=+3.039599797" lastFinishedPulling="2024-12-13 14:31:22.118166942 +0000 UTC m=+5.051642900" observedRunningTime="2024-12-13 14:31:23.251139327 +0000 UTC m=+6.184615275" watchObservedRunningTime="2024-12-13 14:31:23.251878109 +0000 UTC m=+6.185354057" Dec 13 14:31:23.405916 kubelet[1550]: E1213 14:31:23.405808 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:31:24.143192 kubelet[1550]: E1213 14:31:24.143140 1550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:31:24.406872 kubelet[1550]: E1213 14:31:24.406720 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:31:25.406882 kubelet[1550]: E1213 14:31:25.406839 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:31:26.407808 kubelet[1550]: E1213 14:31:26.407769 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:31:27.408723 kubelet[1550]: E1213 14:31:27.408670 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:31:28.409748 kubelet[1550]: E1213 14:31:28.409703 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:31:28.731959 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount672592577.mount: Deactivated successfully. 
Dec 13 14:31:29.410318 kubelet[1550]: E1213 14:31:29.410288 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:31:30.410797 kubelet[1550]: E1213 14:31:30.410736 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:31:31.411786 kubelet[1550]: E1213 14:31:31.411726 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:31:32.412866 kubelet[1550]: E1213 14:31:32.412827 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:31:32.564338 env[1304]: time="2024-12-13T14:31:32.564283781Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:32.566111 env[1304]: time="2024-12-13T14:31:32.566067025Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:32.567547 env[1304]: time="2024-12-13T14:31:32.567500510Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:32.567953 env[1304]: time="2024-12-13T14:31:32.567920327Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 14:31:32.569494 env[1304]: time="2024-12-13T14:31:32.569459480Z" level=info 
msg="CreateContainer within sandbox \"a89693f4eb910abb11c7a264ad8185ef5cebcf9a0489d143fa772919c9f03120\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:31:32.580793 env[1304]: time="2024-12-13T14:31:32.580756972Z" level=info msg="CreateContainer within sandbox \"a89693f4eb910abb11c7a264ad8185ef5cebcf9a0489d143fa772919c9f03120\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9d6a2589d42ed1c891e9ec141a8c7120989f5ca22674faebb7673fa06aa2d80d\"" Dec 13 14:31:32.581158 env[1304]: time="2024-12-13T14:31:32.581134142Z" level=info msg="StartContainer for \"9d6a2589d42ed1c891e9ec141a8c7120989f5ca22674faebb7673fa06aa2d80d\"" Dec 13 14:31:32.660184 env[1304]: time="2024-12-13T14:31:32.660131221Z" level=info msg="StartContainer for \"9d6a2589d42ed1c891e9ec141a8c7120989f5ca22674faebb7673fa06aa2d80d\" returns successfully" Dec 13 14:31:33.161184 kubelet[1550]: E1213 14:31:33.161132 1550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:31:33.222693 env[1304]: time="2024-12-13T14:31:33.222635045Z" level=info msg="shim disconnected" id=9d6a2589d42ed1c891e9ec141a8c7120989f5ca22674faebb7673fa06aa2d80d Dec 13 14:31:33.222693 env[1304]: time="2024-12-13T14:31:33.222688948Z" level=warning msg="cleaning up after shim disconnected" id=9d6a2589d42ed1c891e9ec141a8c7120989f5ca22674faebb7673fa06aa2d80d namespace=k8s.io Dec 13 14:31:33.222693 env[1304]: time="2024-12-13T14:31:33.222698712Z" level=info msg="cleaning up dead shim" Dec 13 14:31:33.228917 env[1304]: time="2024-12-13T14:31:33.228877456Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:31:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1895 runtime=io.containerd.runc.v2\n" Dec 13 14:31:33.413089 kubelet[1550]: E1213 14:31:33.412968 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Dec 13 14:31:33.576394 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9d6a2589d42ed1c891e9ec141a8c7120989f5ca22674faebb7673fa06aa2d80d-rootfs.mount: Deactivated successfully. Dec 13 14:31:34.163761 kubelet[1550]: E1213 14:31:34.163737 1550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:31:34.165513 env[1304]: time="2024-12-13T14:31:34.165451976Z" level=info msg="CreateContainer within sandbox \"a89693f4eb910abb11c7a264ad8185ef5cebcf9a0489d143fa772919c9f03120\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 14:31:34.185037 env[1304]: time="2024-12-13T14:31:34.184982638Z" level=info msg="CreateContainer within sandbox \"a89693f4eb910abb11c7a264ad8185ef5cebcf9a0489d143fa772919c9f03120\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d04dab69a2053eb1afe6459dfde54f97ca2a0983bcea2ba73dcff48ddb988ec7\"" Dec 13 14:31:34.185601 env[1304]: time="2024-12-13T14:31:34.185560848Z" level=info msg="StartContainer for \"d04dab69a2053eb1afe6459dfde54f97ca2a0983bcea2ba73dcff48ddb988ec7\"" Dec 13 14:31:34.266126 env[1304]: time="2024-12-13T14:31:34.266081990Z" level=info msg="StartContainer for \"d04dab69a2053eb1afe6459dfde54f97ca2a0983bcea2ba73dcff48ddb988ec7\" returns successfully" Dec 13 14:31:34.274420 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 14:31:34.274666 systemd[1]: Stopped systemd-sysctl.service. Dec 13 14:31:34.274834 systemd[1]: Stopping systemd-sysctl.service... Dec 13 14:31:34.276276 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:31:34.284055 systemd[1]: Finished systemd-sysctl.service. 
Dec 13 14:31:34.296885 env[1304]: time="2024-12-13T14:31:34.296836384Z" level=info msg="shim disconnected" id=d04dab69a2053eb1afe6459dfde54f97ca2a0983bcea2ba73dcff48ddb988ec7
Dec 13 14:31:34.296885 env[1304]: time="2024-12-13T14:31:34.296882594Z" level=warning msg="cleaning up after shim disconnected" id=d04dab69a2053eb1afe6459dfde54f97ca2a0983bcea2ba73dcff48ddb988ec7 namespace=k8s.io
Dec 13 14:31:34.297021 env[1304]: time="2024-12-13T14:31:34.296891245Z" level=info msg="cleaning up dead shim"
Dec 13 14:31:34.302344 env[1304]: time="2024-12-13T14:31:34.302310676Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:31:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1962 runtime=io.containerd.runc.v2\n"
Dec 13 14:31:34.414237 kubelet[1550]: E1213 14:31:34.414067 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:31:34.576581 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d04dab69a2053eb1afe6459dfde54f97ca2a0983bcea2ba73dcff48ddb988ec7-rootfs.mount: Deactivated successfully.
Dec 13 14:31:35.166323 kubelet[1550]: E1213 14:31:35.166296 1550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:31:35.167976 env[1304]: time="2024-12-13T14:31:35.167915886Z" level=info msg="CreateContainer within sandbox \"a89693f4eb910abb11c7a264ad8185ef5cebcf9a0489d143fa772919c9f03120\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 14:31:35.183715 env[1304]: time="2024-12-13T14:31:35.183663216Z" level=info msg="CreateContainer within sandbox \"a89693f4eb910abb11c7a264ad8185ef5cebcf9a0489d143fa772919c9f03120\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7dee5ce13d8e27031fbe4722bf96f4294ef4ffa1e3f6db6518b921bb69a6fc5f\""
Dec 13 14:31:35.184251 env[1304]: time="2024-12-13T14:31:35.184216260Z" level=info msg="StartContainer for \"7dee5ce13d8e27031fbe4722bf96f4294ef4ffa1e3f6db6518b921bb69a6fc5f\""
Dec 13 14:31:35.237383 env[1304]: time="2024-12-13T14:31:35.237339297Z" level=info msg="StartContainer for \"7dee5ce13d8e27031fbe4722bf96f4294ef4ffa1e3f6db6518b921bb69a6fc5f\" returns successfully"
Dec 13 14:31:35.256185 env[1304]: time="2024-12-13T14:31:35.256141301Z" level=info msg="shim disconnected" id=7dee5ce13d8e27031fbe4722bf96f4294ef4ffa1e3f6db6518b921bb69a6fc5f
Dec 13 14:31:35.256424 env[1304]: time="2024-12-13T14:31:35.256388035Z" level=warning msg="cleaning up after shim disconnected" id=7dee5ce13d8e27031fbe4722bf96f4294ef4ffa1e3f6db6518b921bb69a6fc5f namespace=k8s.io
Dec 13 14:31:35.256424 env[1304]: time="2024-12-13T14:31:35.256407429Z" level=info msg="cleaning up dead shim"
Dec 13 14:31:35.264277 env[1304]: time="2024-12-13T14:31:35.264201971Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:31:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2019 runtime=io.containerd.runc.v2\n"
Dec 13 14:31:35.414710 kubelet[1550]: E1213 14:31:35.414674 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:31:35.576799 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7dee5ce13d8e27031fbe4722bf96f4294ef4ffa1e3f6db6518b921bb69a6fc5f-rootfs.mount: Deactivated successfully.
Dec 13 14:31:36.169019 kubelet[1550]: E1213 14:31:36.168993 1550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:31:36.171034 env[1304]: time="2024-12-13T14:31:36.170985250Z" level=info msg="CreateContainer within sandbox \"a89693f4eb910abb11c7a264ad8185ef5cebcf9a0489d143fa772919c9f03120\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 14:31:36.185895 env[1304]: time="2024-12-13T14:31:36.185845991Z" level=info msg="CreateContainer within sandbox \"a89693f4eb910abb11c7a264ad8185ef5cebcf9a0489d143fa772919c9f03120\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"434fb43126bcffb5a2b618e8e74ebffbcd1546fed9e7adfe17a2ebaf1decb979\""
Dec 13 14:31:36.186391 env[1304]: time="2024-12-13T14:31:36.186350495Z" level=info msg="StartContainer for \"434fb43126bcffb5a2b618e8e74ebffbcd1546fed9e7adfe17a2ebaf1decb979\""
Dec 13 14:31:36.236934 env[1304]: time="2024-12-13T14:31:36.236885580Z" level=info msg="StartContainer for \"434fb43126bcffb5a2b618e8e74ebffbcd1546fed9e7adfe17a2ebaf1decb979\" returns successfully"
Dec 13 14:31:36.253039 env[1304]: time="2024-12-13T14:31:36.252982062Z" level=info msg="shim disconnected" id=434fb43126bcffb5a2b618e8e74ebffbcd1546fed9e7adfe17a2ebaf1decb979
Dec 13 14:31:36.253039 env[1304]: time="2024-12-13T14:31:36.253031524Z" level=warning msg="cleaning up after shim disconnected" id=434fb43126bcffb5a2b618e8e74ebffbcd1546fed9e7adfe17a2ebaf1decb979 namespace=k8s.io
Dec 13 14:31:36.253039 env[1304]: time="2024-12-13T14:31:36.253041196Z" level=info msg="cleaning up dead shim"
Dec 13 14:31:36.259738 env[1304]: time="2024-12-13T14:31:36.259714751Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:31:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2074 runtime=io.containerd.runc.v2\n"
Dec 13 14:31:36.415069 kubelet[1550]: E1213 14:31:36.415013 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:31:36.576664 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-434fb43126bcffb5a2b618e8e74ebffbcd1546fed9e7adfe17a2ebaf1decb979-rootfs.mount: Deactivated successfully.
Dec 13 14:31:37.172399 kubelet[1550]: E1213 14:31:37.172358 1550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:31:37.174176 env[1304]: time="2024-12-13T14:31:37.174127682Z" level=info msg="CreateContainer within sandbox \"a89693f4eb910abb11c7a264ad8185ef5cebcf9a0489d143fa772919c9f03120\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 14:31:37.191223 env[1304]: time="2024-12-13T14:31:37.191177480Z" level=info msg="CreateContainer within sandbox \"a89693f4eb910abb11c7a264ad8185ef5cebcf9a0489d143fa772919c9f03120\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"288069de0453f5821d2d6f6bd51b8804291c329e426cc2944017d2b070796fcf\""
Dec 13 14:31:37.191595 env[1304]: time="2024-12-13T14:31:37.191565521Z" level=info msg="StartContainer for \"288069de0453f5821d2d6f6bd51b8804291c329e426cc2944017d2b070796fcf\""
Dec 13 14:31:37.244747 env[1304]: time="2024-12-13T14:31:37.244690609Z" level=info msg="StartContainer for \"288069de0453f5821d2d6f6bd51b8804291c329e426cc2944017d2b070796fcf\" returns successfully"
Dec 13 14:31:37.385442 kubelet[1550]: I1213 14:31:37.385141 1550 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Dec 13 14:31:37.402430 kubelet[1550]: E1213 14:31:37.402381 1550 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:31:37.415753 kubelet[1550]: E1213 14:31:37.415675 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:31:37.563287 kernel: Initializing XFRM netlink socket
Dec 13 14:31:38.175839 kubelet[1550]: E1213 14:31:38.175802 1550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:31:38.187425 kubelet[1550]: I1213 14:31:38.187400 1550 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-7747k" podStartSLOduration=8.733495225 podStartE2EDuration="21.18736851s" podCreationTimestamp="2024-12-13 14:31:17 +0000 UTC" firstStartedPulling="2024-12-13 14:31:20.11423131 +0000 UTC m=+3.047707258" lastFinishedPulling="2024-12-13 14:31:32.568104595 +0000 UTC m=+15.501580543" observedRunningTime="2024-12-13 14:31:38.187135964 +0000 UTC m=+21.120611912" watchObservedRunningTime="2024-12-13 14:31:38.18736851 +0000 UTC m=+21.120844458"
Dec 13 14:31:38.416175 kubelet[1550]: E1213 14:31:38.416101 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:31:39.177230 kubelet[1550]: E1213 14:31:39.177202 1550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:31:39.232401 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Dec 13 14:31:39.232581 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Dec 13 14:31:39.233186 systemd-networkd[1085]: cilium_host: Link UP
Dec 13 14:31:39.233392 systemd-networkd[1085]: cilium_net: Link UP
Dec 13 14:31:39.233585 systemd-networkd[1085]: cilium_net: Gained carrier
Dec 13 14:31:39.233773 systemd-networkd[1085]: cilium_host: Gained carrier
Dec 13 14:31:39.307807 systemd-networkd[1085]: cilium_vxlan: Link UP
Dec 13 14:31:39.307815 systemd-networkd[1085]: cilium_vxlan: Gained carrier
Dec 13 14:31:39.417182 kubelet[1550]: E1213 14:31:39.417125 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:31:39.517461 systemd-networkd[1085]: cilium_net: Gained IPv6LL
Dec 13 14:31:39.523293 kernel: NET: Registered PF_ALG protocol family
Dec 13 14:31:39.668407 systemd-networkd[1085]: cilium_host: Gained IPv6LL
Dec 13 14:31:39.809562 kubelet[1550]: I1213 14:31:39.796691 1550 topology_manager.go:215] "Topology Admit Handler" podUID="5f4340cd-1e58-4e36-9051-5acf873f7786" podNamespace="default" podName="nginx-deployment-6d5f899847-9nfr5"
Dec 13 14:31:39.856883 kubelet[1550]: I1213 14:31:39.856846 1550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfthr\" (UniqueName: \"kubernetes.io/projected/5f4340cd-1e58-4e36-9051-5acf873f7786-kube-api-access-sfthr\") pod \"nginx-deployment-6d5f899847-9nfr5\" (UID: \"5f4340cd-1e58-4e36-9051-5acf873f7786\") " pod="default/nginx-deployment-6d5f899847-9nfr5"
Dec 13 14:31:40.101978 env[1304]: time="2024-12-13T14:31:40.101676281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-9nfr5,Uid:5f4340cd-1e58-4e36-9051-5acf873f7786,Namespace:default,Attempt:0,}"
Dec 13 14:31:40.178703 kubelet[1550]: E1213 14:31:40.178670 1550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:31:40.203862 systemd-networkd[1085]: lxc_health: Link UP
Dec 13 14:31:40.208001 systemd-networkd[1085]: lxc_health: Gained carrier
Dec 13 14:31:40.208288 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 14:31:40.417584 kubelet[1550]: E1213 14:31:40.417452 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:31:40.634396 systemd-networkd[1085]: lxcf493421c8866: Link UP
Dec 13 14:31:40.643308 kernel: eth0: renamed from tmpc6150
Dec 13 14:31:40.648622 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 14:31:40.648712 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcf493421c8866: link becomes ready
Dec 13 14:31:40.648806 systemd-networkd[1085]: lxcf493421c8866: Gained carrier
Dec 13 14:31:40.908474 systemd-networkd[1085]: cilium_vxlan: Gained IPv6LL
Dec 13 14:31:41.418510 kubelet[1550]: E1213 14:31:41.418441 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:31:41.676481 systemd-networkd[1085]: lxc_health: Gained IPv6LL
Dec 13 14:31:41.852657 kubelet[1550]: E1213 14:31:41.852627 1550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:31:42.182231 kubelet[1550]: E1213 14:31:42.182193 1550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:31:42.380487 systemd-networkd[1085]: lxcf493421c8866: Gained IPv6LL
Dec 13 14:31:42.418641 kubelet[1550]: E1213 14:31:42.418590 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:31:43.418830 kubelet[1550]: E1213 14:31:43.418744 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:31:44.192799 env[1304]: time="2024-12-13T14:31:44.192704596Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:31:44.192799 env[1304]: time="2024-12-13T14:31:44.192764440Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:31:44.192799 env[1304]: time="2024-12-13T14:31:44.192777628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:31:44.193550 env[1304]: time="2024-12-13T14:31:44.193454021Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c6150f37e765f9a4f118b2f034b520ee3b8e887944fb943cada9f6c64150e9f6 pid=2612 runtime=io.containerd.runc.v2
Dec 13 14:31:44.206124 systemd[1]: run-containerd-runc-k8s.io-c6150f37e765f9a4f118b2f034b520ee3b8e887944fb943cada9f6c64150e9f6-runc.hkGyIZ.mount: Deactivated successfully.
Dec 13 14:31:44.218492 systemd-resolved[1223]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 13 14:31:44.239684 env[1304]: time="2024-12-13T14:31:44.239627722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-9nfr5,Uid:5f4340cd-1e58-4e36-9051-5acf873f7786,Namespace:default,Attempt:0,} returns sandbox id \"c6150f37e765f9a4f118b2f034b520ee3b8e887944fb943cada9f6c64150e9f6\""
Dec 13 14:31:44.241129 env[1304]: time="2024-12-13T14:31:44.241092612Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Dec 13 14:31:44.419888 kubelet[1550]: E1213 14:31:44.419822 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:31:45.420759 kubelet[1550]: E1213 14:31:45.420698 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:31:46.421173 kubelet[1550]: E1213 14:31:46.421103 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:31:47.421510 kubelet[1550]: E1213 14:31:47.421471 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:31:48.127424 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2138110862.mount: Deactivated successfully.
Dec 13 14:31:48.422664 kubelet[1550]: E1213 14:31:48.422542 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:31:49.422952 kubelet[1550]: E1213 14:31:49.422882 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:31:50.423443 kubelet[1550]: E1213 14:31:50.423364 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:31:50.563225 env[1304]: time="2024-12-13T14:31:50.563148643Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:31:50.564930 env[1304]: time="2024-12-13T14:31:50.564882355Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:31:50.566545 env[1304]: time="2024-12-13T14:31:50.566518890Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:31:50.568324 env[1304]: time="2024-12-13T14:31:50.568293454Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:31:50.568970 env[1304]: time="2024-12-13T14:31:50.568936643Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\""
Dec 13 14:31:50.570432 env[1304]: time="2024-12-13T14:31:50.570400239Z" level=info msg="CreateContainer within sandbox \"c6150f37e765f9a4f118b2f034b520ee3b8e887944fb943cada9f6c64150e9f6\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Dec 13 14:31:50.581841 env[1304]: time="2024-12-13T14:31:50.581792478Z" level=info msg="CreateContainer within sandbox \"c6150f37e765f9a4f118b2f034b520ee3b8e887944fb943cada9f6c64150e9f6\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"2ebeac472ba4ee0dfe12bb887dca82b7d57559894fab0a5c14900d10659a460b\""
Dec 13 14:31:50.582327 env[1304]: time="2024-12-13T14:31:50.582289752Z" level=info msg="StartContainer for \"2ebeac472ba4ee0dfe12bb887dca82b7d57559894fab0a5c14900d10659a460b\""
Dec 13 14:31:50.656658 env[1304]: time="2024-12-13T14:31:50.656600526Z" level=info msg="StartContainer for \"2ebeac472ba4ee0dfe12bb887dca82b7d57559894fab0a5c14900d10659a460b\" returns successfully"
Dec 13 14:31:51.424374 kubelet[1550]: E1213 14:31:51.424325 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:31:52.099182 update_engine[1292]: I1213 14:31:52.099119 1292 update_attempter.cc:509] Updating boot flags...
Dec 13 14:31:52.424857 kubelet[1550]: E1213 14:31:52.424740 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:31:53.425277 kubelet[1550]: E1213 14:31:53.425208 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:31:54.425622 kubelet[1550]: E1213 14:31:54.425565 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:31:55.426188 kubelet[1550]: E1213 14:31:55.426138 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:31:56.426829 kubelet[1550]: E1213 14:31:56.426759 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:31:57.081130 kubelet[1550]: I1213 14:31:57.081087 1550 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-9nfr5" podStartSLOduration=11.752579554 podStartE2EDuration="18.081037755s" podCreationTimestamp="2024-12-13 14:31:39 +0000 UTC" firstStartedPulling="2024-12-13 14:31:44.24075466 +0000 UTC m=+27.174230598" lastFinishedPulling="2024-12-13 14:31:50.569212861 +0000 UTC m=+33.502688799" observedRunningTime="2024-12-13 14:31:51.238198931 +0000 UTC m=+34.171674889" watchObservedRunningTime="2024-12-13 14:31:57.081037755 +0000 UTC m=+40.014513703"
Dec 13 14:31:57.081420 kubelet[1550]: I1213 14:31:57.081399 1550 topology_manager.go:215] "Topology Admit Handler" podUID="c08fba72-8fb3-4d75-b2cd-9a47d43a96a2" podNamespace="default" podName="nfs-server-provisioner-0"
Dec 13 14:31:57.150221 kubelet[1550]: I1213 14:31:57.150195 1550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/c08fba72-8fb3-4d75-b2cd-9a47d43a96a2-data\") pod \"nfs-server-provisioner-0\" (UID: \"c08fba72-8fb3-4d75-b2cd-9a47d43a96a2\") " pod="default/nfs-server-provisioner-0"
Dec 13 14:31:57.150353 kubelet[1550]: I1213 14:31:57.150230 1550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhv67\" (UniqueName: \"kubernetes.io/projected/c08fba72-8fb3-4d75-b2cd-9a47d43a96a2-kube-api-access-nhv67\") pod \"nfs-server-provisioner-0\" (UID: \"c08fba72-8fb3-4d75-b2cd-9a47d43a96a2\") " pod="default/nfs-server-provisioner-0"
Dec 13 14:31:57.384697 env[1304]: time="2024-12-13T14:31:57.384615515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:c08fba72-8fb3-4d75-b2cd-9a47d43a96a2,Namespace:default,Attempt:0,}"
Dec 13 14:31:57.402861 kubelet[1550]: E1213 14:31:57.402810 1550 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:31:57.412946 systemd-networkd[1085]: lxc45d01f1609ff: Link UP
Dec 13 14:31:57.419301 kernel: eth0: renamed from tmp959cd
Dec 13 14:31:57.427099 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 14:31:57.427176 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc45d01f1609ff: link becomes ready
Dec 13 14:31:57.427211 kubelet[1550]: E1213 14:31:57.427177 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:31:57.427328 systemd-networkd[1085]: lxc45d01f1609ff: Gained carrier
Dec 13 14:31:57.654370 env[1304]: time="2024-12-13T14:31:57.654234310Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:31:57.654370 env[1304]: time="2024-12-13T14:31:57.654274890Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:31:57.654575 env[1304]: time="2024-12-13T14:31:57.654289719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:31:57.654575 env[1304]: time="2024-12-13T14:31:57.654397392Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/959cdedd6ff910856ad14b85821dc0aff7e4a390aef98c560caa18f8e94a80d4 pid=2754 runtime=io.containerd.runc.v2
Dec 13 14:31:57.691687 systemd-resolved[1223]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 13 14:31:57.719044 env[1304]: time="2024-12-13T14:31:57.718994741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:c08fba72-8fb3-4d75-b2cd-9a47d43a96a2,Namespace:default,Attempt:0,} returns sandbox id \"959cdedd6ff910856ad14b85821dc0aff7e4a390aef98c560caa18f8e94a80d4\""
Dec 13 14:31:57.720153 env[1304]: time="2024-12-13T14:31:57.720134085Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Dec 13 14:31:58.427739 kubelet[1550]: E1213 14:31:58.427697 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:31:59.341481 systemd-networkd[1085]: lxc45d01f1609ff: Gained IPv6LL
Dec 13 14:31:59.428604 kubelet[1550]: E1213 14:31:59.428574 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:32:00.429597 kubelet[1550]: E1213 14:32:00.429540 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:32:00.710042 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2747288420.mount: Deactivated successfully.
Dec 13 14:32:01.430106 kubelet[1550]: E1213 14:32:01.430064 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:32:02.431153 kubelet[1550]: E1213 14:32:02.431101 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:32:03.072584 env[1304]: time="2024-12-13T14:32:03.072536130Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:32:03.074283 env[1304]: time="2024-12-13T14:32:03.074236621Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:32:03.076031 env[1304]: time="2024-12-13T14:32:03.076003553Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:32:03.077534 env[1304]: time="2024-12-13T14:32:03.077503934Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:32:03.078155 env[1304]: time="2024-12-13T14:32:03.078129787Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\""
Dec 13 14:32:03.079970 env[1304]: time="2024-12-13T14:32:03.079937648Z" level=info msg="CreateContainer within sandbox \"959cdedd6ff910856ad14b85821dc0aff7e4a390aef98c560caa18f8e94a80d4\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Dec 13 14:32:03.092741 env[1304]: time="2024-12-13T14:32:03.092699179Z" level=info msg="CreateContainer within sandbox \"959cdedd6ff910856ad14b85821dc0aff7e4a390aef98c560caa18f8e94a80d4\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"81fafb1afa1d273639e2b6abb53a931638788e99b732ab1a13a14661b820c295\""
Dec 13 14:32:03.093389 env[1304]: time="2024-12-13T14:32:03.093359969Z" level=info msg="StartContainer for \"81fafb1afa1d273639e2b6abb53a931638788e99b732ab1a13a14661b820c295\""
Dec 13 14:32:03.109241 systemd[1]: run-containerd-runc-k8s.io-81fafb1afa1d273639e2b6abb53a931638788e99b732ab1a13a14661b820c295-runc.a6PBal.mount: Deactivated successfully.
Dec 13 14:32:03.133628 env[1304]: time="2024-12-13T14:32:03.133583556Z" level=info msg="StartContainer for \"81fafb1afa1d273639e2b6abb53a931638788e99b732ab1a13a14661b820c295\" returns successfully"
Dec 13 14:32:03.232789 kubelet[1550]: I1213 14:32:03.232748 1550 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=0.87421637 podStartE2EDuration="6.232693468s" podCreationTimestamp="2024-12-13 14:31:57 +0000 UTC" firstStartedPulling="2024-12-13 14:31:57.719923087 +0000 UTC m=+40.653399035" lastFinishedPulling="2024-12-13 14:32:03.078400175 +0000 UTC m=+46.011876133" observedRunningTime="2024-12-13 14:32:03.23236675 +0000 UTC m=+46.165842688" watchObservedRunningTime="2024-12-13 14:32:03.232693468 +0000 UTC m=+46.166169456"
Dec 13 14:32:03.432041 kubelet[1550]: E1213 14:32:03.432011 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:32:04.432840 kubelet[1550]: E1213 14:32:04.432793 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:32:05.433661 kubelet[1550]: E1213 14:32:05.433592 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:32:06.434620 kubelet[1550]: E1213 14:32:06.434545 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:32:07.435419 kubelet[1550]: E1213 14:32:07.435360 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:32:08.436186 kubelet[1550]: E1213 14:32:08.436135 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:32:09.436334 kubelet[1550]: E1213 14:32:09.436285 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:32:10.437017 kubelet[1550]: E1213 14:32:10.436976 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:32:11.437962 kubelet[1550]: E1213 14:32:11.437890 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:32:12.439002 kubelet[1550]: E1213 14:32:12.438952 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:32:12.529737 kubelet[1550]: I1213 14:32:12.529680 1550 topology_manager.go:215] "Topology Admit Handler" podUID="e1cc712f-8787-4414-b174-e67b335e5e4a" podNamespace="default" podName="test-pod-1"
Dec 13 14:32:12.635032 kubelet[1550]: I1213 14:32:12.635001 1550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6751691f-59bc-4508-acac-a75245c80b0a\" (UniqueName: \"kubernetes.io/nfs/e1cc712f-8787-4414-b174-e67b335e5e4a-pvc-6751691f-59bc-4508-acac-a75245c80b0a\") pod \"test-pod-1\" (UID: \"e1cc712f-8787-4414-b174-e67b335e5e4a\") " pod="default/test-pod-1"
Dec 13 14:32:12.635202 kubelet[1550]: I1213 14:32:12.635050 1550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knvkk\" (UniqueName: \"kubernetes.io/projected/e1cc712f-8787-4414-b174-e67b335e5e4a-kube-api-access-knvkk\") pod \"test-pod-1\" (UID: \"e1cc712f-8787-4414-b174-e67b335e5e4a\") " pod="default/test-pod-1"
Dec 13 14:32:12.755286 kernel: FS-Cache: Loaded
Dec 13 14:32:12.796907 kernel: RPC: Registered named UNIX socket transport module.
Dec 13 14:32:12.797061 kernel: RPC: Registered udp transport module.
Dec 13 14:32:12.797097 kernel: RPC: Registered tcp transport module.
Dec 13 14:32:12.797124 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec 13 14:32:12.852284 kernel: FS-Cache: Netfs 'nfs' registered for caching
Dec 13 14:32:13.030921 kernel: NFS: Registering the id_resolver key type
Dec 13 14:32:13.031042 kernel: Key type id_resolver registered
Dec 13 14:32:13.031065 kernel: Key type id_legacy registered
Dec 13 14:32:13.053320 nfsidmap[2871]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Dec 13 14:32:13.056773 nfsidmap[2874]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Dec 13 14:32:13.133126 env[1304]: time="2024-12-13T14:32:13.133080617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:e1cc712f-8787-4414-b174-e67b335e5e4a,Namespace:default,Attempt:0,}"
Dec 13 14:32:13.249266 systemd-networkd[1085]: lxc6c180192efd8: Link UP
Dec 13 14:32:13.260342 kernel: eth0: renamed from tmp47638
Dec 13 14:32:13.267321 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 14:32:13.267378 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc6c180192efd8: link becomes ready
Dec 13 14:32:13.267545 systemd-networkd[1085]: lxc6c180192efd8: Gained carrier
Dec 13 14:32:13.439684 kubelet[1550]: E1213 14:32:13.439648 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:32:13.834929 env[1304]: time="2024-12-13T14:32:13.834847107Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:32:13.835140 env[1304]: time="2024-12-13T14:32:13.834898847Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:32:13.835140 env[1304]: time="2024-12-13T14:32:13.835115765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:32:13.835491 env[1304]: time="2024-12-13T14:32:13.835437315Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/47638b8f7baf5b2763ec75d2aaa3d6d069c7baf1e7a2b4757a99d1b4f94cca35 pid=2908 runtime=io.containerd.runc.v2
Dec 13 14:32:13.856405 systemd-resolved[1223]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 13 14:32:13.877369 env[1304]: time="2024-12-13T14:32:13.877323137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:e1cc712f-8787-4414-b174-e67b335e5e4a,Namespace:default,Attempt:0,} returns sandbox id \"47638b8f7baf5b2763ec75d2aaa3d6d069c7baf1e7a2b4757a99d1b4f94cca35\""
Dec 13 14:32:13.879339 env[1304]: time="2024-12-13T14:32:13.879316283Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Dec 13 14:32:14.225763 env[1304]: time="2024-12-13T14:32:14.225640986Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:32:14.228421 env[1304]: time="2024-12-13T14:32:14.228391268Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:32:14.229951 env[1304]: time="2024-12-13T14:32:14.229922429Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:32:14.231552 env[1304]: time="2024-12-13T14:32:14.231530199Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:32:14.232152 env[1304]: time="2024-12-13T14:32:14.232119035Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\""
Dec 13 14:32:14.233661 env[1304]: time="2024-12-13T14:32:14.233634186Z" level=info msg="CreateContainer within sandbox \"47638b8f7baf5b2763ec75d2aaa3d6d069c7baf1e7a2b4757a99d1b4f94cca35\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Dec 13 14:32:14.250573 env[1304]: time="2024-12-13T14:32:14.250535310Z" level=info msg="CreateContainer within sandbox \"47638b8f7baf5b2763ec75d2aaa3d6d069c7baf1e7a2b4757a99d1b4f94cca35\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"8b8c2b8b8bd6fd557b8aba8c23fc0ab43ed2c131e06c4acf0c5492eac34f0523\""
Dec 13 14:32:14.250958 env[1304]: time="2024-12-13T14:32:14.250932596Z" level=info msg="StartContainer for \"8b8c2b8b8bd6fd557b8aba8c23fc0ab43ed2c131e06c4acf0c5492eac34f0523\""
Dec 13 14:32:14.283412 env[1304]: time="2024-12-13T14:32:14.283374300Z" level=info msg="StartContainer for \"8b8c2b8b8bd6fd557b8aba8c23fc0ab43ed2c131e06c4acf0c5492eac34f0523\" returns successfully"
Dec 13 14:32:14.316468 systemd-networkd[1085]: lxc6c180192efd8: Gained IPv6LL
Dec 13 14:32:14.439971 kubelet[1550]: E1213 14:32:14.439938 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:32:14.746601 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1195666090.mount: Deactivated successfully.
Dec 13 14:32:15.440704 kubelet[1550]: E1213 14:32:15.440651 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:32:16.441195 kubelet[1550]: E1213 14:32:16.441140 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:32:17.403093 kubelet[1550]: E1213 14:32:17.403027 1550 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:32:17.441515 kubelet[1550]: E1213 14:32:17.441457 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:32:18.442048 kubelet[1550]: E1213 14:32:18.441979 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:32:19.357738 kubelet[1550]: I1213 14:32:19.357686 1550 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=22.004315125 podStartE2EDuration="22.357626658s" podCreationTimestamp="2024-12-13 14:31:57 +0000 UTC" firstStartedPulling="2024-12-13 14:32:13.879053706 +0000 UTC m=+56.812529654" lastFinishedPulling="2024-12-13 14:32:14.232365239 +0000 UTC m=+57.165841187" observedRunningTime="2024-12-13 14:32:15.253765035 +0000 UTC m=+58.187240983" watchObservedRunningTime="2024-12-13 14:32:19.357626658 +0000 UTC m=+62.291102626"
Dec 13 14:32:19.382251 env[1304]: time="2024-12-13T14:32:19.382098570Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config
load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 14:32:19.387200 env[1304]: time="2024-12-13T14:32:19.387164803Z" level=info msg="StopContainer for \"288069de0453f5821d2d6f6bd51b8804291c329e426cc2944017d2b070796fcf\" with timeout 2 (s)" Dec 13 14:32:19.387487 env[1304]: time="2024-12-13T14:32:19.387449049Z" level=info msg="Stop container \"288069de0453f5821d2d6f6bd51b8804291c329e426cc2944017d2b070796fcf\" with signal terminated" Dec 13 14:32:19.392762 systemd-networkd[1085]: lxc_health: Link DOWN Dec 13 14:32:19.392770 systemd-networkd[1085]: lxc_health: Lost carrier Dec 13 14:32:19.442348 kubelet[1550]: E1213 14:32:19.442303 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:32:19.446830 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-288069de0453f5821d2d6f6bd51b8804291c329e426cc2944017d2b070796fcf-rootfs.mount: Deactivated successfully. 
Dec 13 14:32:19.457914 env[1304]: time="2024-12-13T14:32:19.457862217Z" level=info msg="shim disconnected" id=288069de0453f5821d2d6f6bd51b8804291c329e426cc2944017d2b070796fcf Dec 13 14:32:19.458038 env[1304]: time="2024-12-13T14:32:19.457918315Z" level=warning msg="cleaning up after shim disconnected" id=288069de0453f5821d2d6f6bd51b8804291c329e426cc2944017d2b070796fcf namespace=k8s.io Dec 13 14:32:19.458038 env[1304]: time="2024-12-13T14:32:19.457927383Z" level=info msg="cleaning up dead shim" Dec 13 14:32:19.464172 env[1304]: time="2024-12-13T14:32:19.464132823Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:32:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3041 runtime=io.containerd.runc.v2\n" Dec 13 14:32:19.466876 env[1304]: time="2024-12-13T14:32:19.466841507Z" level=info msg="StopContainer for \"288069de0453f5821d2d6f6bd51b8804291c329e426cc2944017d2b070796fcf\" returns successfully" Dec 13 14:32:19.467603 env[1304]: time="2024-12-13T14:32:19.467564285Z" level=info msg="StopPodSandbox for \"a89693f4eb910abb11c7a264ad8185ef5cebcf9a0489d143fa772919c9f03120\"" Dec 13 14:32:19.467666 env[1304]: time="2024-12-13T14:32:19.467645442Z" level=info msg="Container to stop \"d04dab69a2053eb1afe6459dfde54f97ca2a0983bcea2ba73dcff48ddb988ec7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:32:19.467698 env[1304]: time="2024-12-13T14:32:19.467663937Z" level=info msg="Container to stop \"7dee5ce13d8e27031fbe4722bf96f4294ef4ffa1e3f6db6518b921bb69a6fc5f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:32:19.467698 env[1304]: time="2024-12-13T14:32:19.467676381Z" level=info msg="Container to stop \"434fb43126bcffb5a2b618e8e74ebffbcd1546fed9e7adfe17a2ebaf1decb979\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:32:19.467698 env[1304]: time="2024-12-13T14:32:19.467691330Z" level=info msg="Container to stop 
\"9d6a2589d42ed1c891e9ec141a8c7120989f5ca22674faebb7673fa06aa2d80d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:32:19.467773 env[1304]: time="2024-12-13T14:32:19.467705257Z" level=info msg="Container to stop \"288069de0453f5821d2d6f6bd51b8804291c329e426cc2944017d2b070796fcf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:32:19.470035 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a89693f4eb910abb11c7a264ad8185ef5cebcf9a0489d143fa772919c9f03120-shm.mount: Deactivated successfully. Dec 13 14:32:19.484830 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a89693f4eb910abb11c7a264ad8185ef5cebcf9a0489d143fa772919c9f03120-rootfs.mount: Deactivated successfully. Dec 13 14:32:19.488557 env[1304]: time="2024-12-13T14:32:19.488516837Z" level=info msg="shim disconnected" id=a89693f4eb910abb11c7a264ad8185ef5cebcf9a0489d143fa772919c9f03120 Dec 13 14:32:19.488651 env[1304]: time="2024-12-13T14:32:19.488559459Z" level=warning msg="cleaning up after shim disconnected" id=a89693f4eb910abb11c7a264ad8185ef5cebcf9a0489d143fa772919c9f03120 namespace=k8s.io Dec 13 14:32:19.488651 env[1304]: time="2024-12-13T14:32:19.488572304Z" level=info msg="cleaning up dead shim" Dec 13 14:32:19.494811 env[1304]: time="2024-12-13T14:32:19.494781442Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:32:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3074 runtime=io.containerd.runc.v2\n" Dec 13 14:32:19.495085 env[1304]: time="2024-12-13T14:32:19.495053224Z" level=info msg="TearDown network for sandbox \"a89693f4eb910abb11c7a264ad8185ef5cebcf9a0489d143fa772919c9f03120\" successfully" Dec 13 14:32:19.495085 env[1304]: time="2024-12-13T14:32:19.495078914Z" level=info msg="StopPodSandbox for \"a89693f4eb910abb11c7a264ad8185ef5cebcf9a0489d143fa772919c9f03120\" returns successfully" Dec 13 14:32:19.675833 kubelet[1550]: I1213 14:32:19.675727 1550 reconciler_common.go:172] 
"operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/04273870-b99d-4e76-8c46-82ae0dfdfa26-host-proc-sys-net\") pod \"04273870-b99d-4e76-8c46-82ae0dfdfa26\" (UID: \"04273870-b99d-4e76-8c46-82ae0dfdfa26\") " Dec 13 14:32:19.675833 kubelet[1550]: I1213 14:32:19.675769 1550 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/04273870-b99d-4e76-8c46-82ae0dfdfa26-cilium-config-path\") pod \"04273870-b99d-4e76-8c46-82ae0dfdfa26\" (UID: \"04273870-b99d-4e76-8c46-82ae0dfdfa26\") " Dec 13 14:32:19.675833 kubelet[1550]: I1213 14:32:19.675789 1550 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/04273870-b99d-4e76-8c46-82ae0dfdfa26-hubble-tls\") pod \"04273870-b99d-4e76-8c46-82ae0dfdfa26\" (UID: \"04273870-b99d-4e76-8c46-82ae0dfdfa26\") " Dec 13 14:32:19.675833 kubelet[1550]: I1213 14:32:19.675806 1550 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/04273870-b99d-4e76-8c46-82ae0dfdfa26-etc-cni-netd\") pod \"04273870-b99d-4e76-8c46-82ae0dfdfa26\" (UID: \"04273870-b99d-4e76-8c46-82ae0dfdfa26\") " Dec 13 14:32:19.675833 kubelet[1550]: I1213 14:32:19.675823 1550 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/04273870-b99d-4e76-8c46-82ae0dfdfa26-cilium-cgroup\") pod \"04273870-b99d-4e76-8c46-82ae0dfdfa26\" (UID: \"04273870-b99d-4e76-8c46-82ae0dfdfa26\") " Dec 13 14:32:19.675833 kubelet[1550]: I1213 14:32:19.675840 1550 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/04273870-b99d-4e76-8c46-82ae0dfdfa26-bpf-maps\") pod \"04273870-b99d-4e76-8c46-82ae0dfdfa26\" (UID: 
\"04273870-b99d-4e76-8c46-82ae0dfdfa26\") " Dec 13 14:32:19.676104 kubelet[1550]: I1213 14:32:19.675854 1550 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/04273870-b99d-4e76-8c46-82ae0dfdfa26-lib-modules\") pod \"04273870-b99d-4e76-8c46-82ae0dfdfa26\" (UID: \"04273870-b99d-4e76-8c46-82ae0dfdfa26\") " Dec 13 14:32:19.676104 kubelet[1550]: I1213 14:32:19.675871 1550 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/04273870-b99d-4e76-8c46-82ae0dfdfa26-hostproc\") pod \"04273870-b99d-4e76-8c46-82ae0dfdfa26\" (UID: \"04273870-b99d-4e76-8c46-82ae0dfdfa26\") " Dec 13 14:32:19.676104 kubelet[1550]: I1213 14:32:19.675891 1550 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/04273870-b99d-4e76-8c46-82ae0dfdfa26-clustermesh-secrets\") pod \"04273870-b99d-4e76-8c46-82ae0dfdfa26\" (UID: \"04273870-b99d-4e76-8c46-82ae0dfdfa26\") " Dec 13 14:32:19.676104 kubelet[1550]: I1213 14:32:19.675906 1550 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/04273870-b99d-4e76-8c46-82ae0dfdfa26-cilium-run\") pod \"04273870-b99d-4e76-8c46-82ae0dfdfa26\" (UID: \"04273870-b99d-4e76-8c46-82ae0dfdfa26\") " Dec 13 14:32:19.676104 kubelet[1550]: I1213 14:32:19.675923 1550 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zlw7n\" (UniqueName: \"kubernetes.io/projected/04273870-b99d-4e76-8c46-82ae0dfdfa26-kube-api-access-zlw7n\") pod \"04273870-b99d-4e76-8c46-82ae0dfdfa26\" (UID: \"04273870-b99d-4e76-8c46-82ae0dfdfa26\") " Dec 13 14:32:19.676104 kubelet[1550]: I1213 14:32:19.675937 1550 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/04273870-b99d-4e76-8c46-82ae0dfdfa26-host-proc-sys-kernel\") pod \"04273870-b99d-4e76-8c46-82ae0dfdfa26\" (UID: \"04273870-b99d-4e76-8c46-82ae0dfdfa26\") " Dec 13 14:32:19.676247 kubelet[1550]: I1213 14:32:19.675953 1550 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/04273870-b99d-4e76-8c46-82ae0dfdfa26-cni-path\") pod \"04273870-b99d-4e76-8c46-82ae0dfdfa26\" (UID: \"04273870-b99d-4e76-8c46-82ae0dfdfa26\") " Dec 13 14:32:19.676247 kubelet[1550]: I1213 14:32:19.675966 1550 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/04273870-b99d-4e76-8c46-82ae0dfdfa26-xtables-lock\") pod \"04273870-b99d-4e76-8c46-82ae0dfdfa26\" (UID: \"04273870-b99d-4e76-8c46-82ae0dfdfa26\") " Dec 13 14:32:19.676247 kubelet[1550]: I1213 14:32:19.676006 1550 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04273870-b99d-4e76-8c46-82ae0dfdfa26-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "04273870-b99d-4e76-8c46-82ae0dfdfa26" (UID: "04273870-b99d-4e76-8c46-82ae0dfdfa26"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:32:19.676347 kubelet[1550]: I1213 14:32:19.676314 1550 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04273870-b99d-4e76-8c46-82ae0dfdfa26-hostproc" (OuterVolumeSpecName: "hostproc") pod "04273870-b99d-4e76-8c46-82ae0dfdfa26" (UID: "04273870-b99d-4e76-8c46-82ae0dfdfa26"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:32:19.676347 kubelet[1550]: I1213 14:32:19.676334 1550 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04273870-b99d-4e76-8c46-82ae0dfdfa26-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "04273870-b99d-4e76-8c46-82ae0dfdfa26" (UID: "04273870-b99d-4e76-8c46-82ae0dfdfa26"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:32:19.676402 kubelet[1550]: I1213 14:32:19.676348 1550 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04273870-b99d-4e76-8c46-82ae0dfdfa26-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "04273870-b99d-4e76-8c46-82ae0dfdfa26" (UID: "04273870-b99d-4e76-8c46-82ae0dfdfa26"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:32:19.676402 kubelet[1550]: I1213 14:32:19.676360 1550 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04273870-b99d-4e76-8c46-82ae0dfdfa26-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "04273870-b99d-4e76-8c46-82ae0dfdfa26" (UID: "04273870-b99d-4e76-8c46-82ae0dfdfa26"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:32:19.676402 kubelet[1550]: I1213 14:32:19.676372 1550 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04273870-b99d-4e76-8c46-82ae0dfdfa26-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "04273870-b99d-4e76-8c46-82ae0dfdfa26" (UID: "04273870-b99d-4e76-8c46-82ae0dfdfa26"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:32:19.676402 kubelet[1550]: I1213 14:32:19.676384 1550 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04273870-b99d-4e76-8c46-82ae0dfdfa26-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "04273870-b99d-4e76-8c46-82ae0dfdfa26" (UID: "04273870-b99d-4e76-8c46-82ae0dfdfa26"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:32:19.677041 kubelet[1550]: I1213 14:32:19.676903 1550 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04273870-b99d-4e76-8c46-82ae0dfdfa26-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "04273870-b99d-4e76-8c46-82ae0dfdfa26" (UID: "04273870-b99d-4e76-8c46-82ae0dfdfa26"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:32:19.677041 kubelet[1550]: I1213 14:32:19.676964 1550 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04273870-b99d-4e76-8c46-82ae0dfdfa26-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "04273870-b99d-4e76-8c46-82ae0dfdfa26" (UID: "04273870-b99d-4e76-8c46-82ae0dfdfa26"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:32:19.677041 kubelet[1550]: I1213 14:32:19.676987 1550 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04273870-b99d-4e76-8c46-82ae0dfdfa26-cni-path" (OuterVolumeSpecName: "cni-path") pod "04273870-b99d-4e76-8c46-82ae0dfdfa26" (UID: "04273870-b99d-4e76-8c46-82ae0dfdfa26"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:32:19.678206 kubelet[1550]: I1213 14:32:19.678161 1550 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04273870-b99d-4e76-8c46-82ae0dfdfa26-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "04273870-b99d-4e76-8c46-82ae0dfdfa26" (UID: "04273870-b99d-4e76-8c46-82ae0dfdfa26"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:32:19.678928 kubelet[1550]: I1213 14:32:19.678888 1550 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04273870-b99d-4e76-8c46-82ae0dfdfa26-kube-api-access-zlw7n" (OuterVolumeSpecName: "kube-api-access-zlw7n") pod "04273870-b99d-4e76-8c46-82ae0dfdfa26" (UID: "04273870-b99d-4e76-8c46-82ae0dfdfa26"). InnerVolumeSpecName "kube-api-access-zlw7n". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:32:19.680162 systemd[1]: var-lib-kubelet-pods-04273870\x2db99d\x2d4e76\x2d8c46\x2d82ae0dfdfa26-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzlw7n.mount: Deactivated successfully. Dec 13 14:32:19.680410 kubelet[1550]: I1213 14:32:19.680209 1550 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04273870-b99d-4e76-8c46-82ae0dfdfa26-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "04273870-b99d-4e76-8c46-82ae0dfdfa26" (UID: "04273870-b99d-4e76-8c46-82ae0dfdfa26"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:32:19.680319 systemd[1]: var-lib-kubelet-pods-04273870\x2db99d\x2d4e76\x2d8c46\x2d82ae0dfdfa26-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Dec 13 14:32:19.680587 kubelet[1550]: I1213 14:32:19.680565 1550 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04273870-b99d-4e76-8c46-82ae0dfdfa26-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "04273870-b99d-4e76-8c46-82ae0dfdfa26" (UID: "04273870-b99d-4e76-8c46-82ae0dfdfa26"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:32:19.776723 kubelet[1550]: I1213 14:32:19.776675 1550 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/04273870-b99d-4e76-8c46-82ae0dfdfa26-lib-modules\") on node \"10.0.0.142\" DevicePath \"\"" Dec 13 14:32:19.776723 kubelet[1550]: I1213 14:32:19.776700 1550 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/04273870-b99d-4e76-8c46-82ae0dfdfa26-host-proc-sys-kernel\") on node \"10.0.0.142\" DevicePath \"\"" Dec 13 14:32:19.776723 kubelet[1550]: I1213 14:32:19.776711 1550 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/04273870-b99d-4e76-8c46-82ae0dfdfa26-cni-path\") on node \"10.0.0.142\" DevicePath \"\"" Dec 13 14:32:19.776723 kubelet[1550]: I1213 14:32:19.776719 1550 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/04273870-b99d-4e76-8c46-82ae0dfdfa26-xtables-lock\") on node \"10.0.0.142\" DevicePath \"\"" Dec 13 14:32:19.776723 kubelet[1550]: I1213 14:32:19.776727 1550 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/04273870-b99d-4e76-8c46-82ae0dfdfa26-hostproc\") on node \"10.0.0.142\" DevicePath \"\"" Dec 13 14:32:19.776723 kubelet[1550]: I1213 14:32:19.776736 1550 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/04273870-b99d-4e76-8c46-82ae0dfdfa26-clustermesh-secrets\") on node \"10.0.0.142\" DevicePath \"\"" Dec 13 14:32:19.776723 kubelet[1550]: I1213 14:32:19.776744 1550 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/04273870-b99d-4e76-8c46-82ae0dfdfa26-cilium-run\") on node \"10.0.0.142\" DevicePath \"\"" Dec 13 14:32:19.777025 kubelet[1550]: I1213 14:32:19.776755 1550 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-zlw7n\" (UniqueName: \"kubernetes.io/projected/04273870-b99d-4e76-8c46-82ae0dfdfa26-kube-api-access-zlw7n\") on node \"10.0.0.142\" DevicePath \"\"" Dec 13 14:32:19.777025 kubelet[1550]: I1213 14:32:19.776767 1550 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/04273870-b99d-4e76-8c46-82ae0dfdfa26-etc-cni-netd\") on node \"10.0.0.142\" DevicePath \"\"" Dec 13 14:32:19.777025 kubelet[1550]: I1213 14:32:19.776775 1550 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/04273870-b99d-4e76-8c46-82ae0dfdfa26-host-proc-sys-net\") on node \"10.0.0.142\" DevicePath \"\"" Dec 13 14:32:19.777025 kubelet[1550]: I1213 14:32:19.776785 1550 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/04273870-b99d-4e76-8c46-82ae0dfdfa26-cilium-config-path\") on node \"10.0.0.142\" DevicePath \"\"" Dec 13 14:32:19.777025 kubelet[1550]: I1213 14:32:19.776796 1550 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/04273870-b99d-4e76-8c46-82ae0dfdfa26-hubble-tls\") on node \"10.0.0.142\" DevicePath \"\"" Dec 13 14:32:19.777025 kubelet[1550]: I1213 14:32:19.776805 1550 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/04273870-b99d-4e76-8c46-82ae0dfdfa26-cilium-cgroup\") on 
node \"10.0.0.142\" DevicePath \"\"" Dec 13 14:32:19.777025 kubelet[1550]: I1213 14:32:19.776841 1550 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/04273870-b99d-4e76-8c46-82ae0dfdfa26-bpf-maps\") on node \"10.0.0.142\" DevicePath \"\"" Dec 13 14:32:20.256391 kubelet[1550]: I1213 14:32:20.256345 1550 scope.go:117] "RemoveContainer" containerID="288069de0453f5821d2d6f6bd51b8804291c329e426cc2944017d2b070796fcf" Dec 13 14:32:20.257910 env[1304]: time="2024-12-13T14:32:20.257864255Z" level=info msg="RemoveContainer for \"288069de0453f5821d2d6f6bd51b8804291c329e426cc2944017d2b070796fcf\"" Dec 13 14:32:20.263239 env[1304]: time="2024-12-13T14:32:20.263198436Z" level=info msg="RemoveContainer for \"288069de0453f5821d2d6f6bd51b8804291c329e426cc2944017d2b070796fcf\" returns successfully" Dec 13 14:32:20.263441 kubelet[1550]: I1213 14:32:20.263411 1550 scope.go:117] "RemoveContainer" containerID="434fb43126bcffb5a2b618e8e74ebffbcd1546fed9e7adfe17a2ebaf1decb979" Dec 13 14:32:20.264429 env[1304]: time="2024-12-13T14:32:20.264406555Z" level=info msg="RemoveContainer for \"434fb43126bcffb5a2b618e8e74ebffbcd1546fed9e7adfe17a2ebaf1decb979\"" Dec 13 14:32:20.267442 env[1304]: time="2024-12-13T14:32:20.267419450Z" level=info msg="RemoveContainer for \"434fb43126bcffb5a2b618e8e74ebffbcd1546fed9e7adfe17a2ebaf1decb979\" returns successfully" Dec 13 14:32:20.267561 kubelet[1550]: I1213 14:32:20.267544 1550 scope.go:117] "RemoveContainer" containerID="7dee5ce13d8e27031fbe4722bf96f4294ef4ffa1e3f6db6518b921bb69a6fc5f" Dec 13 14:32:20.268368 env[1304]: time="2024-12-13T14:32:20.268337803Z" level=info msg="RemoveContainer for \"7dee5ce13d8e27031fbe4722bf96f4294ef4ffa1e3f6db6518b921bb69a6fc5f\"" Dec 13 14:32:20.271055 env[1304]: time="2024-12-13T14:32:20.271012298Z" level=info msg="RemoveContainer for \"7dee5ce13d8e27031fbe4722bf96f4294ef4ffa1e3f6db6518b921bb69a6fc5f\" returns successfully" Dec 13 14:32:20.271161 kubelet[1550]: I1213 
14:32:20.271140 1550 scope.go:117] "RemoveContainer" containerID="d04dab69a2053eb1afe6459dfde54f97ca2a0983bcea2ba73dcff48ddb988ec7" Dec 13 14:32:20.272005 env[1304]: time="2024-12-13T14:32:20.271960910Z" level=info msg="RemoveContainer for \"d04dab69a2053eb1afe6459dfde54f97ca2a0983bcea2ba73dcff48ddb988ec7\"" Dec 13 14:32:20.274815 env[1304]: time="2024-12-13T14:32:20.274791995Z" level=info msg="RemoveContainer for \"d04dab69a2053eb1afe6459dfde54f97ca2a0983bcea2ba73dcff48ddb988ec7\" returns successfully" Dec 13 14:32:20.274939 kubelet[1550]: I1213 14:32:20.274925 1550 scope.go:117] "RemoveContainer" containerID="9d6a2589d42ed1c891e9ec141a8c7120989f5ca22674faebb7673fa06aa2d80d" Dec 13 14:32:20.275729 env[1304]: time="2024-12-13T14:32:20.275704687Z" level=info msg="RemoveContainer for \"9d6a2589d42ed1c891e9ec141a8c7120989f5ca22674faebb7673fa06aa2d80d\"" Dec 13 14:32:20.278118 env[1304]: time="2024-12-13T14:32:20.278067444Z" level=info msg="RemoveContainer for \"9d6a2589d42ed1c891e9ec141a8c7120989f5ca22674faebb7673fa06aa2d80d\" returns successfully" Dec 13 14:32:20.278286 kubelet[1550]: I1213 14:32:20.278238 1550 scope.go:117] "RemoveContainer" containerID="288069de0453f5821d2d6f6bd51b8804291c329e426cc2944017d2b070796fcf" Dec 13 14:32:20.278499 env[1304]: time="2024-12-13T14:32:20.278435170Z" level=error msg="ContainerStatus for \"288069de0453f5821d2d6f6bd51b8804291c329e426cc2944017d2b070796fcf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"288069de0453f5821d2d6f6bd51b8804291c329e426cc2944017d2b070796fcf\": not found" Dec 13 14:32:20.278618 kubelet[1550]: E1213 14:32:20.278600 1550 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"288069de0453f5821d2d6f6bd51b8804291c329e426cc2944017d2b070796fcf\": not found" containerID="288069de0453f5821d2d6f6bd51b8804291c329e426cc2944017d2b070796fcf" Dec 13 14:32:20.278692 
kubelet[1550]: I1213 14:32:20.278675 1550 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"288069de0453f5821d2d6f6bd51b8804291c329e426cc2944017d2b070796fcf"} err="failed to get container status \"288069de0453f5821d2d6f6bd51b8804291c329e426cc2944017d2b070796fcf\": rpc error: code = NotFound desc = an error occurred when try to find container \"288069de0453f5821d2d6f6bd51b8804291c329e426cc2944017d2b070796fcf\": not found" Dec 13 14:32:20.278692 kubelet[1550]: I1213 14:32:20.278690 1550 scope.go:117] "RemoveContainer" containerID="434fb43126bcffb5a2b618e8e74ebffbcd1546fed9e7adfe17a2ebaf1decb979" Dec 13 14:32:20.278893 env[1304]: time="2024-12-13T14:32:20.278837853Z" level=error msg="ContainerStatus for \"434fb43126bcffb5a2b618e8e74ebffbcd1546fed9e7adfe17a2ebaf1decb979\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"434fb43126bcffb5a2b618e8e74ebffbcd1546fed9e7adfe17a2ebaf1decb979\": not found" Dec 13 14:32:20.279065 kubelet[1550]: E1213 14:32:20.279023 1550 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"434fb43126bcffb5a2b618e8e74ebffbcd1546fed9e7adfe17a2ebaf1decb979\": not found" containerID="434fb43126bcffb5a2b618e8e74ebffbcd1546fed9e7adfe17a2ebaf1decb979" Dec 13 14:32:20.279117 kubelet[1550]: I1213 14:32:20.279073 1550 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"434fb43126bcffb5a2b618e8e74ebffbcd1546fed9e7adfe17a2ebaf1decb979"} err="failed to get container status \"434fb43126bcffb5a2b618e8e74ebffbcd1546fed9e7adfe17a2ebaf1decb979\": rpc error: code = NotFound desc = an error occurred when try to find container \"434fb43126bcffb5a2b618e8e74ebffbcd1546fed9e7adfe17a2ebaf1decb979\": not found" Dec 13 14:32:20.279117 kubelet[1550]: I1213 14:32:20.279084 1550 scope.go:117] "RemoveContainer" 
containerID="7dee5ce13d8e27031fbe4722bf96f4294ef4ffa1e3f6db6518b921bb69a6fc5f" Dec 13 14:32:20.279344 env[1304]: time="2024-12-13T14:32:20.279286334Z" level=error msg="ContainerStatus for \"7dee5ce13d8e27031fbe4722bf96f4294ef4ffa1e3f6db6518b921bb69a6fc5f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7dee5ce13d8e27031fbe4722bf96f4294ef4ffa1e3f6db6518b921bb69a6fc5f\": not found" Dec 13 14:32:20.279486 kubelet[1550]: E1213 14:32:20.279459 1550 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7dee5ce13d8e27031fbe4722bf96f4294ef4ffa1e3f6db6518b921bb69a6fc5f\": not found" containerID="7dee5ce13d8e27031fbe4722bf96f4294ef4ffa1e3f6db6518b921bb69a6fc5f" Dec 13 14:32:20.279547 kubelet[1550]: I1213 14:32:20.279502 1550 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7dee5ce13d8e27031fbe4722bf96f4294ef4ffa1e3f6db6518b921bb69a6fc5f"} err="failed to get container status \"7dee5ce13d8e27031fbe4722bf96f4294ef4ffa1e3f6db6518b921bb69a6fc5f\": rpc error: code = NotFound desc = an error occurred when try to find container \"7dee5ce13d8e27031fbe4722bf96f4294ef4ffa1e3f6db6518b921bb69a6fc5f\": not found" Dec 13 14:32:20.279547 kubelet[1550]: I1213 14:32:20.279517 1550 scope.go:117] "RemoveContainer" containerID="d04dab69a2053eb1afe6459dfde54f97ca2a0983bcea2ba73dcff48ddb988ec7" Dec 13 14:32:20.279733 env[1304]: time="2024-12-13T14:32:20.279689408Z" level=error msg="ContainerStatus for \"d04dab69a2053eb1afe6459dfde54f97ca2a0983bcea2ba73dcff48ddb988ec7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d04dab69a2053eb1afe6459dfde54f97ca2a0983bcea2ba73dcff48ddb988ec7\": not found" Dec 13 14:32:20.279830 kubelet[1550]: E1213 14:32:20.279803 1550 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = an error occurred when try to find container \"d04dab69a2053eb1afe6459dfde54f97ca2a0983bcea2ba73dcff48ddb988ec7\": not found" containerID="d04dab69a2053eb1afe6459dfde54f97ca2a0983bcea2ba73dcff48ddb988ec7" Dec 13 14:32:20.279830 kubelet[1550]: I1213 14:32:20.279827 1550 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d04dab69a2053eb1afe6459dfde54f97ca2a0983bcea2ba73dcff48ddb988ec7"} err="failed to get container status \"d04dab69a2053eb1afe6459dfde54f97ca2a0983bcea2ba73dcff48ddb988ec7\": rpc error: code = NotFound desc = an error occurred when try to find container \"d04dab69a2053eb1afe6459dfde54f97ca2a0983bcea2ba73dcff48ddb988ec7\": not found" Dec 13 14:32:20.279900 kubelet[1550]: I1213 14:32:20.279835 1550 scope.go:117] "RemoveContainer" containerID="9d6a2589d42ed1c891e9ec141a8c7120989f5ca22674faebb7673fa06aa2d80d" Dec 13 14:32:20.280113 env[1304]: time="2024-12-13T14:32:20.280032036Z" level=error msg="ContainerStatus for \"9d6a2589d42ed1c891e9ec141a8c7120989f5ca22674faebb7673fa06aa2d80d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9d6a2589d42ed1c891e9ec141a8c7120989f5ca22674faebb7673fa06aa2d80d\": not found" Dec 13 14:32:20.280236 kubelet[1550]: E1213 14:32:20.280220 1550 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9d6a2589d42ed1c891e9ec141a8c7120989f5ca22674faebb7673fa06aa2d80d\": not found" containerID="9d6a2589d42ed1c891e9ec141a8c7120989f5ca22674faebb7673fa06aa2d80d" Dec 13 14:32:20.280304 kubelet[1550]: I1213 14:32:20.280245 1550 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9d6a2589d42ed1c891e9ec141a8c7120989f5ca22674faebb7673fa06aa2d80d"} err="failed to get container status \"9d6a2589d42ed1c891e9ec141a8c7120989f5ca22674faebb7673fa06aa2d80d\": rpc error: code = NotFound desc = 
an error occurred when try to find container \"9d6a2589d42ed1c891e9ec141a8c7120989f5ca22674faebb7673fa06aa2d80d\": not found" Dec 13 14:32:20.369389 systemd[1]: var-lib-kubelet-pods-04273870\x2db99d\x2d4e76\x2d8c46\x2d82ae0dfdfa26-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 14:32:20.443528 kubelet[1550]: E1213 14:32:20.443477 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:32:21.444437 kubelet[1550]: E1213 14:32:21.444367 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:32:21.946699 kubelet[1550]: I1213 14:32:21.946652 1550 topology_manager.go:215] "Topology Admit Handler" podUID="dd1615f3-e9cc-43d5-b5a8-9edb754ac0ac" podNamespace="kube-system" podName="cilium-operator-5cc964979-stdln" Dec 13 14:32:21.946699 kubelet[1550]: E1213 14:32:21.946713 1550 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="04273870-b99d-4e76-8c46-82ae0dfdfa26" containerName="mount-cgroup" Dec 13 14:32:21.946978 kubelet[1550]: E1213 14:32:21.946722 1550 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="04273870-b99d-4e76-8c46-82ae0dfdfa26" containerName="mount-bpf-fs" Dec 13 14:32:21.946978 kubelet[1550]: E1213 14:32:21.946729 1550 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="04273870-b99d-4e76-8c46-82ae0dfdfa26" containerName="apply-sysctl-overwrites" Dec 13 14:32:21.946978 kubelet[1550]: E1213 14:32:21.946735 1550 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="04273870-b99d-4e76-8c46-82ae0dfdfa26" containerName="clean-cilium-state" Dec 13 14:32:21.946978 kubelet[1550]: E1213 14:32:21.946741 1550 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="04273870-b99d-4e76-8c46-82ae0dfdfa26" containerName="cilium-agent" Dec 13 14:32:21.946978 kubelet[1550]: I1213 14:32:21.946755 1550 
memory_manager.go:354] "RemoveStaleState removing state" podUID="04273870-b99d-4e76-8c46-82ae0dfdfa26" containerName="cilium-agent" Dec 13 14:32:21.949683 kubelet[1550]: I1213 14:32:21.949666 1550 topology_manager.go:215] "Topology Admit Handler" podUID="6d35c297-b994-4717-9bf7-2ff0120dfc34" podNamespace="kube-system" podName="cilium-zp9kg" Dec 13 14:32:22.089483 kubelet[1550]: I1213 14:32:22.089419 1550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dd1615f3-e9cc-43d5-b5a8-9edb754ac0ac-cilium-config-path\") pod \"cilium-operator-5cc964979-stdln\" (UID: \"dd1615f3-e9cc-43d5-b5a8-9edb754ac0ac\") " pod="kube-system/cilium-operator-5cc964979-stdln" Dec 13 14:32:22.089483 kubelet[1550]: I1213 14:32:22.089475 1550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6d35c297-b994-4717-9bf7-2ff0120dfc34-hostproc\") pod \"cilium-zp9kg\" (UID: \"6d35c297-b994-4717-9bf7-2ff0120dfc34\") " pod="kube-system/cilium-zp9kg" Dec 13 14:32:22.089697 kubelet[1550]: I1213 14:32:22.089537 1550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6d35c297-b994-4717-9bf7-2ff0120dfc34-cilium-ipsec-secrets\") pod \"cilium-zp9kg\" (UID: \"6d35c297-b994-4717-9bf7-2ff0120dfc34\") " pod="kube-system/cilium-zp9kg" Dec 13 14:32:22.089697 kubelet[1550]: I1213 14:32:22.089589 1550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25t9k\" (UniqueName: \"kubernetes.io/projected/6d35c297-b994-4717-9bf7-2ff0120dfc34-kube-api-access-25t9k\") pod \"cilium-zp9kg\" (UID: \"6d35c297-b994-4717-9bf7-2ff0120dfc34\") " pod="kube-system/cilium-zp9kg" Dec 13 14:32:22.089765 kubelet[1550]: I1213 14:32:22.089742 1550 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6d35c297-b994-4717-9bf7-2ff0120dfc34-cni-path\") pod \"cilium-zp9kg\" (UID: \"6d35c297-b994-4717-9bf7-2ff0120dfc34\") " pod="kube-system/cilium-zp9kg" Dec 13 14:32:22.089810 kubelet[1550]: I1213 14:32:22.089775 1550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6d35c297-b994-4717-9bf7-2ff0120dfc34-cilium-config-path\") pod \"cilium-zp9kg\" (UID: \"6d35c297-b994-4717-9bf7-2ff0120dfc34\") " pod="kube-system/cilium-zp9kg" Dec 13 14:32:22.089810 kubelet[1550]: I1213 14:32:22.089793 1550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6d35c297-b994-4717-9bf7-2ff0120dfc34-cilium-run\") pod \"cilium-zp9kg\" (UID: \"6d35c297-b994-4717-9bf7-2ff0120dfc34\") " pod="kube-system/cilium-zp9kg" Dec 13 14:32:22.089862 kubelet[1550]: I1213 14:32:22.089812 1550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6d35c297-b994-4717-9bf7-2ff0120dfc34-etc-cni-netd\") pod \"cilium-zp9kg\" (UID: \"6d35c297-b994-4717-9bf7-2ff0120dfc34\") " pod="kube-system/cilium-zp9kg" Dec 13 14:32:22.089862 kubelet[1550]: I1213 14:32:22.089829 1550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6d35c297-b994-4717-9bf7-2ff0120dfc34-lib-modules\") pod \"cilium-zp9kg\" (UID: \"6d35c297-b994-4717-9bf7-2ff0120dfc34\") " pod="kube-system/cilium-zp9kg" Dec 13 14:32:22.089862 kubelet[1550]: I1213 14:32:22.089848 1550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/6d35c297-b994-4717-9bf7-2ff0120dfc34-bpf-maps\") pod \"cilium-zp9kg\" (UID: \"6d35c297-b994-4717-9bf7-2ff0120dfc34\") " pod="kube-system/cilium-zp9kg" Dec 13 14:32:22.089932 kubelet[1550]: I1213 14:32:22.089870 1550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6d35c297-b994-4717-9bf7-2ff0120dfc34-cilium-cgroup\") pod \"cilium-zp9kg\" (UID: \"6d35c297-b994-4717-9bf7-2ff0120dfc34\") " pod="kube-system/cilium-zp9kg" Dec 13 14:32:22.089932 kubelet[1550]: I1213 14:32:22.089888 1550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6d35c297-b994-4717-9bf7-2ff0120dfc34-host-proc-sys-net\") pod \"cilium-zp9kg\" (UID: \"6d35c297-b994-4717-9bf7-2ff0120dfc34\") " pod="kube-system/cilium-zp9kg" Dec 13 14:32:22.089932 kubelet[1550]: I1213 14:32:22.089907 1550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxmf4\" (UniqueName: \"kubernetes.io/projected/dd1615f3-e9cc-43d5-b5a8-9edb754ac0ac-kube-api-access-bxmf4\") pod \"cilium-operator-5cc964979-stdln\" (UID: \"dd1615f3-e9cc-43d5-b5a8-9edb754ac0ac\") " pod="kube-system/cilium-operator-5cc964979-stdln" Dec 13 14:32:22.089932 kubelet[1550]: I1213 14:32:22.089925 1550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6d35c297-b994-4717-9bf7-2ff0120dfc34-clustermesh-secrets\") pod \"cilium-zp9kg\" (UID: \"6d35c297-b994-4717-9bf7-2ff0120dfc34\") " pod="kube-system/cilium-zp9kg" Dec 13 14:32:22.090021 kubelet[1550]: I1213 14:32:22.089943 1550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/6d35c297-b994-4717-9bf7-2ff0120dfc34-hubble-tls\") pod \"cilium-zp9kg\" (UID: \"6d35c297-b994-4717-9bf7-2ff0120dfc34\") " pod="kube-system/cilium-zp9kg" Dec 13 14:32:22.090021 kubelet[1550]: I1213 14:32:22.089962 1550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6d35c297-b994-4717-9bf7-2ff0120dfc34-host-proc-sys-kernel\") pod \"cilium-zp9kg\" (UID: \"6d35c297-b994-4717-9bf7-2ff0120dfc34\") " pod="kube-system/cilium-zp9kg" Dec 13 14:32:22.090021 kubelet[1550]: I1213 14:32:22.089981 1550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6d35c297-b994-4717-9bf7-2ff0120dfc34-xtables-lock\") pod \"cilium-zp9kg\" (UID: \"6d35c297-b994-4717-9bf7-2ff0120dfc34\") " pod="kube-system/cilium-zp9kg" Dec 13 14:32:22.133279 kubelet[1550]: I1213 14:32:22.133228 1550 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="04273870-b99d-4e76-8c46-82ae0dfdfa26" path="/var/lib/kubelet/pods/04273870-b99d-4e76-8c46-82ae0dfdfa26/volumes" Dec 13 14:32:22.283375 kubelet[1550]: E1213 14:32:22.283275 1550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:32:22.283888 env[1304]: time="2024-12-13T14:32:22.283846081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zp9kg,Uid:6d35c297-b994-4717-9bf7-2ff0120dfc34,Namespace:kube-system,Attempt:0,}" Dec 13 14:32:22.301871 env[1304]: time="2024-12-13T14:32:22.301803641Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:32:22.301871 env[1304]: time="2024-12-13T14:32:22.301841464Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:32:22.301871 env[1304]: time="2024-12-13T14:32:22.301852946Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:32:22.302046 env[1304]: time="2024-12-13T14:32:22.301990820Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f60a68f4acf03adb0dd86c614d834375614c442d6d7ddcc706bd22d81fc3178a pid=3104 runtime=io.containerd.runc.v2 Dec 13 14:32:22.329254 env[1304]: time="2024-12-13T14:32:22.329199023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zp9kg,Uid:6d35c297-b994-4717-9bf7-2ff0120dfc34,Namespace:kube-system,Attempt:0,} returns sandbox id \"f60a68f4acf03adb0dd86c614d834375614c442d6d7ddcc706bd22d81fc3178a\"" Dec 13 14:32:22.330353 kubelet[1550]: E1213 14:32:22.330328 1550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:32:22.332365 env[1304]: time="2024-12-13T14:32:22.332296675Z" level=info msg="CreateContainer within sandbox \"f60a68f4acf03adb0dd86c614d834375614c442d6d7ddcc706bd22d81fc3178a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:32:22.349623 env[1304]: time="2024-12-13T14:32:22.349547520Z" level=info msg="CreateContainer within sandbox \"f60a68f4acf03adb0dd86c614d834375614c442d6d7ddcc706bd22d81fc3178a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6393a8bef034c0e5466cc225fdb0a844c3778fe68e85d36ce987829a643ff347\"" Dec 13 14:32:22.350162 env[1304]: time="2024-12-13T14:32:22.350127663Z" level=info msg="StartContainer for \"6393a8bef034c0e5466cc225fdb0a844c3778fe68e85d36ce987829a643ff347\"" Dec 13 14:32:22.385649 env[1304]: time="2024-12-13T14:32:22.385591690Z" level=info msg="StartContainer for 
\"6393a8bef034c0e5466cc225fdb0a844c3778fe68e85d36ce987829a643ff347\" returns successfully" Dec 13 14:32:22.417284 env[1304]: time="2024-12-13T14:32:22.417194723Z" level=info msg="shim disconnected" id=6393a8bef034c0e5466cc225fdb0a844c3778fe68e85d36ce987829a643ff347 Dec 13 14:32:22.417284 env[1304]: time="2024-12-13T14:32:22.417252773Z" level=warning msg="cleaning up after shim disconnected" id=6393a8bef034c0e5466cc225fdb0a844c3778fe68e85d36ce987829a643ff347 namespace=k8s.io Dec 13 14:32:22.417284 env[1304]: time="2024-12-13T14:32:22.417274495Z" level=info msg="cleaning up dead shim" Dec 13 14:32:22.424350 env[1304]: time="2024-12-13T14:32:22.424298727Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:32:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3189 runtime=io.containerd.runc.v2\n" Dec 13 14:32:22.444999 kubelet[1550]: E1213 14:32:22.444963 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:32:22.549725 kubelet[1550]: E1213 14:32:22.549657 1550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:32:22.550114 env[1304]: time="2024-12-13T14:32:22.550054526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-stdln,Uid:dd1615f3-e9cc-43d5-b5a8-9edb754ac0ac,Namespace:kube-system,Attempt:0,}" Dec 13 14:32:22.782951 env[1304]: time="2024-12-13T14:32:22.782881469Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:32:22.782951 env[1304]: time="2024-12-13T14:32:22.782921426Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:32:22.782951 env[1304]: time="2024-12-13T14:32:22.782933990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:32:22.783183 env[1304]: time="2024-12-13T14:32:22.783112483Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/757e833e41015415d1c9ce00f51099be472a1658264ecf1fdec2e09da635cf8e pid=3209 runtime=io.containerd.runc.v2 Dec 13 14:32:22.826252 env[1304]: time="2024-12-13T14:32:22.826206092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-stdln,Uid:dd1615f3-e9cc-43d5-b5a8-9edb754ac0ac,Namespace:kube-system,Attempt:0,} returns sandbox id \"757e833e41015415d1c9ce00f51099be472a1658264ecf1fdec2e09da635cf8e\"" Dec 13 14:32:22.826884 kubelet[1550]: E1213 14:32:22.826865 1550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:32:22.827601 env[1304]: time="2024-12-13T14:32:22.827574197Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 14:32:23.242655 kubelet[1550]: E1213 14:32:23.242579 1550 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 14:32:23.263052 env[1304]: time="2024-12-13T14:32:23.263013403Z" level=info msg="StopPodSandbox for \"f60a68f4acf03adb0dd86c614d834375614c442d6d7ddcc706bd22d81fc3178a\"" Dec 13 14:32:23.263143 env[1304]: time="2024-12-13T14:32:23.263076775Z" level=info msg="Container to stop \"6393a8bef034c0e5466cc225fdb0a844c3778fe68e85d36ce987829a643ff347\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" 
Dec 13 14:32:23.266524 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f60a68f4acf03adb0dd86c614d834375614c442d6d7ddcc706bd22d81fc3178a-shm.mount: Deactivated successfully. Dec 13 14:32:23.279038 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f60a68f4acf03adb0dd86c614d834375614c442d6d7ddcc706bd22d81fc3178a-rootfs.mount: Deactivated successfully. Dec 13 14:32:23.286839 env[1304]: time="2024-12-13T14:32:23.286789843Z" level=info msg="shim disconnected" id=f60a68f4acf03adb0dd86c614d834375614c442d6d7ddcc706bd22d81fc3178a Dec 13 14:32:23.287225 env[1304]: time="2024-12-13T14:32:23.286839117Z" level=warning msg="cleaning up after shim disconnected" id=f60a68f4acf03adb0dd86c614d834375614c442d6d7ddcc706bd22d81fc3178a namespace=k8s.io Dec 13 14:32:23.287225 env[1304]: time="2024-12-13T14:32:23.286854617Z" level=info msg="cleaning up dead shim" Dec 13 14:32:23.292379 env[1304]: time="2024-12-13T14:32:23.292319393Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:32:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3263 runtime=io.containerd.runc.v2\n" Dec 13 14:32:23.292608 env[1304]: time="2024-12-13T14:32:23.292577709Z" level=info msg="TearDown network for sandbox \"f60a68f4acf03adb0dd86c614d834375614c442d6d7ddcc706bd22d81fc3178a\" successfully" Dec 13 14:32:23.292608 env[1304]: time="2024-12-13T14:32:23.292601935Z" level=info msg="StopPodSandbox for \"f60a68f4acf03adb0dd86c614d834375614c442d6d7ddcc706bd22d81fc3178a\" returns successfully" Dec 13 14:32:23.399903 kubelet[1550]: I1213 14:32:23.399867 1550 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6d35c297-b994-4717-9bf7-2ff0120dfc34-bpf-maps\") pod \"6d35c297-b994-4717-9bf7-2ff0120dfc34\" (UID: \"6d35c297-b994-4717-9bf7-2ff0120dfc34\") " Dec 13 14:32:23.399903 kubelet[1550]: I1213 14:32:23.399901 1550 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume 
\"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6d35c297-b994-4717-9bf7-2ff0120dfc34-host-proc-sys-net\") pod \"6d35c297-b994-4717-9bf7-2ff0120dfc34\" (UID: \"6d35c297-b994-4717-9bf7-2ff0120dfc34\") " Dec 13 14:32:23.400093 kubelet[1550]: I1213 14:32:23.399919 1550 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6d35c297-b994-4717-9bf7-2ff0120dfc34-hostproc\") pod \"6d35c297-b994-4717-9bf7-2ff0120dfc34\" (UID: \"6d35c297-b994-4717-9bf7-2ff0120dfc34\") " Dec 13 14:32:23.400093 kubelet[1550]: I1213 14:32:23.399943 1550 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25t9k\" (UniqueName: \"kubernetes.io/projected/6d35c297-b994-4717-9bf7-2ff0120dfc34-kube-api-access-25t9k\") pod \"6d35c297-b994-4717-9bf7-2ff0120dfc34\" (UID: \"6d35c297-b994-4717-9bf7-2ff0120dfc34\") " Dec 13 14:32:23.400093 kubelet[1550]: I1213 14:32:23.399959 1550 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6d35c297-b994-4717-9bf7-2ff0120dfc34-etc-cni-netd\") pod \"6d35c297-b994-4717-9bf7-2ff0120dfc34\" (UID: \"6d35c297-b994-4717-9bf7-2ff0120dfc34\") " Dec 13 14:32:23.400093 kubelet[1550]: I1213 14:32:23.399975 1550 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6d35c297-b994-4717-9bf7-2ff0120dfc34-xtables-lock\") pod \"6d35c297-b994-4717-9bf7-2ff0120dfc34\" (UID: \"6d35c297-b994-4717-9bf7-2ff0120dfc34\") " Dec 13 14:32:23.400093 kubelet[1550]: I1213 14:32:23.399994 1550 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6d35c297-b994-4717-9bf7-2ff0120dfc34-cilium-config-path\") pod \"6d35c297-b994-4717-9bf7-2ff0120dfc34\" (UID: \"6d35c297-b994-4717-9bf7-2ff0120dfc34\") " Dec 13 14:32:23.400093 
kubelet[1550]: I1213 14:32:23.399987 1550 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d35c297-b994-4717-9bf7-2ff0120dfc34-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6d35c297-b994-4717-9bf7-2ff0120dfc34" (UID: "6d35c297-b994-4717-9bf7-2ff0120dfc34"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:32:23.400283 kubelet[1550]: I1213 14:32:23.400014 1550 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6d35c297-b994-4717-9bf7-2ff0120dfc34-clustermesh-secrets\") pod \"6d35c297-b994-4717-9bf7-2ff0120dfc34\" (UID: \"6d35c297-b994-4717-9bf7-2ff0120dfc34\") " Dec 13 14:32:23.400283 kubelet[1550]: I1213 14:32:23.400021 1550 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d35c297-b994-4717-9bf7-2ff0120dfc34-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6d35c297-b994-4717-9bf7-2ff0120dfc34" (UID: "6d35c297-b994-4717-9bf7-2ff0120dfc34"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:32:23.400283 kubelet[1550]: I1213 14:32:23.400030 1550 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6d35c297-b994-4717-9bf7-2ff0120dfc34-host-proc-sys-kernel\") pod \"6d35c297-b994-4717-9bf7-2ff0120dfc34\" (UID: \"6d35c297-b994-4717-9bf7-2ff0120dfc34\") " Dec 13 14:32:23.400283 kubelet[1550]: I1213 14:32:23.400016 1550 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d35c297-b994-4717-9bf7-2ff0120dfc34-hostproc" (OuterVolumeSpecName: "hostproc") pod "6d35c297-b994-4717-9bf7-2ff0120dfc34" (UID: "6d35c297-b994-4717-9bf7-2ff0120dfc34"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:32:23.400283 kubelet[1550]: I1213 14:32:23.400046 1550 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6d35c297-b994-4717-9bf7-2ff0120dfc34-hubble-tls\") pod \"6d35c297-b994-4717-9bf7-2ff0120dfc34\" (UID: \"6d35c297-b994-4717-9bf7-2ff0120dfc34\") " Dec 13 14:32:23.400405 kubelet[1550]: I1213 14:32:23.400062 1550 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6d35c297-b994-4717-9bf7-2ff0120dfc34-cni-path\") pod \"6d35c297-b994-4717-9bf7-2ff0120dfc34\" (UID: \"6d35c297-b994-4717-9bf7-2ff0120dfc34\") " Dec 13 14:32:23.400405 kubelet[1550]: I1213 14:32:23.400035 1550 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d35c297-b994-4717-9bf7-2ff0120dfc34-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6d35c297-b994-4717-9bf7-2ff0120dfc34" (UID: "6d35c297-b994-4717-9bf7-2ff0120dfc34"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:32:23.400405 kubelet[1550]: I1213 14:32:23.400078 1550 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6d35c297-b994-4717-9bf7-2ff0120dfc34-cilium-ipsec-secrets\") pod \"6d35c297-b994-4717-9bf7-2ff0120dfc34\" (UID: \"6d35c297-b994-4717-9bf7-2ff0120dfc34\") " Dec 13 14:32:23.400405 kubelet[1550]: I1213 14:32:23.400083 1550 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d35c297-b994-4717-9bf7-2ff0120dfc34-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6d35c297-b994-4717-9bf7-2ff0120dfc34" (UID: "6d35c297-b994-4717-9bf7-2ff0120dfc34"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:32:23.400405 kubelet[1550]: I1213 14:32:23.400094 1550 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6d35c297-b994-4717-9bf7-2ff0120dfc34-cilium-run\") pod \"6d35c297-b994-4717-9bf7-2ff0120dfc34\" (UID: \"6d35c297-b994-4717-9bf7-2ff0120dfc34\") " Dec 13 14:32:23.400517 kubelet[1550]: I1213 14:32:23.400118 1550 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d35c297-b994-4717-9bf7-2ff0120dfc34-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6d35c297-b994-4717-9bf7-2ff0120dfc34" (UID: "6d35c297-b994-4717-9bf7-2ff0120dfc34"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:32:23.400517 kubelet[1550]: I1213 14:32:23.400135 1550 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6d35c297-b994-4717-9bf7-2ff0120dfc34-cilium-cgroup\") pod \"6d35c297-b994-4717-9bf7-2ff0120dfc34\" (UID: \"6d35c297-b994-4717-9bf7-2ff0120dfc34\") " Dec 13 14:32:23.400517 kubelet[1550]: I1213 14:32:23.400152 1550 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6d35c297-b994-4717-9bf7-2ff0120dfc34-lib-modules\") pod \"6d35c297-b994-4717-9bf7-2ff0120dfc34\" (UID: \"6d35c297-b994-4717-9bf7-2ff0120dfc34\") " Dec 13 14:32:23.400517 kubelet[1550]: I1213 14:32:23.400177 1550 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6d35c297-b994-4717-9bf7-2ff0120dfc34-host-proc-sys-kernel\") on node \"10.0.0.142\" DevicePath \"\"" Dec 13 14:32:23.400517 kubelet[1550]: I1213 14:32:23.400188 1550 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/6d35c297-b994-4717-9bf7-2ff0120dfc34-bpf-maps\") on node \"10.0.0.142\" DevicePath \"\"" Dec 13 14:32:23.400517 kubelet[1550]: I1213 14:32:23.400197 1550 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6d35c297-b994-4717-9bf7-2ff0120dfc34-host-proc-sys-net\") on node \"10.0.0.142\" DevicePath \"\"" Dec 13 14:32:23.400517 kubelet[1550]: I1213 14:32:23.400204 1550 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6d35c297-b994-4717-9bf7-2ff0120dfc34-hostproc\") on node \"10.0.0.142\" DevicePath \"\"" Dec 13 14:32:23.400698 kubelet[1550]: I1213 14:32:23.400212 1550 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6d35c297-b994-4717-9bf7-2ff0120dfc34-etc-cni-netd\") on node \"10.0.0.142\" DevicePath \"\"" Dec 13 14:32:23.400698 kubelet[1550]: I1213 14:32:23.400221 1550 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6d35c297-b994-4717-9bf7-2ff0120dfc34-xtables-lock\") on node \"10.0.0.142\" DevicePath \"\"" Dec 13 14:32:23.400698 kubelet[1550]: I1213 14:32:23.400237 1550 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d35c297-b994-4717-9bf7-2ff0120dfc34-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6d35c297-b994-4717-9bf7-2ff0120dfc34" (UID: "6d35c297-b994-4717-9bf7-2ff0120dfc34"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:32:23.400766 kubelet[1550]: I1213 14:32:23.400754 1550 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d35c297-b994-4717-9bf7-2ff0120dfc34-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6d35c297-b994-4717-9bf7-2ff0120dfc34" (UID: "6d35c297-b994-4717-9bf7-2ff0120dfc34"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:32:23.400794 kubelet[1550]: I1213 14:32:23.400777 1550 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d35c297-b994-4717-9bf7-2ff0120dfc34-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6d35c297-b994-4717-9bf7-2ff0120dfc34" (UID: "6d35c297-b994-4717-9bf7-2ff0120dfc34"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:32:23.400853 kubelet[1550]: I1213 14:32:23.400828 1550 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d35c297-b994-4717-9bf7-2ff0120dfc34-cni-path" (OuterVolumeSpecName: "cni-path") pod "6d35c297-b994-4717-9bf7-2ff0120dfc34" (UID: "6d35c297-b994-4717-9bf7-2ff0120dfc34"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:32:23.404647 kubelet[1550]: I1213 14:32:23.402428 1550 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d35c297-b994-4717-9bf7-2ff0120dfc34-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6d35c297-b994-4717-9bf7-2ff0120dfc34" (UID: "6d35c297-b994-4717-9bf7-2ff0120dfc34"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:32:23.404647 kubelet[1550]: I1213 14:32:23.402555 1550 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d35c297-b994-4717-9bf7-2ff0120dfc34-kube-api-access-25t9k" (OuterVolumeSpecName: "kube-api-access-25t9k") pod "6d35c297-b994-4717-9bf7-2ff0120dfc34" (UID: "6d35c297-b994-4717-9bf7-2ff0120dfc34"). InnerVolumeSpecName "kube-api-access-25t9k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:32:23.404647 kubelet[1550]: I1213 14:32:23.404431 1550 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d35c297-b994-4717-9bf7-2ff0120dfc34-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6d35c297-b994-4717-9bf7-2ff0120dfc34" (UID: "6d35c297-b994-4717-9bf7-2ff0120dfc34"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:32:23.403939 systemd[1]: var-lib-kubelet-pods-6d35c297\x2db994\x2d4717\x2d9bf7\x2d2ff0120dfc34-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d25t9k.mount: Deactivated successfully. Dec 13 14:32:23.404062 systemd[1]: var-lib-kubelet-pods-6d35c297\x2db994\x2d4717\x2d9bf7\x2d2ff0120dfc34-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 14:32:23.405047 kubelet[1550]: I1213 14:32:23.404999 1550 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d35c297-b994-4717-9bf7-2ff0120dfc34-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "6d35c297-b994-4717-9bf7-2ff0120dfc34" (UID: "6d35c297-b994-4717-9bf7-2ff0120dfc34"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:32:23.405598 kubelet[1550]: I1213 14:32:23.405574 1550 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d35c297-b994-4717-9bf7-2ff0120dfc34-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6d35c297-b994-4717-9bf7-2ff0120dfc34" (UID: "6d35c297-b994-4717-9bf7-2ff0120dfc34"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:32:23.406291 systemd[1]: var-lib-kubelet-pods-6d35c297\x2db994\x2d4717\x2d9bf7\x2d2ff0120dfc34-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
Dec 13 14:32:23.406385 systemd[1]: var-lib-kubelet-pods-6d35c297\x2db994\x2d4717\x2d9bf7\x2d2ff0120dfc34-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 13 14:32:23.445742 kubelet[1550]: E1213 14:32:23.445686 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:32:23.501079 kubelet[1550]: I1213 14:32:23.500892 1550 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6d35c297-b994-4717-9bf7-2ff0120dfc34-cilium-config-path\") on node \"10.0.0.142\" DevicePath \"\""
Dec 13 14:32:23.501079 kubelet[1550]: I1213 14:32:23.500938 1550 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6d35c297-b994-4717-9bf7-2ff0120dfc34-clustermesh-secrets\") on node \"10.0.0.142\" DevicePath \"\""
Dec 13 14:32:23.501079 kubelet[1550]: I1213 14:32:23.500949 1550 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6d35c297-b994-4717-9bf7-2ff0120dfc34-hubble-tls\") on node \"10.0.0.142\" DevicePath \"\""
Dec 13 14:32:23.501079 kubelet[1550]: I1213 14:32:23.500959 1550 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6d35c297-b994-4717-9bf7-2ff0120dfc34-cni-path\") on node \"10.0.0.142\" DevicePath \"\""
Dec 13 14:32:23.501079 kubelet[1550]: I1213 14:32:23.500967 1550 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6d35c297-b994-4717-9bf7-2ff0120dfc34-cilium-ipsec-secrets\") on node \"10.0.0.142\" DevicePath \"\""
Dec 13 14:32:23.501079 kubelet[1550]: I1213 14:32:23.500976 1550 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6d35c297-b994-4717-9bf7-2ff0120dfc34-cilium-run\") on node \"10.0.0.142\" DevicePath \"\""
Dec 13 14:32:23.501079 kubelet[1550]: I1213 14:32:23.500985 1550 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6d35c297-b994-4717-9bf7-2ff0120dfc34-cilium-cgroup\") on node \"10.0.0.142\" DevicePath \"\""
Dec 13 14:32:23.501079 kubelet[1550]: I1213 14:32:23.500993 1550 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6d35c297-b994-4717-9bf7-2ff0120dfc34-lib-modules\") on node \"10.0.0.142\" DevicePath \"\""
Dec 13 14:32:23.501513 kubelet[1550]: I1213 14:32:23.501006 1550 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-25t9k\" (UniqueName: \"kubernetes.io/projected/6d35c297-b994-4717-9bf7-2ff0120dfc34-kube-api-access-25t9k\") on node \"10.0.0.142\" DevicePath \"\""
Dec 13 14:32:24.266160 kubelet[1550]: I1213 14:32:24.266132 1550 scope.go:117] "RemoveContainer" containerID="6393a8bef034c0e5466cc225fdb0a844c3778fe68e85d36ce987829a643ff347"
Dec 13 14:32:24.267336 env[1304]: time="2024-12-13T14:32:24.267292541Z" level=info msg="RemoveContainer for \"6393a8bef034c0e5466cc225fdb0a844c3778fe68e85d36ce987829a643ff347\""
Dec 13 14:32:24.309943 env[1304]: time="2024-12-13T14:32:24.309890895Z" level=info msg="RemoveContainer for \"6393a8bef034c0e5466cc225fdb0a844c3778fe68e85d36ce987829a643ff347\" returns successfully"
Dec 13 14:32:24.311013 kubelet[1550]: I1213 14:32:24.310986 1550 topology_manager.go:215] "Topology Admit Handler" podUID="575244cd-d3fa-47f1-9029-1cdabff1c63a" podNamespace="kube-system" podName="cilium-wld45"
Dec 13 14:32:24.311103 kubelet[1550]: E1213 14:32:24.311040 1550 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6d35c297-b994-4717-9bf7-2ff0120dfc34" containerName="mount-cgroup"
Dec 13 14:32:24.311103 kubelet[1550]: I1213 14:32:24.311070 1550 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d35c297-b994-4717-9bf7-2ff0120dfc34" containerName="mount-cgroup"
Dec 13 14:32:24.405721 kubelet[1550]: I1213 14:32:24.405671 1550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/575244cd-d3fa-47f1-9029-1cdabff1c63a-cilium-run\") pod \"cilium-wld45\" (UID: \"575244cd-d3fa-47f1-9029-1cdabff1c63a\") " pod="kube-system/cilium-wld45"
Dec 13 14:32:24.405721 kubelet[1550]: I1213 14:32:24.405724 1550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/575244cd-d3fa-47f1-9029-1cdabff1c63a-bpf-maps\") pod \"cilium-wld45\" (UID: \"575244cd-d3fa-47f1-9029-1cdabff1c63a\") " pod="kube-system/cilium-wld45"
Dec 13 14:32:24.405721 kubelet[1550]: I1213 14:32:24.405743 1550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/575244cd-d3fa-47f1-9029-1cdabff1c63a-cilium-ipsec-secrets\") pod \"cilium-wld45\" (UID: \"575244cd-d3fa-47f1-9029-1cdabff1c63a\") " pod="kube-system/cilium-wld45"
Dec 13 14:32:24.405960 kubelet[1550]: I1213 14:32:24.405785 1550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/575244cd-d3fa-47f1-9029-1cdabff1c63a-host-proc-sys-net\") pod \"cilium-wld45\" (UID: \"575244cd-d3fa-47f1-9029-1cdabff1c63a\") " pod="kube-system/cilium-wld45"
Dec 13 14:32:24.405960 kubelet[1550]: I1213 14:32:24.405875 1550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/575244cd-d3fa-47f1-9029-1cdabff1c63a-cilium-config-path\") pod \"cilium-wld45\" (UID: \"575244cd-d3fa-47f1-9029-1cdabff1c63a\") " pod="kube-system/cilium-wld45"
Dec 13 14:32:24.405960 kubelet[1550]: I1213 14:32:24.405922 1550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/575244cd-d3fa-47f1-9029-1cdabff1c63a-xtables-lock\") pod \"cilium-wld45\" (UID: \"575244cd-d3fa-47f1-9029-1cdabff1c63a\") " pod="kube-system/cilium-wld45"
Dec 13 14:32:24.405960 kubelet[1550]: I1213 14:32:24.405950 1550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/575244cd-d3fa-47f1-9029-1cdabff1c63a-cilium-cgroup\") pod \"cilium-wld45\" (UID: \"575244cd-d3fa-47f1-9029-1cdabff1c63a\") " pod="kube-system/cilium-wld45"
Dec 13 14:32:24.406049 kubelet[1550]: I1213 14:32:24.405966 1550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/575244cd-d3fa-47f1-9029-1cdabff1c63a-clustermesh-secrets\") pod \"cilium-wld45\" (UID: \"575244cd-d3fa-47f1-9029-1cdabff1c63a\") " pod="kube-system/cilium-wld45"
Dec 13 14:32:24.406049 kubelet[1550]: I1213 14:32:24.405982 1550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/575244cd-d3fa-47f1-9029-1cdabff1c63a-etc-cni-netd\") pod \"cilium-wld45\" (UID: \"575244cd-d3fa-47f1-9029-1cdabff1c63a\") " pod="kube-system/cilium-wld45"
Dec 13 14:32:24.406049 kubelet[1550]: I1213 14:32:24.406003 1550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/575244cd-d3fa-47f1-9029-1cdabff1c63a-host-proc-sys-kernel\") pod \"cilium-wld45\" (UID: \"575244cd-d3fa-47f1-9029-1cdabff1c63a\") " pod="kube-system/cilium-wld45"
Dec 13 14:32:24.406049 kubelet[1550]: I1213 14:32:24.406043 1550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn5hm\" (UniqueName: \"kubernetes.io/projected/575244cd-d3fa-47f1-9029-1cdabff1c63a-kube-api-access-pn5hm\") pod \"cilium-wld45\" (UID: \"575244cd-d3fa-47f1-9029-1cdabff1c63a\") " pod="kube-system/cilium-wld45"
Dec 13 14:32:24.406190 kubelet[1550]: I1213 14:32:24.406077 1550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/575244cd-d3fa-47f1-9029-1cdabff1c63a-hubble-tls\") pod \"cilium-wld45\" (UID: \"575244cd-d3fa-47f1-9029-1cdabff1c63a\") " pod="kube-system/cilium-wld45"
Dec 13 14:32:24.406190 kubelet[1550]: I1213 14:32:24.406108 1550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/575244cd-d3fa-47f1-9029-1cdabff1c63a-hostproc\") pod \"cilium-wld45\" (UID: \"575244cd-d3fa-47f1-9029-1cdabff1c63a\") " pod="kube-system/cilium-wld45"
Dec 13 14:32:24.406190 kubelet[1550]: I1213 14:32:24.406142 1550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/575244cd-d3fa-47f1-9029-1cdabff1c63a-cni-path\") pod \"cilium-wld45\" (UID: \"575244cd-d3fa-47f1-9029-1cdabff1c63a\") " pod="kube-system/cilium-wld45"
Dec 13 14:32:24.406190 kubelet[1550]: I1213 14:32:24.406168 1550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/575244cd-d3fa-47f1-9029-1cdabff1c63a-lib-modules\") pod \"cilium-wld45\" (UID: \"575244cd-d3fa-47f1-9029-1cdabff1c63a\") " pod="kube-system/cilium-wld45"
Dec 13 14:32:24.445935 kubelet[1550]: E1213 14:32:24.445869 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:32:24.455150 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2100243990.mount: Deactivated successfully.
Dec 13 14:32:24.614253 kubelet[1550]: E1213 14:32:24.614219 1550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:32:24.614826 env[1304]: time="2024-12-13T14:32:24.614776335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wld45,Uid:575244cd-d3fa-47f1-9029-1cdabff1c63a,Namespace:kube-system,Attempt:0,}"
Dec 13 14:32:24.626524 env[1304]: time="2024-12-13T14:32:24.626385188Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:32:24.626524 env[1304]: time="2024-12-13T14:32:24.626417039Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:32:24.626524 env[1304]: time="2024-12-13T14:32:24.626426628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:32:24.627313 env[1304]: time="2024-12-13T14:32:24.626738396Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/410489082a2846a0bc8cd605292fa5f4ad2bf4be9b44fadc446889443d35106f pid=3292 runtime=io.containerd.runc.v2
Dec 13 14:32:24.653802 env[1304]: time="2024-12-13T14:32:24.653744102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wld45,Uid:575244cd-d3fa-47f1-9029-1cdabff1c63a,Namespace:kube-system,Attempt:0,} returns sandbox id \"410489082a2846a0bc8cd605292fa5f4ad2bf4be9b44fadc446889443d35106f\""
Dec 13 14:32:24.654291 kubelet[1550]: E1213 14:32:24.654272 1550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:32:24.656081 env[1304]: time="2024-12-13T14:32:24.656049179Z" level=info msg="CreateContainer within sandbox \"410489082a2846a0bc8cd605292fa5f4ad2bf4be9b44fadc446889443d35106f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 14:32:24.667240 env[1304]: time="2024-12-13T14:32:24.667185778Z" level=info msg="CreateContainer within sandbox \"410489082a2846a0bc8cd605292fa5f4ad2bf4be9b44fadc446889443d35106f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"39cfb7cb8a69a88fa5f2c48fe2481b5bdae0fc111d5edecb6f5281d2292253a8\""
Dec 13 14:32:24.667938 env[1304]: time="2024-12-13T14:32:24.667900597Z" level=info msg="StartContainer for \"39cfb7cb8a69a88fa5f2c48fe2481b5bdae0fc111d5edecb6f5281d2292253a8\""
Dec 13 14:32:24.706291 env[1304]: time="2024-12-13T14:32:24.706233609Z" level=info msg="StartContainer for \"39cfb7cb8a69a88fa5f2c48fe2481b5bdae0fc111d5edecb6f5281d2292253a8\" returns successfully"
Dec 13 14:32:24.803985 env[1304]: time="2024-12-13T14:32:24.803927111Z" level=info msg="shim disconnected" id=39cfb7cb8a69a88fa5f2c48fe2481b5bdae0fc111d5edecb6f5281d2292253a8
Dec 13 14:32:24.803985 env[1304]: time="2024-12-13T14:32:24.803982889Z" level=warning msg="cleaning up after shim disconnected" id=39cfb7cb8a69a88fa5f2c48fe2481b5bdae0fc111d5edecb6f5281d2292253a8 namespace=k8s.io
Dec 13 14:32:24.803985 env[1304]: time="2024-12-13T14:32:24.803991315Z" level=info msg="cleaning up dead shim"
Dec 13 14:32:24.810853 env[1304]: time="2024-12-13T14:32:24.810805692Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:32:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3374 runtime=io.containerd.runc.v2\n"
Dec 13 14:32:25.057683 env[1304]: time="2024-12-13T14:32:25.057599585Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:32:25.059356 env[1304]: time="2024-12-13T14:32:25.059314860Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:32:25.060661 env[1304]: time="2024-12-13T14:32:25.060630390Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:32:25.061099 env[1304]: time="2024-12-13T14:32:25.061068819Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Dec 13 14:32:25.062589 env[1304]: time="2024-12-13T14:32:25.062566418Z" level=info msg="CreateContainer within sandbox \"757e833e41015415d1c9ce00f51099be472a1658264ecf1fdec2e09da635cf8e\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Dec 13 14:32:25.072214 env[1304]: time="2024-12-13T14:32:25.072178458Z" level=info msg="CreateContainer within sandbox \"757e833e41015415d1c9ce00f51099be472a1658264ecf1fdec2e09da635cf8e\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"bc7dc16ed454e88f2d260e57553d9ee5aa086ded964b5c20ebcfcc7c0eacd757\""
Dec 13 14:32:25.072479 env[1304]: time="2024-12-13T14:32:25.072443626Z" level=info msg="StartContainer for \"bc7dc16ed454e88f2d260e57553d9ee5aa086ded964b5c20ebcfcc7c0eacd757\""
Dec 13 14:32:25.219487 env[1304]: time="2024-12-13T14:32:25.219414221Z" level=info msg="StartContainer for \"bc7dc16ed454e88f2d260e57553d9ee5aa086ded964b5c20ebcfcc7c0eacd757\" returns successfully"
Dec 13 14:32:25.270176 kubelet[1550]: E1213 14:32:25.270146 1550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:32:25.271447 kubelet[1550]: E1213 14:32:25.271418 1550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:32:25.271715 env[1304]: time="2024-12-13T14:32:25.271675388Z" level=info msg="CreateContainer within sandbox \"410489082a2846a0bc8cd605292fa5f4ad2bf4be9b44fadc446889443d35106f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 14:32:25.284449 env[1304]: time="2024-12-13T14:32:25.284389319Z" level=info msg="CreateContainer within sandbox \"410489082a2846a0bc8cd605292fa5f4ad2bf4be9b44fadc446889443d35106f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"cc49807354ddfefb5c3334916e6425b99721567ce1d55b51f4890a9bcca5985f\""
Dec 13 14:32:25.285022 env[1304]: time="2024-12-13T14:32:25.284980772Z" level=info msg="StartContainer for \"cc49807354ddfefb5c3334916e6425b99721567ce1d55b51f4890a9bcca5985f\""
Dec 13 14:32:25.290063 kubelet[1550]: I1213 14:32:25.290028 1550 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-stdln" podStartSLOduration=2.056034259 podStartE2EDuration="4.289985207s" podCreationTimestamp="2024-12-13 14:32:21 +0000 UTC" firstStartedPulling="2024-12-13 14:32:22.827374483 +0000 UTC m=+65.760850431" lastFinishedPulling="2024-12-13 14:32:25.061325431 +0000 UTC m=+67.994801379" observedRunningTime="2024-12-13 14:32:25.289775185 +0000 UTC m=+68.223251133" watchObservedRunningTime="2024-12-13 14:32:25.289985207 +0000 UTC m=+68.223461155"
Dec 13 14:32:25.328979 env[1304]: time="2024-12-13T14:32:25.328916633Z" level=info msg="StartContainer for \"cc49807354ddfefb5c3334916e6425b99721567ce1d55b51f4890a9bcca5985f\" returns successfully"
Dec 13 14:32:25.351231 env[1304]: time="2024-12-13T14:32:25.351178321Z" level=info msg="shim disconnected" id=cc49807354ddfefb5c3334916e6425b99721567ce1d55b51f4890a9bcca5985f
Dec 13 14:32:25.351231 env[1304]: time="2024-12-13T14:32:25.351226213Z" level=warning msg="cleaning up after shim disconnected" id=cc49807354ddfefb5c3334916e6425b99721567ce1d55b51f4890a9bcca5985f namespace=k8s.io
Dec 13 14:32:25.351231 env[1304]: time="2024-12-13T14:32:25.351234208Z" level=info msg="cleaning up dead shim"
Dec 13 14:32:25.357090 env[1304]: time="2024-12-13T14:32:25.357062141Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:32:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3473 runtime=io.containerd.runc.v2\n"
Dec 13 14:32:25.446082 kubelet[1550]: E1213 14:32:25.446033 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:32:26.133205 kubelet[1550]: I1213 14:32:26.133164 1550 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="6d35c297-b994-4717-9bf7-2ff0120dfc34" path="/var/lib/kubelet/pods/6d35c297-b994-4717-9bf7-2ff0120dfc34/volumes"
Dec 13 14:32:26.274576 kubelet[1550]: E1213 14:32:26.274546 1550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:32:26.275332 kubelet[1550]: E1213 14:32:26.275311 1550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:32:26.276465 env[1304]: time="2024-12-13T14:32:26.276426668Z" level=info msg="CreateContainer within sandbox \"410489082a2846a0bc8cd605292fa5f4ad2bf4be9b44fadc446889443d35106f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 14:32:26.291225 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2444318042.mount: Deactivated successfully.
Dec 13 14:32:26.293885 env[1304]: time="2024-12-13T14:32:26.293839105Z" level=info msg="CreateContainer within sandbox \"410489082a2846a0bc8cd605292fa5f4ad2bf4be9b44fadc446889443d35106f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8a6a0eee832feddd7b58d252ecba37b5c340353b0f1e551001ae93cf8d220012\""
Dec 13 14:32:26.294369 env[1304]: time="2024-12-13T14:32:26.294337790Z" level=info msg="StartContainer for \"8a6a0eee832feddd7b58d252ecba37b5c340353b0f1e551001ae93cf8d220012\""
Dec 13 14:32:26.338012 env[1304]: time="2024-12-13T14:32:26.337967902Z" level=info msg="StartContainer for \"8a6a0eee832feddd7b58d252ecba37b5c340353b0f1e551001ae93cf8d220012\" returns successfully"
Dec 13 14:32:26.356445 env[1304]: time="2024-12-13T14:32:26.356392226Z" level=info msg="shim disconnected" id=8a6a0eee832feddd7b58d252ecba37b5c340353b0f1e551001ae93cf8d220012
Dec 13 14:32:26.356445 env[1304]: time="2024-12-13T14:32:26.356444236Z" level=warning msg="cleaning up after shim disconnected" id=8a6a0eee832feddd7b58d252ecba37b5c340353b0f1e551001ae93cf8d220012 namespace=k8s.io
Dec 13 14:32:26.356666 env[1304]: time="2024-12-13T14:32:26.356454957Z" level=info msg="cleaning up dead shim"
Dec 13 14:32:26.363031 env[1304]: time="2024-12-13T14:32:26.362982874Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:32:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3529 runtime=io.containerd.runc.v2\n"
Dec 13 14:32:26.446559 kubelet[1550]: E1213 14:32:26.446401 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:32:26.451971 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8a6a0eee832feddd7b58d252ecba37b5c340353b0f1e551001ae93cf8d220012-rootfs.mount: Deactivated successfully.
Dec 13 14:32:27.277949 kubelet[1550]: E1213 14:32:27.277919 1550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:32:27.279514 env[1304]: time="2024-12-13T14:32:27.279449490Z" level=info msg="CreateContainer within sandbox \"410489082a2846a0bc8cd605292fa5f4ad2bf4be9b44fadc446889443d35106f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 14:32:27.446902 kubelet[1550]: E1213 14:32:27.446831 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:32:27.473819 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4224002917.mount: Deactivated successfully.
Dec 13 14:32:27.572932 env[1304]: time="2024-12-13T14:32:27.572562943Z" level=info msg="CreateContainer within sandbox \"410489082a2846a0bc8cd605292fa5f4ad2bf4be9b44fadc446889443d35106f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"13339448371bcc9fc7731b804f4df4a19636d339c6cb7c91ab4ca094eb818b95\""
Dec 13 14:32:27.573482 env[1304]: time="2024-12-13T14:32:27.573158683Z" level=info msg="StartContainer for \"13339448371bcc9fc7731b804f4df4a19636d339c6cb7c91ab4ca094eb818b95\""
Dec 13 14:32:27.612834 env[1304]: time="2024-12-13T14:32:27.612781755Z" level=info msg="StartContainer for \"13339448371bcc9fc7731b804f4df4a19636d339c6cb7c91ab4ca094eb818b95\" returns successfully"
Dec 13 14:32:27.626329 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-13339448371bcc9fc7731b804f4df4a19636d339c6cb7c91ab4ca094eb818b95-rootfs.mount: Deactivated successfully.
Dec 13 14:32:27.629526 env[1304]: time="2024-12-13T14:32:27.629470395Z" level=info msg="shim disconnected" id=13339448371bcc9fc7731b804f4df4a19636d339c6cb7c91ab4ca094eb818b95
Dec 13 14:32:27.629606 env[1304]: time="2024-12-13T14:32:27.629525230Z" level=warning msg="cleaning up after shim disconnected" id=13339448371bcc9fc7731b804f4df4a19636d339c6cb7c91ab4ca094eb818b95 namespace=k8s.io
Dec 13 14:32:27.629606 env[1304]: time="2024-12-13T14:32:27.629538415Z" level=info msg="cleaning up dead shim"
Dec 13 14:32:27.635829 env[1304]: time="2024-12-13T14:32:27.635800089Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:32:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3583 runtime=io.containerd.runc.v2\n"
Dec 13 14:32:28.242994 kubelet[1550]: E1213 14:32:28.242958 1550 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 14:32:28.281613 kubelet[1550]: E1213 14:32:28.281586 1550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:32:28.283672 env[1304]: time="2024-12-13T14:32:28.283590571Z" level=info msg="CreateContainer within sandbox \"410489082a2846a0bc8cd605292fa5f4ad2bf4be9b44fadc446889443d35106f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 14:32:28.298111 env[1304]: time="2024-12-13T14:32:28.298056998Z" level=info msg="CreateContainer within sandbox \"410489082a2846a0bc8cd605292fa5f4ad2bf4be9b44fadc446889443d35106f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9da6a5a09a98d035d0becbae44dd8ea47167b1b9c71d6082c34f9cba2d890b14\""
Dec 13 14:32:28.298533 env[1304]: time="2024-12-13T14:32:28.298489335Z" level=info msg="StartContainer for \"9da6a5a09a98d035d0becbae44dd8ea47167b1b9c71d6082c34f9cba2d890b14\""
Dec 13 14:32:28.438274 env[1304]: time="2024-12-13T14:32:28.438212830Z" level=info msg="StartContainer for \"9da6a5a09a98d035d0becbae44dd8ea47167b1b9c71d6082c34f9cba2d890b14\" returns successfully"
Dec 13 14:32:28.447819 kubelet[1550]: E1213 14:32:28.447768 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:32:28.604288 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Dec 13 14:32:29.285637 kubelet[1550]: E1213 14:32:29.285597 1550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:32:29.297919 kubelet[1550]: I1213 14:32:29.297899 1550 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-wld45" podStartSLOduration=5.297870746 podStartE2EDuration="5.297870746s" podCreationTimestamp="2024-12-13 14:32:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:32:29.297698806 +0000 UTC m=+72.231174754" watchObservedRunningTime="2024-12-13 14:32:29.297870746 +0000 UTC m=+72.231346694"
Dec 13 14:32:29.448682 kubelet[1550]: E1213 14:32:29.448645 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:32:30.236284 kubelet[1550]: I1213 14:32:30.236221 1550 setters.go:568] "Node became not ready" node="10.0.0.142" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T14:32:30Z","lastTransitionTime":"2024-12-13T14:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 13 14:32:30.449806 kubelet[1550]: E1213 14:32:30.449746 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:32:30.615781 kubelet[1550]: E1213 14:32:30.615747 1550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:32:31.123830 systemd-networkd[1085]: lxc_health: Link UP
Dec 13 14:32:31.133351 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 14:32:31.133863 systemd-networkd[1085]: lxc_health: Gained carrier
Dec 13 14:32:31.450254 kubelet[1550]: E1213 14:32:31.450124 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:32:32.438356 systemd-networkd[1085]: lxc_health: Gained IPv6LL
Dec 13 14:32:32.450786 kubelet[1550]: E1213 14:32:32.450716 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:32:32.615986 kubelet[1550]: E1213 14:32:32.615954 1550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:32:33.292205 kubelet[1550]: E1213 14:32:33.292168 1550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:32:33.451909 kubelet[1550]: E1213 14:32:33.451853 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:32:34.453067 kubelet[1550]: E1213 14:32:34.452960 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:32:35.453305 kubelet[1550]: E1213 14:32:35.453249 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:32:36.453592 kubelet[1550]: E1213 14:32:36.453529 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:32:36.666373 systemd[1]: run-containerd-runc-k8s.io-9da6a5a09a98d035d0becbae44dd8ea47167b1b9c71d6082c34f9cba2d890b14-runc.6ERtxI.mount: Deactivated successfully.
Dec 13 14:32:37.402800 kubelet[1550]: E1213 14:32:37.402748 1550 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:32:37.453924 kubelet[1550]: E1213 14:32:37.453881 1550 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"