Jan 29 16:25:32.867137 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 14:51:22 -00 2025
Jan 29 16:25:32.867161 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=baa4132e9c604885344fa8e79d67c80ef841a135b233c762ecfe0386901a895d
Jan 29 16:25:32.867172 kernel: BIOS-provided physical RAM map:
Jan 29 16:25:32.867179 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 29 16:25:32.867185 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 29 16:25:32.867191 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 29 16:25:32.867199 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 29 16:25:32.867205 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 29 16:25:32.867212 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 29 16:25:32.867220 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 29 16:25:32.867227 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 29 16:25:32.867233 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 29 16:25:32.867240 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 29 16:25:32.867246 kernel: NX (Execute Disable) protection: active
Jan 29 16:25:32.867254 kernel: APIC: Static calls initialized
Jan 29 16:25:32.867266 kernel: SMBIOS 2.8 present.
Jan 29 16:25:32.867276 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jan 29 16:25:32.867285 kernel: Hypervisor detected: KVM
Jan 29 16:25:32.867308 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 29 16:25:32.867316 kernel: kvm-clock: using sched offset of 2285340279 cycles
Jan 29 16:25:32.867323 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 29 16:25:32.867331 kernel: tsc: Detected 2794.748 MHz processor
Jan 29 16:25:32.867339 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 29 16:25:32.867346 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 29 16:25:32.867353 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 29 16:25:32.867364 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 29 16:25:32.867371 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Jan 29 16:25:32.867379 kernel: Using GB pages for direct mapping
Jan 29 16:25:32.867386 kernel: ACPI: Early table checksum verification disabled
Jan 29 16:25:32.867393 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jan 29 16:25:32.867400 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 29 16:25:32.867408 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 29 16:25:32.867415 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 29 16:25:32.867422 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jan 29 16:25:32.867432 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 29 16:25:32.867439 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 29 16:25:32.867446 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 29 16:25:32.867454 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 29 16:25:32.867461 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Jan 29 16:25:32.867468 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Jan 29 16:25:32.867479 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jan 29 16:25:32.867488 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Jan 29 16:25:32.867495 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Jan 29 16:25:32.867503 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Jan 29 16:25:32.867510 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Jan 29 16:25:32.867517 kernel: No NUMA configuration found
Jan 29 16:25:32.867524 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jan 29 16:25:32.867532 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Jan 29 16:25:32.867541 kernel: Zone ranges:
Jan 29 16:25:32.867549 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jan 29 16:25:32.867556 kernel:   DMA32    [mem 0x0000000001000000-0x000000009cfdbfff]
Jan 29 16:25:32.867563 kernel:   Normal   empty
Jan 29 16:25:32.867571 kernel: Movable zone start for each node
Jan 29 16:25:32.867578 kernel: Early memory node ranges
Jan 29 16:25:32.867585 kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Jan 29 16:25:32.867593 kernel:   node   0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jan 29 16:25:32.867600 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jan 29 16:25:32.867610 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 29 16:25:32.867617 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 29 16:25:32.867625 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 29 16:25:32.867632 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 29 16:25:32.867639 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 29 16:25:32.867647 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 29 16:25:32.867654 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 29 16:25:32.867661 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 29 16:25:32.867669 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 29 16:25:32.867678 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 29 16:25:32.867686 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 29 16:25:32.867693 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 29 16:25:32.867701 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 29 16:25:32.867708 kernel: TSC deadline timer available
Jan 29 16:25:32.867715 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 29 16:25:32.867723 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 29 16:25:32.867730 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 29 16:25:32.867737 kernel: kvm-guest: setup PV sched yield
Jan 29 16:25:32.867745 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 29 16:25:32.867754 kernel: Booting paravirtualized kernel on KVM
Jan 29 16:25:32.867762 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 29 16:25:32.867769 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 29 16:25:32.867777 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Jan 29 16:25:32.867791 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Jan 29 16:25:32.867799 kernel: pcpu-alloc: [0] 0 1 2 3 
Jan 29 16:25:32.867806 kernel: kvm-guest: PV spinlocks enabled
Jan 29 16:25:32.867814 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 29 16:25:32.867822 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=baa4132e9c604885344fa8e79d67c80ef841a135b233c762ecfe0386901a895d
Jan 29 16:25:32.867833 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 16:25:32.867840 kernel: random: crng init done
Jan 29 16:25:32.867847 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 29 16:25:32.867855 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 16:25:32.867862 kernel: Fallback order for Node 0: 0 
Jan 29 16:25:32.867870 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 632732
Jan 29 16:25:32.867877 kernel: Policy zone: DMA32
Jan 29 16:25:32.867885 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 16:25:32.867894 kernel: Memory: 2432544K/2571752K available (14336K kernel code, 2301K rwdata, 22852K rodata, 43472K init, 1600K bss, 138948K reserved, 0K cma-reserved)
Jan 29 16:25:32.867902 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 29 16:25:32.867909 kernel: ftrace: allocating 37893 entries in 149 pages
Jan 29 16:25:32.867917 kernel: ftrace: allocated 149 pages with 4 groups
Jan 29 16:25:32.867924 kernel: Dynamic Preempt: voluntary
Jan 29 16:25:32.867931 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 16:25:32.867939 kernel: rcu:         RCU event tracing is enabled.
Jan 29 16:25:32.867947 kernel: rcu:         RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 29 16:25:32.867954 kernel:         Trampoline variant of Tasks RCU enabled.
Jan 29 16:25:32.867964 kernel:         Rude variant of Tasks RCU enabled.
Jan 29 16:25:32.867971 kernel:         Tracing variant of Tasks RCU enabled.
Jan 29 16:25:32.867979 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 16:25:32.867986 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 29 16:25:32.867993 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 29 16:25:32.868001 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 16:25:32.868008 kernel: Console: colour VGA+ 80x25
Jan 29 16:25:32.868015 kernel: printk: console [ttyS0] enabled
Jan 29 16:25:32.868022 kernel: ACPI: Core revision 20230628
Jan 29 16:25:32.868032 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 29 16:25:32.868039 kernel: APIC: Switch to symmetric I/O mode setup
Jan 29 16:25:32.868047 kernel: x2apic enabled
Jan 29 16:25:32.868054 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 29 16:25:32.868061 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 29 16:25:32.868069 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 29 16:25:32.868076 kernel: kvm-guest: setup PV IPIs
Jan 29 16:25:32.868092 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 29 16:25:32.868100 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 29 16:25:32.868108 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Jan 29 16:25:32.868115 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 29 16:25:32.868123 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 29 16:25:32.868132 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 29 16:25:32.868140 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 29 16:25:32.868148 kernel: Spectre V2 : Mitigation: Retpolines
Jan 29 16:25:32.868156 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 29 16:25:32.868163 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 29 16:25:32.868173 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 29 16:25:32.868181 kernel: RETBleed: Mitigation: untrained return thunk
Jan 29 16:25:32.868189 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 29 16:25:32.868201 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 29 16:25:32.868208 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 29 16:25:32.868217 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 29 16:25:32.868224 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 29 16:25:32.868232 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 29 16:25:32.868242 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 29 16:25:32.868249 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 29 16:25:32.868257 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Jan 29 16:25:32.868265 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 29 16:25:32.868272 kernel: Freeing SMP alternatives memory: 32K
Jan 29 16:25:32.868280 kernel: pid_max: default: 32768 minimum: 301
Jan 29 16:25:32.868287 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 16:25:32.868307 kernel: landlock: Up and running.
Jan 29 16:25:32.868314 kernel: SELinux:  Initializing.
Jan 29 16:25:32.868325 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 16:25:32.868332 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 16:25:32.868340 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 29 16:25:32.868348 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 16:25:32.868356 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 16:25:32.868364 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 16:25:32.868371 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 29 16:25:32.868379 kernel: ... version:                0
Jan 29 16:25:32.868386 kernel: ... bit width:              48
Jan 29 16:25:32.868396 kernel: ... generic registers:      6
Jan 29 16:25:32.868404 kernel: ... value mask:             0000ffffffffffff
Jan 29 16:25:32.868412 kernel: ... max period:             00007fffffffffff
Jan 29 16:25:32.868419 kernel: ... fixed-purpose events:   0
Jan 29 16:25:32.868427 kernel: ... event mask:             000000000000003f
Jan 29 16:25:32.868434 kernel: signal: max sigframe size: 1776
Jan 29 16:25:32.868442 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 16:25:32.868450 kernel: rcu:         Max phase no-delay instances is 400.
Jan 29 16:25:32.868458 kernel: smp: Bringing up secondary CPUs ...
Jan 29 16:25:32.868467 kernel: smpboot: x86: Booting SMP configuration:
Jan 29 16:25:32.868475 kernel: .... node  #0, CPUs:      #1 #2 #3
Jan 29 16:25:32.868483 kernel: smp: Brought up 1 node, 4 CPUs
Jan 29 16:25:32.868490 kernel: smpboot: Max logical packages: 1
Jan 29 16:25:32.868498 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Jan 29 16:25:32.868505 kernel: devtmpfs: initialized
Jan 29 16:25:32.868513 kernel: x86/mm: Memory block size: 128MB
Jan 29 16:25:32.868521 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 16:25:32.868529 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 29 16:25:32.868538 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 16:25:32.868546 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 16:25:32.868554 kernel: audit: initializing netlink subsys (disabled)
Jan 29 16:25:32.868561 kernel: audit: type=2000 audit(1738167933.128:1): state=initialized audit_enabled=0 res=1
Jan 29 16:25:32.868569 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 16:25:32.868576 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 29 16:25:32.868584 kernel: cpuidle: using governor menu
Jan 29 16:25:32.868592 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 16:25:32.868599 kernel: dca service started, version 1.12.1
Jan 29 16:25:32.868609 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 29 16:25:32.868617 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 29 16:25:32.868625 kernel: PCI: Using configuration type 1 for base access
Jan 29 16:25:32.868633 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 29 16:25:32.868640 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 29 16:25:32.868648 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 29 16:25:32.868656 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 16:25:32.868663 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 16:25:32.868671 kernel: ACPI: Added _OSI(Module Device)
Jan 29 16:25:32.868681 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 16:25:32.868688 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 16:25:32.868696 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 16:25:32.868703 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 16:25:32.868711 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 29 16:25:32.868718 kernel: ACPI: Interpreter enabled
Jan 29 16:25:32.868726 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 29 16:25:32.868734 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 29 16:25:32.868741 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 29 16:25:32.868751 kernel: PCI: Using E820 reservations for host bridge windows
Jan 29 16:25:32.868759 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 29 16:25:32.868766 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 29 16:25:32.869037 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 29 16:25:32.869236 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 29 16:25:32.869414 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 29 16:25:32.869427 kernel: PCI host bridge to bus 0000:00
Jan 29 16:25:32.869563 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Jan 29 16:25:32.869684 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Jan 29 16:25:32.869810 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 29 16:25:32.869930 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 29 16:25:32.870043 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 29 16:25:32.870155 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 29 16:25:32.870271 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 29 16:25:32.870431 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 29 16:25:32.870757 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 29 16:25:32.870901 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Jan 29 16:25:32.871025 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Jan 29 16:25:32.871147 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Jan 29 16:25:32.871269 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 29 16:25:32.871445 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 29 16:25:32.871607 kernel: pci 0000:00:02.0: reg 0x10: [io  0xc0c0-0xc0df]
Jan 29 16:25:32.871732 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Jan 29 16:25:32.871866 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 29 16:25:32.872000 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 29 16:25:32.872127 kernel: pci 0000:00:03.0: reg 0x10: [io  0xc000-0xc07f]
Jan 29 16:25:32.872250 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Jan 29 16:25:32.872387 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 29 16:25:32.872525 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 29 16:25:32.872649 kernel: pci 0000:00:04.0: reg 0x10: [io  0xc0e0-0xc0ff]
Jan 29 16:25:32.872773 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Jan 29 16:25:32.872912 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Jan 29 16:25:32.873035 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Jan 29 16:25:32.873167 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 29 16:25:32.873322 kernel: pci 0000:00:1f.0: quirk: [io  0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 29 16:25:32.873466 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 29 16:25:32.873591 kernel: pci 0000:00:1f.2: reg 0x20: [io  0xc100-0xc11f]
Jan 29 16:25:32.873715 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Jan 29 16:25:32.873859 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 29 16:25:32.873984 kernel: pci 0000:00:1f.3: reg 0x20: [io  0x0700-0x073f]
Jan 29 16:25:32.873995 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 29 16:25:32.874007 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 29 16:25:32.874014 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 29 16:25:32.874022 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 29 16:25:32.874030 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 29 16:25:32.874037 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 29 16:25:32.874045 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 29 16:25:32.874053 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 29 16:25:32.874061 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 29 16:25:32.874068 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 29 16:25:32.874079 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 29 16:25:32.874086 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 29 16:25:32.874094 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 29 16:25:32.874101 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 29 16:25:32.874109 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 29 16:25:32.874117 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 29 16:25:32.874124 kernel: iommu: Default domain type: Translated
Jan 29 16:25:32.874132 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 29 16:25:32.874140 kernel: PCI: Using ACPI for IRQ routing
Jan 29 16:25:32.874150 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 29 16:25:32.874157 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 29 16:25:32.874165 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jan 29 16:25:32.874305 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 29 16:25:32.874432 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 29 16:25:32.874554 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 29 16:25:32.874565 kernel: vgaarb: loaded
Jan 29 16:25:32.874573 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 29 16:25:32.874584 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 29 16:25:32.874592 kernel: clocksource: Switched to clocksource kvm-clock
Jan 29 16:25:32.874600 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 16:25:32.874608 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 16:25:32.874616 kernel: pnp: PnP ACPI init
Jan 29 16:25:32.874752 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 29 16:25:32.874764 kernel: pnp: PnP ACPI: found 6 devices
Jan 29 16:25:32.874772 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 29 16:25:32.874793 kernel: NET: Registered PF_INET protocol family
Jan 29 16:25:32.874801 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 29 16:25:32.874809 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 29 16:25:32.874817 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 16:25:32.874825 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 29 16:25:32.874833 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 29 16:25:32.874840 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 29 16:25:32.874848 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 16:25:32.874856 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 16:25:32.874866 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 16:25:32.874874 kernel: NET: Registered PF_XDP protocol family
Jan 29 16:25:32.874997 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Jan 29 16:25:32.875113 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Jan 29 16:25:32.875228 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 29 16:25:32.875400 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 29 16:25:32.875515 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 29 16:25:32.875628 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 29 16:25:32.875642 kernel: PCI: CLS 0 bytes, default 64
Jan 29 16:25:32.875650 kernel: Initialise system trusted keyrings
Jan 29 16:25:32.875658 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 29 16:25:32.875666 kernel: Key type asymmetric registered
Jan 29 16:25:32.875673 kernel: Asymmetric key parser 'x509' registered
Jan 29 16:25:32.875681 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 29 16:25:32.875689 kernel: io scheduler mq-deadline registered
Jan 29 16:25:32.875696 kernel: io scheduler kyber registered
Jan 29 16:25:32.875704 kernel: io scheduler bfq registered
Jan 29 16:25:32.875712 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 29 16:25:32.875722 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 29 16:25:32.875730 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 29 16:25:32.875738 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 29 16:25:32.875746 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 29 16:25:32.875753 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 29 16:25:32.875761 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 29 16:25:32.875769 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 29 16:25:32.875777 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 29 16:25:32.875793 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 29 16:25:32.875951 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 29 16:25:32.876070 kernel: rtc_cmos 00:04: registered as rtc0
Jan 29 16:25:32.876185 kernel: rtc_cmos 00:04: setting system clock to 2025-01-29T16:25:32 UTC (1738167932)
Jan 29 16:25:32.876315 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 29 16:25:32.876326 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 29 16:25:32.876334 kernel: NET: Registered PF_INET6 protocol family
Jan 29 16:25:32.876341 kernel: Segment Routing with IPv6
Jan 29 16:25:32.876353 kernel: In-situ OAM (IOAM) with IPv6
Jan 29 16:25:32.876361 kernel: NET: Registered PF_PACKET protocol family
Jan 29 16:25:32.876368 kernel: Key type dns_resolver registered
Jan 29 16:25:32.876376 kernel: IPI shorthand broadcast: enabled
Jan 29 16:25:32.876384 kernel: sched_clock: Marking stable (546002951, 109146461)->(707935562, -52786150)
Jan 29 16:25:32.876392 kernel: registered taskstats version 1
Jan 29 16:25:32.876399 kernel: Loading compiled-in X.509 certificates
Jan 29 16:25:32.876407 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 68134fdf6dac3690da6e3bc9c22b042a5c364340'
Jan 29 16:25:32.876415 kernel: Key type .fscrypt registered
Jan 29 16:25:32.876422 kernel: Key type fscrypt-provisioning registered
Jan 29 16:25:32.876433 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 29 16:25:32.876440 kernel: ima: Allocated hash algorithm: sha1
Jan 29 16:25:32.876448 kernel: ima: No architecture policies found
Jan 29 16:25:32.876455 kernel: clk: Disabling unused clocks
Jan 29 16:25:32.876463 kernel: Freeing unused kernel image (initmem) memory: 43472K
Jan 29 16:25:32.876471 kernel: Write protecting the kernel read-only data: 38912k
Jan 29 16:25:32.876478 kernel: Freeing unused kernel image (rodata/data gap) memory: 1724K
Jan 29 16:25:32.876486 kernel: Run /init as init process
Jan 29 16:25:32.876496 kernel:   with arguments:
Jan 29 16:25:32.876504 kernel:     /init
Jan 29 16:25:32.876511 kernel:   with environment:
Jan 29 16:25:32.876519 kernel:     HOME=/
Jan 29 16:25:32.876526 kernel:     TERM=linux
Jan 29 16:25:32.876534 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 29 16:25:32.876542 systemd[1]: Successfully made /usr/ read-only.
Jan 29 16:25:32.876553 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 29 16:25:32.876564 systemd[1]: Detected virtualization kvm.
Jan 29 16:25:32.876572 systemd[1]: Detected architecture x86-64.
Jan 29 16:25:32.876580 systemd[1]: Running in initrd.
Jan 29 16:25:32.876588 systemd[1]: No hostname configured, using default hostname.
Jan 29 16:25:32.876596 systemd[1]: Hostname set to <localhost>.
Jan 29 16:25:32.876604 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 16:25:32.876613 systemd[1]: Queued start job for default target initrd.target.
Jan 29 16:25:32.876621 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 16:25:32.876632 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 16:25:32.876651 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 29 16:25:32.876662 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 16:25:32.876671 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 29 16:25:32.876680 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 29 16:25:32.876692 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 29 16:25:32.876700 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 29 16:25:32.876709 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 16:25:32.876717 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 16:25:32.876726 systemd[1]: Reached target paths.target - Path Units.
Jan 29 16:25:32.876734 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 16:25:32.876743 systemd[1]: Reached target swap.target - Swaps.
Jan 29 16:25:32.876751 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 16:25:32.876762 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 16:25:32.876770 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 16:25:32.876779 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 16:25:32.876796 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 29 16:25:32.876805 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 16:25:32.876813 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 16:25:32.876822 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 16:25:32.876830 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 16:25:32.876838 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 29 16:25:32.876850 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 16:25:32.876858 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 29 16:25:32.876866 systemd[1]: Starting systemd-fsck-usr.service...
Jan 29 16:25:32.876875 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 16:25:32.876884 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 16:25:32.876892 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:25:32.876901 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 29 16:25:32.876909 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 16:25:32.876921 systemd[1]: Finished systemd-fsck-usr.service.
Jan 29 16:25:32.876949 systemd-journald[193]: Collecting audit messages is disabled.
Jan 29 16:25:32.876973 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 16:25:32.876982 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 16:25:32.876991 systemd-journald[193]: Journal started
Jan 29 16:25:32.877012 systemd-journald[193]: Runtime Journal (/run/log/journal/59a4e3d2b7404b84bdf4754b502ab6ec) is 6M, max 48.4M, 42.3M free.
Jan 29 16:25:32.869364 systemd-modules-load[195]: Inserted module 'overlay'
Jan 29 16:25:32.911286 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 29 16:25:32.911322 kernel: Bridge firewalling registered
Jan 29 16:25:32.895808 systemd-modules-load[195]: Inserted module 'br_netfilter'
Jan 29 16:25:32.913923 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 16:25:32.914338 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 16:25:32.916595 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:25:32.935439 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 16:25:32.938602 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 16:25:32.941679 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 16:25:32.945723 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 16:25:32.952242 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 16:25:32.954145 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 16:25:32.956737 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:25:32.960617 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 29 16:25:32.968018 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 16:25:32.975441 dracut-cmdline[229]: dracut-dracut-053
Jan 29 16:25:32.982233 dracut-cmdline[229]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=baa4132e9c604885344fa8e79d67c80ef841a135b233c762ecfe0386901a895d
Jan 29 16:25:32.980450 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 16:25:33.028172 systemd-resolved[237]: Positive Trust Anchors:
Jan 29 16:25:33.028191 systemd-resolved[237]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 16:25:33.028235 systemd-resolved[237]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 16:25:33.041534 systemd-resolved[237]: Defaulting to hostname 'linux'.
Jan 29 16:25:33.043794 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 16:25:33.044085 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 16:25:33.058321 kernel: SCSI subsystem initialized
Jan 29 16:25:33.067312 kernel: Loading iSCSI transport class v2.0-870.
Jan 29 16:25:33.078322 kernel: iscsi: registered transport (tcp)
Jan 29 16:25:33.105749 kernel: iscsi: registered transport (qla4xxx)
Jan 29 16:25:33.105789 kernel: QLogic iSCSI HBA Driver
Jan 29 16:25:33.149029 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 29 16:25:33.157473 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 29 16:25:33.181224 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 29 16:25:33.181263 kernel: device-mapper: uevent: version 1.0.3
Jan 29 16:25:33.181283 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 29 16:25:33.221319 kernel: raid6: avx2x4   gen() 30109 MB/s
Jan 29 16:25:33.238315 kernel: raid6: avx2x2   gen() 30732 MB/s
Jan 29 16:25:33.255388 kernel: raid6: avx2x1   gen() 25842 MB/s
Jan 29 16:25:33.255407 kernel: raid6: using algorithm avx2x2 gen() 30732 MB/s
Jan 29 16:25:33.273408 kernel: raid6: .... xor() 19803 MB/s, rmw enabled
Jan 29 16:25:33.273434 kernel: raid6: using avx2x2 recovery algorithm
Jan 29 16:25:33.294315 kernel: xor: automatically using best checksumming function   avx       
Jan 29 16:25:33.438325 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 29 16:25:33.449898 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 16:25:33.462473 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 16:25:33.479319 systemd-udevd[416]: Using default interface naming scheme 'v255'.
Jan 29 16:25:33.485044 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 16:25:33.494469 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 29 16:25:33.506815 dracut-pre-trigger[425]: rd.md=0: removing MD RAID activation
Jan 29 16:25:33.540022 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 16:25:33.551486 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 16:25:33.616452 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 16:25:33.627523 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 29 16:25:33.640422 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 29 16:25:33.643747 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 16:25:33.646639 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 16:25:33.649197 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 16:25:33.656345 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 29 16:25:33.668099 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 29 16:25:33.668261 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 29 16:25:33.668274 kernel: GPT:9289727 != 19775487
Jan 29 16:25:33.668284 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 29 16:25:33.668316 kernel: GPT:9289727 != 19775487
Jan 29 16:25:33.668332 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 29 16:25:33.668343 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 16:25:33.668357 kernel: cryptd: max_cpu_qlen set to 1000
Jan 29 16:25:33.656450 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 29 16:25:33.674083 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 16:25:33.679316 kernel: libata version 3.00 loaded.
Jan 29 16:25:33.686659 kernel: ahci 0000:00:1f.2: version 3.0
Jan 29 16:25:33.713855 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 29 16:25:33.713878 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 29 16:25:33.714044 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only 
Jan 29 16:25:33.714192 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 29 16:25:33.714203 kernel: scsi host0: ahci
Jan 29 16:25:33.714441 kernel: AES CTR mode by8 optimization enabled
Jan 29 16:25:33.714453 kernel: scsi host1: ahci
Jan 29 16:25:33.714611 kernel: scsi host2: ahci
Jan 29 16:25:33.714772 kernel: scsi host3: ahci
Jan 29 16:25:33.714920 kernel: scsi host4: ahci
Jan 29 16:25:33.715065 kernel: scsi host5: ahci
Jan 29 16:25:33.715211 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Jan 29 16:25:33.715223 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Jan 29 16:25:33.715237 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Jan 29 16:25:33.715248 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Jan 29 16:25:33.715258 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (461)
Jan 29 16:25:33.715268 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Jan 29 16:25:33.715278 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Jan 29 16:25:33.705818 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 16:25:33.705936 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:25:33.721322 kernel: BTRFS: device fsid b756ea5d-2d08-456f-8231-a684aa2555c3 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (473)
Jan 29 16:25:33.716133 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 16:25:33.721318 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 16:25:33.721473 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:25:33.724522 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:25:33.732538 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:25:33.747985 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 29 16:25:33.787041 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:25:33.803476 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 29 16:25:33.811111 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 29 16:25:33.811582 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 29 16:25:33.820976 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 29 16:25:33.838453 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 29 16:25:33.839395 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 16:25:33.852241 disk-uuid[557]: Primary Header is updated.
Jan 29 16:25:33.852241 disk-uuid[557]: Secondary Entries is updated.
Jan 29 16:25:33.852241 disk-uuid[557]: Secondary Header is updated.
Jan 29 16:25:33.855748 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 16:25:33.860655 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 16:25:33.861775 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:25:34.021344 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 29 16:25:34.021438 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 29 16:25:34.022315 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 29 16:25:34.022342 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 29 16:25:34.023447 kernel: ata3.00: applying bridge limits
Jan 29 16:25:34.024321 kernel: ata3.00: configured for UDMA/100
Jan 29 16:25:34.026329 kernel: scsi 2:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Jan 29 16:25:34.029318 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 29 16:25:34.030318 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 29 16:25:34.030333 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 29 16:25:34.079329 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 29 16:25:34.093051 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 29 16:25:34.093066 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 29 16:25:34.861008 disk-uuid[561]: The operation has completed successfully.
Jan 29 16:25:34.862219 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 16:25:34.898578 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 29 16:25:34.898701 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 29 16:25:34.945442 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 29 16:25:34.948661 sh[594]: Success
Jan 29 16:25:34.961334 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jan 29 16:25:34.999394 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 29 16:25:35.014094 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 29 16:25:35.016806 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 29 16:25:35.029250 kernel: BTRFS info (device dm-0): first mount of filesystem b756ea5d-2d08-456f-8231-a684aa2555c3
Jan 29 16:25:35.029316 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 29 16:25:35.029329 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 29 16:25:35.029343 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 29 16:25:35.029997 kernel: BTRFS info (device dm-0): using free space tree
Jan 29 16:25:35.034692 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 29 16:25:35.035544 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 29 16:25:35.044425 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 29 16:25:35.045387 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 29 16:25:35.063419 kernel: BTRFS info (device vda6): first mount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e
Jan 29 16:25:35.063486 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 16:25:35.063497 kernel: BTRFS info (device vda6): using free space tree
Jan 29 16:25:35.067325 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 16:25:35.079825 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 29 16:25:35.081070 kernel: BTRFS info (device vda6): last unmount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e
Jan 29 16:25:35.091399 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 29 16:25:35.101517 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 29 16:25:35.159025 ignition[702]: Ignition 2.20.0
Jan 29 16:25:35.159046 ignition[702]: Stage: fetch-offline
Jan 29 16:25:35.159096 ignition[702]: no configs at "/usr/lib/ignition/base.d"
Jan 29 16:25:35.159110 ignition[702]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 16:25:35.159222 ignition[702]: parsed url from cmdline: ""
Jan 29 16:25:35.159227 ignition[702]: no config URL provided
Jan 29 16:25:35.159234 ignition[702]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 16:25:35.159245 ignition[702]: no config at "/usr/lib/ignition/user.ign"
Jan 29 16:25:35.159276 ignition[702]: op(1): [started]  loading QEMU firmware config module
Jan 29 16:25:35.159284 ignition[702]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 29 16:25:35.168360 ignition[702]: op(1): [finished] loading QEMU firmware config module
Jan 29 16:25:35.170260 ignition[702]: parsing config with SHA512: b78ba0fffde690cf60b19db2524ba6fca26ec8262e6bcd03a211c0a2ece00ff9642364b8b0a6f7dd9bb1e0b8227a756b0d7c9e106faa0f6414ae5caae3f67e2c
Jan 29 16:25:35.173020 unknown[702]: fetched base config from "system"
Jan 29 16:25:35.173035 unknown[702]: fetched user config from "qemu"
Jan 29 16:25:35.173349 ignition[702]: fetch-offline: fetch-offline passed
Jan 29 16:25:35.175984 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 16:25:35.173428 ignition[702]: Ignition finished successfully
Jan 29 16:25:35.181626 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 16:25:35.194546 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 16:25:35.220906 systemd-networkd[786]: lo: Link UP
Jan 29 16:25:35.220917 systemd-networkd[786]: lo: Gained carrier
Jan 29 16:25:35.223044 systemd-networkd[786]: Enumeration completed
Jan 29 16:25:35.223242 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 16:25:35.223492 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:25:35.223498 systemd-networkd[786]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 16:25:35.224425 systemd-networkd[786]: eth0: Link UP
Jan 29 16:25:35.224429 systemd-networkd[786]: eth0: Gained carrier
Jan 29 16:25:35.224437 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:25:35.225613 systemd[1]: Reached target network.target - Network.
Jan 29 16:25:35.227567 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 29 16:25:35.235493 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 29 16:25:35.240363 systemd-networkd[786]: eth0: DHCPv4 address 10.0.0.148/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 29 16:25:35.253602 ignition[790]: Ignition 2.20.0
Jan 29 16:25:35.253616 ignition[790]: Stage: kargs
Jan 29 16:25:35.253823 ignition[790]: no configs at "/usr/lib/ignition/base.d"
Jan 29 16:25:35.253837 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 16:25:35.257891 ignition[790]: kargs: kargs passed
Jan 29 16:25:35.257959 ignition[790]: Ignition finished successfully
Jan 29 16:25:35.262869 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 29 16:25:35.270598 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 29 16:25:35.282575 ignition[799]: Ignition 2.20.0
Jan 29 16:25:35.282589 ignition[799]: Stage: disks
Jan 29 16:25:35.282794 ignition[799]: no configs at "/usr/lib/ignition/base.d"
Jan 29 16:25:35.282812 ignition[799]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 16:25:35.283683 ignition[799]: disks: disks passed
Jan 29 16:25:35.286203 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 29 16:25:35.283751 ignition[799]: Ignition finished successfully
Jan 29 16:25:35.287703 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 29 16:25:35.289219 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 16:25:35.291403 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 16:25:35.292431 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 16:25:35.293475 systemd[1]: Reached target basic.target - Basic System.
Jan 29 16:25:35.304483 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 29 16:25:35.318141 systemd-fsck[810]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 29 16:25:35.324293 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 29 16:25:36.037366 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 29 16:25:36.120320 kernel: EXT4-fs (vda9): mounted filesystem 93ea9bb6-d6ba-4a18-a828-f0002683a7b4 r/w with ordered data mode. Quota mode: none.
Jan 29 16:25:36.121380 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 29 16:25:36.122135 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 29 16:25:36.128416 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 16:25:36.130568 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 29 16:25:36.131325 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 29 16:25:36.131371 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 29 16:25:36.131395 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 16:25:36.141787 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (818)
Jan 29 16:25:36.144049 kernel: BTRFS info (device vda6): first mount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e
Jan 29 16:25:36.144067 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 16:25:36.144079 kernel: BTRFS info (device vda6): using free space tree
Jan 29 16:25:36.148321 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 16:25:36.149459 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 16:25:36.150000 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 29 16:25:36.153288 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 29 16:25:36.186877 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory
Jan 29 16:25:36.190643 initrd-setup-root[849]: cut: /sysroot/etc/group: No such file or directory
Jan 29 16:25:36.194905 initrd-setup-root[856]: cut: /sysroot/etc/shadow: No such file or directory
Jan 29 16:25:36.199200 initrd-setup-root[863]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 29 16:25:36.289856 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 29 16:25:36.301408 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 29 16:25:36.302668 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 29 16:25:36.313359 kernel: BTRFS info (device vda6): last unmount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e
Jan 29 16:25:36.328990 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 29 16:25:36.336727 ignition[933]: INFO     : Ignition 2.20.0
Jan 29 16:25:36.336727 ignition[933]: INFO     : Stage: mount
Jan 29 16:25:36.338389 ignition[933]: INFO     : no configs at "/usr/lib/ignition/base.d"
Jan 29 16:25:36.338389 ignition[933]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 16:25:36.341003 ignition[933]: INFO     : mount: mount passed
Jan 29 16:25:36.341774 ignition[933]: INFO     : Ignition finished successfully
Jan 29 16:25:36.344734 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 29 16:25:36.357425 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 29 16:25:37.027727 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 29 16:25:37.043509 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 16:25:37.050320 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (946)
Jan 29 16:25:37.054879 kernel: BTRFS info (device vda6): first mount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e
Jan 29 16:25:37.054903 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 16:25:37.054922 kernel: BTRFS info (device vda6): using free space tree
Jan 29 16:25:37.057487 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 16:25:37.058728 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 16:25:37.082848 ignition[963]: INFO     : Ignition 2.20.0
Jan 29 16:25:37.082848 ignition[963]: INFO     : Stage: files
Jan 29 16:25:37.084845 ignition[963]: INFO     : no configs at "/usr/lib/ignition/base.d"
Jan 29 16:25:37.084845 ignition[963]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 16:25:37.084845 ignition[963]: DEBUG    : files: compiled without relabeling support, skipping
Jan 29 16:25:37.084845 ignition[963]: INFO     : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Jan 29 16:25:37.084845 ignition[963]: DEBUG    : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 29 16:25:37.091442 ignition[963]: INFO     : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 29 16:25:37.091442 ignition[963]: INFO     : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Jan 29 16:25:37.091442 ignition[963]: INFO     : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 29 16:25:37.091442 ignition[963]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/home/core/install.sh"
Jan 29 16:25:37.091442 ignition[963]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Jan 29 16:25:37.091442 ignition[963]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 16:25:37.091442 ignition[963]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 16:25:37.091442 ignition[963]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Jan 29 16:25:37.091442 ignition[963]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Jan 29 16:25:37.091442 ignition[963]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Jan 29 16:25:37.091442 ignition[963]: INFO     : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1
Jan 29 16:25:37.087999 unknown[963]: wrote ssh authorized keys file for user: core
Jan 29 16:25:37.150444 systemd-networkd[786]: eth0: Gained IPv6LL
Jan 29 16:25:37.513411 ignition[963]: INFO     : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Jan 29 16:25:37.793507 ignition[963]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Jan 29 16:25:37.793507 ignition[963]: INFO     : files: op(7): [started]  processing unit "coreos-metadata.service"
Jan 29 16:25:37.797493 ignition[963]: INFO     : files: op(7): op(8): [started]  writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 29 16:25:37.797493 ignition[963]: INFO     : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 29 16:25:37.797493 ignition[963]: INFO     : files: op(7): [finished] processing unit "coreos-metadata.service"
Jan 29 16:25:37.797493 ignition[963]: INFO     : files: op(9): [started]  setting preset to disabled for "coreos-metadata.service"
Jan 29 16:25:37.811278 ignition[963]: INFO     : files: op(9): op(a): [started]  removing enablement symlink(s) for "coreos-metadata.service"
Jan 29 16:25:37.815224 ignition[963]: INFO     : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 29 16:25:37.816884 ignition[963]: INFO     : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 29 16:25:37.816884 ignition[963]: INFO     : files: createResultFile: createFiles: op(b): [started]  writing file "/sysroot/etc/.ignition-result.json"
Jan 29 16:25:37.816884 ignition[963]: INFO     : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 16:25:37.816884 ignition[963]: INFO     : files: files passed
Jan 29 16:25:37.816884 ignition[963]: INFO     : Ignition finished successfully
Jan 29 16:25:37.818280 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 29 16:25:37.824406 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 29 16:25:37.826406 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 29 16:25:37.828527 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 29 16:25:37.828635 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 29 16:25:37.836308 initrd-setup-root-after-ignition[991]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 29 16:25:37.838896 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 16:25:37.840633 initrd-setup-root-after-ignition[993]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 16:25:37.842402 initrd-setup-root-after-ignition[997]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 16:25:37.843745 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 16:25:37.846937 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 29 16:25:37.853463 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 29 16:25:37.888411 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 29 16:25:37.888536 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 29 16:25:37.889227 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 29 16:25:37.893856 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 29 16:25:37.894202 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 29 16:25:37.897429 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 29 16:25:37.916101 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 16:25:37.926548 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 29 16:25:37.937637 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 29 16:25:37.938142 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 16:25:37.940315 systemd[1]: Stopped target timers.target - Timer Units.
Jan 29 16:25:37.942654 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 29 16:25:37.942773 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 16:25:37.945796 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 29 16:25:37.946349 systemd[1]: Stopped target basic.target - Basic System.
Jan 29 16:25:37.949072 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 29 16:25:37.949560 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 16:25:37.952710 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 29 16:25:37.954866 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 29 16:25:37.956941 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 16:25:37.958858 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 29 16:25:37.961174 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 29 16:25:37.963083 systemd[1]: Stopped target swap.target - Swaps.
Jan 29 16:25:37.964945 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 29 16:25:37.965058 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 16:25:37.968138 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 29 16:25:37.970211 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 16:25:37.980590 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 29 16:25:37.983070 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 16:25:37.985270 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 29 16:25:37.985436 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 29 16:25:37.986205 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 29 16:25:37.986355 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 16:25:37.986829 systemd[1]: Stopped target paths.target - Path Units.
Jan 29 16:25:37.991932 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 29 16:25:37.997377 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 16:25:37.997969 systemd[1]: Stopped target slices.target - Slice Units.
Jan 29 16:25:37.998343 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 29 16:25:38.002890 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 29 16:25:38.003007 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 16:25:38.003709 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 29 16:25:38.003830 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 16:25:38.005996 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 29 16:25:38.006152 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 16:25:38.007894 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 29 16:25:38.008029 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 29 16:25:38.024458 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 29 16:25:38.025773 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 29 16:25:38.026834 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 29 16:25:38.026960 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 16:25:38.027281 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 29 16:25:38.027414 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 16:25:38.038540 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 29 16:25:38.038681 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 29 16:25:38.050934 ignition[1017]: INFO     : Ignition 2.20.0
Jan 29 16:25:38.050934 ignition[1017]: INFO     : Stage: umount
Jan 29 16:25:38.052813 ignition[1017]: INFO     : no configs at "/usr/lib/ignition/base.d"
Jan 29 16:25:38.052813 ignition[1017]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 16:25:38.055419 ignition[1017]: INFO     : umount: umount passed
Jan 29 16:25:38.056248 ignition[1017]: INFO     : Ignition finished successfully
Jan 29 16:25:38.057003 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 29 16:25:38.060413 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 29 16:25:38.060579 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 29 16:25:38.061366 systemd[1]: Stopped target network.target - Network.
Jan 29 16:25:38.064002 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 29 16:25:38.064060 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 29 16:25:38.064550 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 29 16:25:38.064595 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 29 16:25:38.064937 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 29 16:25:38.064993 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 29 16:25:38.065540 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 29 16:25:38.065595 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 29 16:25:38.066027 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 29 16:25:38.073925 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 29 16:25:38.083656 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 29 16:25:38.083798 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 29 16:25:38.088278 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jan 29 16:25:38.088576 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 29 16:25:38.088730 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 29 16:25:38.093478 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jan 29 16:25:38.095533 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 29 16:25:38.096538 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 16:25:38.115379 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 29 16:25:38.116324 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 29 16:25:38.117371 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 16:25:38.119763 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 29 16:25:38.119812 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 29 16:25:38.122217 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 29 16:25:38.123135 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 29 16:25:38.125251 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 29 16:25:38.126241 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 16:25:38.129556 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 16:25:38.134374 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 29 16:25:38.134446 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 29 16:25:38.146308 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 29 16:25:38.146450 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 29 16:25:38.147124 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 29 16:25:38.147276 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 16:25:38.150599 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 29 16:25:38.150670 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 29 16:25:38.151668 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 29 16:25:38.151708 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 16:25:38.151964 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 29 16:25:38.152010 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 16:25:38.157107 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 29 16:25:38.157157 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 29 16:25:38.159937 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 16:25:38.159990 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:25:38.161772 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 29 16:25:38.164139 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 29 16:25:38.164204 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 16:25:38.167726 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 29 16:25:38.167788 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 16:25:38.168278 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 29 16:25:38.168380 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 16:25:38.168776 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 16:25:38.168834 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:25:38.175358 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 29 16:25:38.175422 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 29 16:25:38.175828 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 29 16:25:38.175928 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 29 16:25:38.210599 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 29 16:25:38.210787 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 29 16:25:38.212095 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 29 16:25:38.215182 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 29 16:25:38.215282 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 29 16:25:38.225554 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 29 16:25:38.232338 systemd[1]: Switching root.
Jan 29 16:25:38.264932 systemd-journald[193]: Journal stopped
Jan 29 16:25:39.394660 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Jan 29 16:25:39.394722 kernel: SELinux:  policy capability network_peer_controls=1
Jan 29 16:25:39.394746 kernel: SELinux:  policy capability open_perms=1
Jan 29 16:25:39.394761 kernel: SELinux:  policy capability extended_socket_class=1
Jan 29 16:25:39.394773 kernel: SELinux:  policy capability always_check_network=0
Jan 29 16:25:39.394784 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 29 16:25:39.394795 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 29 16:25:39.394811 kernel: SELinux:  policy capability genfs_seclabel_symlinks=0
Jan 29 16:25:39.394828 kernel: SELinux:  policy capability ioctl_skip_cloexec=0
Jan 29 16:25:39.394840 kernel: audit: type=1403 audit(1738167938.601:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 29 16:25:39.394854 systemd[1]: Successfully loaded SELinux policy in 40.613ms.
Jan 29 16:25:39.394873 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.116ms.
Jan 29 16:25:39.394890 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 29 16:25:39.394902 systemd[1]: Detected virtualization kvm.
Jan 29 16:25:39.394914 systemd[1]: Detected architecture x86-64.
Jan 29 16:25:39.394926 systemd[1]: Detected first boot.
Jan 29 16:25:39.394938 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 16:25:39.394953 zram_generator::config[1063]: No configuration found.
Jan 29 16:25:39.394965 kernel: Guest personality initialized and is inactive
Jan 29 16:25:39.394977 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jan 29 16:25:39.394988 kernel: Initialized host personality
Jan 29 16:25:39.395000 kernel: NET: Registered PF_VSOCK protocol family
Jan 29 16:25:39.395011 systemd[1]: Populated /etc with preset unit settings.
Jan 29 16:25:39.395024 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jan 29 16:25:39.395037 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 29 16:25:39.395054 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 29 16:25:39.395070 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 29 16:25:39.395082 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 29 16:25:39.395094 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 29 16:25:39.395106 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 29 16:25:39.395118 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 29 16:25:39.395131 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 29 16:25:39.395143 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 29 16:25:39.395156 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 29 16:25:39.395170 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 29 16:25:39.395182 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 16:25:39.395195 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 16:25:39.395207 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 29 16:25:39.395219 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 29 16:25:39.395232 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 29 16:25:39.395246 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 16:25:39.395260 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 29 16:25:39.395281 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 16:25:39.395391 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 29 16:25:39.395406 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 29 16:25:39.395418 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 29 16:25:39.395430 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 29 16:25:39.395442 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 16:25:39.395454 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 16:25:39.395466 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 16:25:39.395478 systemd[1]: Reached target swap.target - Swaps.
Jan 29 16:25:39.395493 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 29 16:25:39.395505 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 29 16:25:39.395517 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jan 29 16:25:39.395530 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 16:25:39.395545 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 16:25:39.395561 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 16:25:39.395576 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 29 16:25:39.395590 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 29 16:25:39.395601 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 29 16:25:39.395630 systemd[1]: Mounting media.mount - External Media Directory...
Jan 29 16:25:39.395643 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 16:25:39.395655 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 29 16:25:39.395666 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 29 16:25:39.395678 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 29 16:25:39.395690 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 29 16:25:39.395703 systemd[1]: Reached target machines.target - Containers.
Jan 29 16:25:39.395719 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 29 16:25:39.395739 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 16:25:39.395757 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 16:25:39.395773 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 29 16:25:39.395785 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 16:25:39.395797 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 16:25:39.395811 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 16:25:39.395826 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 29 16:25:39.395843 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 16:25:39.395860 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 29 16:25:39.395879 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 29 16:25:39.395892 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 29 16:25:39.395906 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 29 16:25:39.395918 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 29 16:25:39.395930 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 29 16:25:39.395943 kernel: fuse: init (API version 7.39)
Jan 29 16:25:39.395954 kernel: loop: module loaded
Jan 29 16:25:39.395966 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 16:25:39.395980 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 16:25:39.395992 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 29 16:25:39.396004 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 29 16:25:39.396016 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jan 29 16:25:39.396028 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 16:25:39.396040 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 29 16:25:39.396052 systemd[1]: Stopped verity-setup.service.
Jan 29 16:25:39.396067 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 16:25:39.396078 kernel: ACPI: bus type drm_connector registered
Jan 29 16:25:39.396090 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 29 16:25:39.396102 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 29 16:25:39.396114 systemd[1]: Mounted media.mount - External Media Directory.
Jan 29 16:25:39.396144 systemd-journald[1140]: Collecting audit messages is disabled.
Jan 29 16:25:39.396169 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 29 16:25:39.396181 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 29 16:25:39.396194 systemd-journald[1140]: Journal started
Jan 29 16:25:39.396216 systemd-journald[1140]: Runtime Journal (/run/log/journal/59a4e3d2b7404b84bdf4754b502ab6ec) is 6M, max 48.4M, 42.3M free.
Jan 29 16:25:39.170044 systemd[1]: Queued start job for default target multi-user.target.
Jan 29 16:25:39.183142 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 29 16:25:39.183643 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 29 16:25:39.399318 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 16:25:39.400375 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 29 16:25:39.401677 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 29 16:25:39.403350 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 16:25:39.405067 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 29 16:25:39.405333 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 29 16:25:39.406856 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 16:25:39.407089 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 16:25:39.408597 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 16:25:39.408828 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 16:25:39.410417 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 16:25:39.410669 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 16:25:39.412240 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 29 16:25:39.412530 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 29 16:25:39.413931 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 16:25:39.414153 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 16:25:39.415639 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 16:25:39.417096 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 29 16:25:39.418938 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 29 16:25:39.420700 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jan 29 16:25:39.435767 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 29 16:25:39.444383 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 29 16:25:39.446680 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 29 16:25:39.447837 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 29 16:25:39.447866 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 16:25:39.449884 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jan 29 16:25:39.452197 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 29 16:25:39.454459 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 29 16:25:39.455691 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 16:25:39.458512 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 29 16:25:39.463613 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 29 16:25:39.464916 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 16:25:39.469429 systemd-journald[1140]: Time spent on flushing to /var/log/journal/59a4e3d2b7404b84bdf4754b502ab6ec is 22.392ms for 946 entries.
Jan 29 16:25:39.469429 systemd-journald[1140]: System Journal (/var/log/journal/59a4e3d2b7404b84bdf4754b502ab6ec) is 8M, max 195.6M, 187.6M free.
Jan 29 16:25:39.513461 systemd-journald[1140]: Received client request to flush runtime journal.
Jan 29 16:25:39.513520 kernel: loop0: detected capacity change from 0 to 138176
Jan 29 16:25:39.470147 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 29 16:25:39.472590 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 16:25:39.474786 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 16:25:39.479126 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 29 16:25:39.481729 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 16:25:39.484809 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 29 16:25:39.486310 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 29 16:25:39.487981 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 29 16:25:39.492543 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 29 16:25:39.494233 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 29 16:25:39.503532 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jan 29 16:25:39.505160 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 16:25:39.509582 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 29 16:25:39.516689 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 29 16:25:39.527914 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 16:25:39.530000 udevadm[1193]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 29 16:25:39.530402 systemd-tmpfiles[1184]: ACLs are not supported, ignoring.
Jan 29 16:25:39.530421 systemd-tmpfiles[1184]: ACLs are not supported, ignoring.
Jan 29 16:25:39.537215 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 16:25:39.541341 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 29 16:25:39.546505 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 29 16:25:39.547999 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jan 29 16:25:39.563331 kernel: loop1: detected capacity change from 0 to 218376
Jan 29 16:25:39.577078 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 29 16:25:39.585478 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 16:25:39.600263 kernel: loop2: detected capacity change from 0 to 147912
Jan 29 16:25:39.604628 systemd-tmpfiles[1207]: ACLs are not supported, ignoring.
Jan 29 16:25:39.604655 systemd-tmpfiles[1207]: ACLs are not supported, ignoring.
Jan 29 16:25:39.611406 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 16:25:39.638335 kernel: loop3: detected capacity change from 0 to 138176
Jan 29 16:25:39.650317 kernel: loop4: detected capacity change from 0 to 218376
Jan 29 16:25:39.658315 kernel: loop5: detected capacity change from 0 to 147912
Jan 29 16:25:39.668764 (sd-merge)[1211]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jan 29 16:25:39.669364 (sd-merge)[1211]: Merged extensions into '/usr'.
Jan 29 16:25:39.676043 systemd[1]: Reload requested from client PID 1183 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 29 16:25:39.676136 systemd[1]: Reloading...
Jan 29 16:25:39.744319 zram_generator::config[1242]: No configuration found.
Jan 29 16:25:39.801545 ldconfig[1178]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 29 16:25:39.893764 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 16:25:39.961045 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 29 16:25:39.961247 systemd[1]: Reloading finished in 284 ms.
Jan 29 16:25:39.981980 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 29 16:25:39.983906 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 29 16:25:40.008834 systemd[1]: Starting ensure-sysext.service...
Jan 29 16:25:40.010778 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 16:25:40.100164 systemd-tmpfiles[1277]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 29 16:25:40.100476 systemd-tmpfiles[1277]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 29 16:25:40.101445 systemd-tmpfiles[1277]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 29 16:25:40.101739 systemd-tmpfiles[1277]: ACLs are not supported, ignoring.
Jan 29 16:25:40.101822 systemd-tmpfiles[1277]: ACLs are not supported, ignoring.
Jan 29 16:25:40.105412 systemd[1]: Reload requested from client PID 1276 ('systemctl') (unit ensure-sysext.service)...
Jan 29 16:25:40.105426 systemd[1]: Reloading...
Jan 29 16:25:40.105691 systemd-tmpfiles[1277]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 16:25:40.105704 systemd-tmpfiles[1277]: Skipping /boot
Jan 29 16:25:40.118679 systemd-tmpfiles[1277]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 16:25:40.118693 systemd-tmpfiles[1277]: Skipping /boot
Jan 29 16:25:40.169337 zram_generator::config[1306]: No configuration found.
Jan 29 16:25:40.276974 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 16:25:40.345198 systemd[1]: Reloading finished in 239 ms.
Jan 29 16:25:40.358211 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 29 16:25:40.376002 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 16:25:40.385210 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 29 16:25:40.387536 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 29 16:25:40.390048 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 29 16:25:40.394019 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 16:25:40.397601 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 16:25:40.402566 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 29 16:25:40.406878 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 16:25:40.407049 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 16:25:40.409531 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 16:25:40.412570 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 16:25:40.424754 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 16:25:40.426376 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 16:25:40.426483 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 29 16:25:40.429669 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 29 16:25:40.432239 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 16:25:40.433882 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 29 16:25:40.436708 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 16:25:40.437258 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 16:25:40.438136 systemd-udevd[1349]: Using default interface naming scheme 'v255'.
Jan 29 16:25:40.439993 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 16:25:40.440281 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 16:25:40.442522 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 16:25:40.442834 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 16:25:40.451788 augenrules[1375]: No rules
Jan 29 16:25:40.453647 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 29 16:25:40.453915 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 29 16:25:40.460192 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 29 16:25:40.465079 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 16:25:40.465277 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 16:25:40.473563 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 16:25:40.477537 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 16:25:40.483568 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 16:25:40.485513 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 16:25:40.486462 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 29 16:25:40.490631 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 29 16:25:40.492110 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 16:25:40.494360 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 16:25:40.497780 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 29 16:25:40.537476 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 29 16:25:40.539804 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 16:25:40.540023 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 16:25:40.545362 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1410)
Jan 29 16:25:40.542797 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 16:25:40.543007 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 16:25:40.546877 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 16:25:40.547085 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 16:25:40.548927 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 29 16:25:40.570735 systemd[1]: Finished ensure-sysext.service.
Jan 29 16:25:40.573401 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 29 16:25:40.582114 systemd-resolved[1348]: Positive Trust Anchors:
Jan 29 16:25:40.582425 systemd-resolved[1348]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 16:25:40.582502 systemd-resolved[1348]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 16:25:40.587269 systemd-resolved[1348]: Defaulting to hostname 'linux'.
Jan 29 16:25:40.590317 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 29 16:25:40.591725 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 16:25:40.596825 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 16:25:40.598145 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 16:25:40.601309 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jan 29 16:25:40.606717 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 29 16:25:40.608313 kernel: ACPI: button: Power Button [PWRF]
Jan 29 16:25:40.608392 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 16:25:40.609621 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 16:25:40.614486 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 16:25:40.621050 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 16:25:40.624550 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 16:25:40.625923 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 16:25:40.627413 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 29 16:25:40.628906 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 29 16:25:40.629095 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 29 16:25:40.629038 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 29 16:25:40.629706 augenrules[1427]: /sbin/augenrules: No change
Jan 29 16:25:40.630657 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 29 16:25:40.632768 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 16:25:40.638488 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 29 16:25:40.641410 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jan 29 16:25:40.641129 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 29 16:25:40.641157 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 16:25:40.642062 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 16:25:40.642284 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 16:25:40.647169 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 16:25:40.647422 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 16:25:40.649703 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 16:25:40.649931 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 16:25:40.651683 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 16:25:40.651927 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 16:25:40.660775 augenrules[1451]: No rules
Jan 29 16:25:40.664813 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 29 16:25:40.665797 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 29 16:25:40.668801 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 16:25:40.668885 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 16:25:40.677898 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 29 16:25:40.699323 kernel: mousedev: PS/2 mouse device common for all mice
Jan 29 16:25:40.742045 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:25:40.766647 kernel: kvm_amd: TSC scaling supported
Jan 29 16:25:40.766705 kernel: kvm_amd: Nested Virtualization enabled
Jan 29 16:25:40.766739 kernel: kvm_amd: Nested Paging enabled
Jan 29 16:25:40.767621 kernel: kvm_amd: LBR virtualization supported
Jan 29 16:25:40.767665 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jan 29 16:25:40.767701 kernel: kvm_amd: Virtual GIF supported
Jan 29 16:25:40.791322 kernel: EDAC MC: Ver: 3.0.0
Jan 29 16:25:40.810799 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 29 16:25:40.811248 systemd[1]: Reached target time-set.target - System Time Set.
Jan 29 16:25:40.812165 systemd-networkd[1445]: lo: Link UP
Jan 29 16:25:40.812177 systemd-networkd[1445]: lo: Gained carrier
Jan 29 16:25:40.813979 systemd-networkd[1445]: Enumeration completed
Jan 29 16:25:40.814374 systemd-networkd[1445]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:25:40.814385 systemd-networkd[1445]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 16:25:40.815179 systemd-networkd[1445]: eth0: Link UP
Jan 29 16:25:40.815189 systemd-networkd[1445]: eth0: Gained carrier
Jan 29 16:25:40.815202 systemd-networkd[1445]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:25:40.837769 systemd-networkd[1445]: eth0: DHCPv4 address 10.0.0.148/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 29 16:25:40.839365 systemd-timesyncd[1446]: Network configuration changed, trying to establish connection.
Jan 29 16:25:42.061271 systemd-resolved[1348]: Clock change detected. Flushing caches.
Jan 29 16:25:42.061326 systemd-timesyncd[1446]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 29 16:25:42.061398 systemd-timesyncd[1446]: Initial clock synchronization to Wed 2025-01-29 16:25:42.061201 UTC.
Jan 29 16:25:42.069432 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 16:25:42.071150 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 29 16:25:42.072813 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:25:42.076193 systemd[1]: Reached target network.target - Network.
Jan 29 16:25:42.087072 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 29 16:25:42.089467 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jan 29 16:25:42.091984 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 29 16:25:42.100898 lvm[1476]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 16:25:42.108375 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jan 29 16:25:42.138372 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 29 16:25:42.140061 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 16:25:42.141316 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 16:25:42.142595 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 29 16:25:42.144014 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 29 16:25:42.145674 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 29 16:25:42.146961 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 29 16:25:42.148318 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 29 16:25:42.149713 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 29 16:25:42.149743 systemd[1]: Reached target paths.target - Path Units.
Jan 29 16:25:42.150762 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 16:25:42.152709 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 29 16:25:42.155663 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 29 16:25:42.159337 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jan 29 16:25:42.160933 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jan 29 16:25:42.162330 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jan 29 16:25:42.168519 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 29 16:25:42.170043 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jan 29 16:25:42.172530 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 29 16:25:42.174345 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 29 16:25:42.175648 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 16:25:42.176742 systemd[1]: Reached target basic.target - Basic System.
Jan 29 16:25:42.177848 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 29 16:25:42.177880 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 29 16:25:42.178947 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 29 16:25:42.181135 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 29 16:25:42.183144 lvm[1483]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 16:25:42.184466 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 29 16:25:42.188108 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 29 16:25:42.190287 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 29 16:25:42.194077 jq[1486]: false
Jan 29 16:25:42.193353 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 29 16:25:42.199070 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 29 16:25:42.202102 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 29 16:25:42.206193 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 29 16:25:42.208350 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 29 16:25:42.209126 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 29 16:25:42.209776 systemd[1]: Starting update-engine.service - Update Engine...
Jan 29 16:25:42.212079 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 29 16:25:42.214167 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 29 16:25:42.217894 extend-filesystems[1487]: Found loop3
Jan 29 16:25:42.218961 extend-filesystems[1487]: Found loop4
Jan 29 16:25:42.219555 extend-filesystems[1487]: Found loop5
Jan 29 16:25:42.222511 extend-filesystems[1487]: Found sr0
Jan 29 16:25:42.222511 extend-filesystems[1487]: Found vda
Jan 29 16:25:42.222511 extend-filesystems[1487]: Found vda1
Jan 29 16:25:42.222511 extend-filesystems[1487]: Found vda2
Jan 29 16:25:42.222511 extend-filesystems[1487]: Found vda3
Jan 29 16:25:42.222511 extend-filesystems[1487]: Found usr
Jan 29 16:25:42.222511 extend-filesystems[1487]: Found vda4
Jan 29 16:25:42.222511 extend-filesystems[1487]: Found vda6
Jan 29 16:25:42.222511 extend-filesystems[1487]: Found vda7
Jan 29 16:25:42.222511 extend-filesystems[1487]: Found vda9
Jan 29 16:25:42.222511 extend-filesystems[1487]: Checking size of /dev/vda9
Jan 29 16:25:42.220200 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 29 16:25:42.243505 dbus-daemon[1485]: [system] SELinux support is enabled
Jan 29 16:25:42.248431 update_engine[1495]: I20250129 16:25:42.240775  1495 main.cc:92] Flatcar Update Engine starting
Jan 29 16:25:42.248431 update_engine[1495]: I20250129 16:25:42.247068  1495 update_check_scheduler.cc:74] Next update check in 4m55s
Jan 29 16:25:42.248663 jq[1496]: true
Jan 29 16:25:42.220491 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 29 16:25:42.220831 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 29 16:25:42.221657 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 29 16:25:42.248505 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 29 16:25:42.252426 systemd[1]: motdgen.service: Deactivated successfully.
Jan 29 16:25:42.252725 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 29 16:25:42.253861 jq[1508]: true
Jan 29 16:25:42.259514 (ntainerd)[1511]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 29 16:25:42.260968 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 29 16:25:42.261016 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 29 16:25:42.263076 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 29 16:25:42.263097 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 29 16:25:42.264665 systemd[1]: Started update-engine.service - Update Engine.
Jan 29 16:25:42.276108 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 29 16:25:42.290948 extend-filesystems[1487]: Resized partition /dev/vda9
Jan 29 16:25:42.315943 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1389)
Jan 29 16:25:42.332936 bash[1531]: Updated "/home/core/.ssh/authorized_keys"
Jan 29 16:25:42.332615 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
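The update-ssh-keys step above rewrites /home/core/.ssh/authorized_keys from the keys supplied at provisioning time; the fingerprint it installs is the one the later "Accepted publickey for core" lines report. A read-only way to confirm that, using only standard OpenSSH tooling:

    # Print the SHA256 fingerprint(s) of the installed authorized keys.
    ssh-keygen -lf /home/core/.ssh/authorized_keys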
Jan 29 16:25:42.333191 extend-filesystems[1535]: resize2fs 1.47.1 (20-May-2024)
Jan 29 16:25:42.335410 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 29 16:25:42.344169 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jan 29 16:25:42.344049 systemd-logind[1494]: Watching system buttons on /dev/input/event1 (Power Button)
Jan 29 16:25:42.344070 systemd-logind[1494]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 29 16:25:42.353404 locksmithd[1517]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
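locksmithd comes up with strategy="reboot". On Flatcar that strategy is conventionally driven by /etc/flatcar/update.conf; the path and the idea of editing it are assumptions here (the log never shows the file), but the value below is the one logged above:

    # Hypothetical sketch: pin the reboot strategy locksmithd reported.
    mkdir -p /etc/flatcar
    cat <<'EOF' > /etc/flatcar/update.conf
    REBOOT_STRATEGY=reboot
    EOF
    systemctl restart locksmithd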
Jan 29 16:25:42.358524 systemd-logind[1494]: New seat seat0.
Jan 29 16:25:42.364272 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 29 16:25:42.366938 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jan 29 16:25:42.390290 extend-filesystems[1535]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 29 16:25:42.390290 extend-filesystems[1535]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 29 16:25:42.390290 extend-filesystems[1535]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jan 29 16:25:42.394823 extend-filesystems[1487]: Resized filesystem in /dev/vda9
Jan 29 16:25:42.392395 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 29 16:25:42.392688 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
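The extend-filesystems run above is an online ext4 grow: resize2fs takes /dev/vda9 from 553472 to 1864699 4k blocks while it is mounted on /. A minimal shell sketch of the equivalent manual steps (growpart is an assumption, only relevant when the partition itself still has to be enlarged first; resize2fs is the tool the log shows):

    # Grow partition 9 of /dev/vda into free space, then resize the mounted ext4 filesystem online.
    growpart /dev/vda 9
    resize2fs /dev/vda9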
Jan 29 16:25:42.488954 containerd[1511]: time="2025-01-29T16:25:42.488852354Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jan 29 16:25:42.512742 containerd[1511]: time="2025-01-29T16:25:42.512680651Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 29 16:25:42.514423 containerd[1511]: time="2025-01-29T16:25:42.514379838Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 29 16:25:42.514423 containerd[1511]: time="2025-01-29T16:25:42.514409593Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 29 16:25:42.514423 containerd[1511]: time="2025-01-29T16:25:42.514424421Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 29 16:25:42.514641 containerd[1511]: time="2025-01-29T16:25:42.514617103Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 29 16:25:42.514641 containerd[1511]: time="2025-01-29T16:25:42.514636980Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 29 16:25:42.514720 containerd[1511]: time="2025-01-29T16:25:42.514703404Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 16:25:42.514720 containerd[1511]: time="2025-01-29T16:25:42.514719064Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 29 16:25:42.514995 containerd[1511]: time="2025-01-29T16:25:42.514975144Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 16:25:42.514995 containerd[1511]: time="2025-01-29T16:25:42.514992346Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 29 16:25:42.515034 containerd[1511]: time="2025-01-29T16:25:42.515004830Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 16:25:42.515034 containerd[1511]: time="2025-01-29T16:25:42.515014758Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 29 16:25:42.515128 containerd[1511]: time="2025-01-29T16:25:42.515111890Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 29 16:25:42.515376 containerd[1511]: time="2025-01-29T16:25:42.515351049Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 29 16:25:42.515536 containerd[1511]: time="2025-01-29T16:25:42.515513193Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 16:25:42.515536 containerd[1511]: time="2025-01-29T16:25:42.515528983Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 29 16:25:42.515642 containerd[1511]: time="2025-01-29T16:25:42.515620063Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 29 16:25:42.515692 containerd[1511]: time="2025-01-29T16:25:42.515678122Z" level=info msg="metadata content store policy set" policy=shared
Jan 29 16:25:42.522983 containerd[1511]: time="2025-01-29T16:25:42.522946513Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 29 16:25:42.523017 containerd[1511]: time="2025-01-29T16:25:42.522997769Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 29 16:25:42.523017 containerd[1511]: time="2025-01-29T16:25:42.523013058Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 29 16:25:42.523052 containerd[1511]: time="2025-01-29T16:25:42.523029479Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 29 16:25:42.523052 containerd[1511]: time="2025-01-29T16:25:42.523043385Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 29 16:25:42.523215 containerd[1511]: time="2025-01-29T16:25:42.523193336Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 29 16:25:42.523421 containerd[1511]: time="2025-01-29T16:25:42.523402378Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 29 16:25:42.523715 containerd[1511]: time="2025-01-29T16:25:42.523695518Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 29 16:25:42.523741 containerd[1511]: time="2025-01-29T16:25:42.523714493Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 29 16:25:42.523741 containerd[1511]: time="2025-01-29T16:25:42.523727858Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 29 16:25:42.523776 containerd[1511]: time="2025-01-29T16:25:42.523741053Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 29 16:25:42.523776 containerd[1511]: time="2025-01-29T16:25:42.523753456Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 29 16:25:42.523776 containerd[1511]: time="2025-01-29T16:25:42.523765208Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 29 16:25:42.523830 containerd[1511]: time="2025-01-29T16:25:42.523778634Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 29 16:25:42.523830 containerd[1511]: time="2025-01-29T16:25:42.523790496Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 29 16:25:42.523830 containerd[1511]: time="2025-01-29T16:25:42.523802468Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 29 16:25:42.523830 containerd[1511]: time="2025-01-29T16:25:42.523817807Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 29 16:25:42.523900 containerd[1511]: time="2025-01-29T16:25:42.523830541Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 29 16:25:42.523900 containerd[1511]: time="2025-01-29T16:25:42.523850168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 29 16:25:42.523900 containerd[1511]: time="2025-01-29T16:25:42.523862551Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 29 16:25:42.523900 containerd[1511]: time="2025-01-29T16:25:42.523874564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 29 16:25:42.523900 containerd[1511]: time="2025-01-29T16:25:42.523886446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 29 16:25:42.523900 containerd[1511]: time="2025-01-29T16:25:42.523897677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 29 16:25:42.524026 containerd[1511]: time="2025-01-29T16:25:42.523925209Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 29 16:25:42.524026 containerd[1511]: time="2025-01-29T16:25:42.523938774Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 29 16:25:42.524026 containerd[1511]: time="2025-01-29T16:25:42.523950256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 29 16:25:42.524026 containerd[1511]: time="2025-01-29T16:25:42.523962669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 29 16:25:42.524026 containerd[1511]: time="2025-01-29T16:25:42.523976865Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 29 16:25:42.524026 containerd[1511]: time="2025-01-29T16:25:42.523989629Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 29 16:25:42.524026 containerd[1511]: time="2025-01-29T16:25:42.524000129Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 29 16:25:42.524146 containerd[1511]: time="2025-01-29T16:25:42.524028953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 29 16:25:42.524146 containerd[1511]: time="2025-01-29T16:25:42.524042859Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 29 16:25:42.524146 containerd[1511]: time="2025-01-29T16:25:42.524059981Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 29 16:25:42.524146 containerd[1511]: time="2025-01-29T16:25:42.524071713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 29 16:25:42.524146 containerd[1511]: time="2025-01-29T16:25:42.524082353Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 29 16:25:42.524146 containerd[1511]: time="2025-01-29T16:25:42.524122739Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 29 16:25:42.524146 containerd[1511]: time="2025-01-29T16:25:42.524137978Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 29 16:25:42.524279 containerd[1511]: time="2025-01-29T16:25:42.524147515Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 29 16:25:42.524279 containerd[1511]: time="2025-01-29T16:25:42.524158616Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 29 16:25:42.524279 containerd[1511]: time="2025-01-29T16:25:42.524167543Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 29 16:25:42.524279 containerd[1511]: time="2025-01-29T16:25:42.524187130Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 29 16:25:42.524279 containerd[1511]: time="2025-01-29T16:25:42.524196437Z" level=info msg="NRI interface is disabled by configuration."
Jan 29 16:25:42.524279 containerd[1511]: time="2025-01-29T16:25:42.524206927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 29 16:25:42.524495 containerd[1511]: time="2025-01-29T16:25:42.524453419Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 29 16:25:42.524495 containerd[1511]: time="2025-01-29T16:25:42.524496510Z" level=info msg="Connect containerd service"
Jan 29 16:25:42.524629 containerd[1511]: time="2025-01-29T16:25:42.524529392Z" level=info msg="using legacy CRI server"
Jan 29 16:25:42.524629 containerd[1511]: time="2025-01-29T16:25:42.524535553Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 29 16:25:42.524670 containerd[1511]: time="2025-01-29T16:25:42.524636122Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 29 16:25:42.525207 containerd[1511]: time="2025-01-29T16:25:42.525183098Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 29 16:25:42.525486 containerd[1511]: time="2025-01-29T16:25:42.525451391Z" level=info msg="Start subscribing containerd event"
Jan 29 16:25:42.525507 containerd[1511]: time="2025-01-29T16:25:42.525495093Z" level=info msg="Start recovering state"
Jan 29 16:25:42.525561 containerd[1511]: time="2025-01-29T16:25:42.525545607Z" level=info msg="Start event monitor"
Jan 29 16:25:42.525582 containerd[1511]: time="2025-01-29T16:25:42.525563541Z" level=info msg="Start snapshots syncer"
Jan 29 16:25:42.525582 containerd[1511]: time="2025-01-29T16:25:42.525571336Z" level=info msg="Start cni network conf syncer for default"
Jan 29 16:25:42.525582 containerd[1511]: time="2025-01-29T16:25:42.525579902Z" level=info msg="Start streaming server"
Jan 29 16:25:42.525809 containerd[1511]: time="2025-01-29T16:25:42.525789365Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 29 16:25:42.525853 containerd[1511]: time="2025-01-29T16:25:42.525841282Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 29 16:25:42.527926 containerd[1511]: time="2025-01-29T16:25:42.525897257Z" level=info msg="containerd successfully booted in 0.038424s"
Jan 29 16:25:42.525978 systemd[1]: Started containerd.service - containerd container runtime.
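The CRI plugin config dumped above runs runc through io.containerd.runc.v2 with SystemdCgroup:true and pause image registry.k8s.io/pause:3.8. A sketch of a containerd config fragment expressing the same settings; the file path is containerd's conventional default and an assumption, since the log never prints it:

    # Hypothetical /etc/containerd/config.toml fragment mirroring the logged CRI settings.
    cat <<'EOF' >> /etc/containerd/config.toml
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
        runtime_type = "io.containerd.runc.v2"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          SystemdCgroup = true
    EOF
    systemctl restart containerd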
Jan 29 16:25:42.671737 sshd_keygen[1510]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 29 16:25:42.694220 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
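sshd-keygen.service generated the RSA, ECDSA and ED25519 host keys reported above. The equivalent manual step, using only standard OpenSSH tooling, is:

    # Create any missing host keys of the default types under /etc/ssh.
    ssh-keygen -A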
Jan 29 16:25:42.710107 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 29 16:25:42.717262 systemd[1]: issuegen.service: Deactivated successfully.
Jan 29 16:25:42.717508 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 29 16:25:42.720187 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 29 16:25:42.734336 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 29 16:25:42.737288 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 29 16:25:42.739458 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 29 16:25:42.741145 systemd[1]: Reached target getty.target - Login Prompts.
Jan 29 16:25:42.794984 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 29 16:25:42.797300 systemd[1]: Started sshd@0-10.0.0.148:22-10.0.0.1:48478.service - OpenSSH per-connection server daemon (10.0.0.1:48478).
Jan 29 16:25:42.851379 sshd[1570]: Accepted publickey for core from 10.0.0.1 port 48478 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ
Jan 29 16:25:42.852991 sshd-session[1570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:25:42.864270 systemd-logind[1494]: New session 1 of user core.
Jan 29 16:25:42.865666 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 29 16:25:42.877142 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 29 16:25:42.887940 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 29 16:25:42.899169 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 29 16:25:42.902869 (systemd)[1574]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 29 16:25:42.905219 systemd-logind[1494]: New session c1 of user core.
Jan 29 16:25:43.045066 systemd[1574]: Queued start job for default target default.target.
Jan 29 16:25:43.056144 systemd[1574]: Created slice app.slice - User Application Slice.
Jan 29 16:25:43.056180 systemd[1574]: Reached target paths.target - Paths.
Jan 29 16:25:43.056219 systemd[1574]: Reached target timers.target - Timers.
Jan 29 16:25:43.057666 systemd[1574]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 29 16:25:43.068837 systemd[1574]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 29 16:25:43.068968 systemd[1574]: Reached target sockets.target - Sockets.
Jan 29 16:25:43.069007 systemd[1574]: Reached target basic.target - Basic System.
Jan 29 16:25:43.069047 systemd[1574]: Reached target default.target - Main User Target.
Jan 29 16:25:43.069076 systemd[1574]: Startup finished in 157ms.
Jan 29 16:25:43.069460 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 29 16:25:43.072473 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 29 16:25:43.135174 systemd[1]: Started sshd@1-10.0.0.148:22-10.0.0.1:48494.service - OpenSSH per-connection server daemon (10.0.0.1:48494).
Jan 29 16:25:43.180475 sshd[1585]: Accepted publickey for core from 10.0.0.1 port 48494 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ
Jan 29 16:25:43.181744 sshd-session[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:25:43.185601 systemd-logind[1494]: New session 2 of user core.
Jan 29 16:25:43.192028 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 29 16:25:43.245127 sshd[1587]: Connection closed by 10.0.0.1 port 48494
Jan 29 16:25:43.245534 sshd-session[1585]: pam_unix(sshd:session): session closed for user core
Jan 29 16:25:43.259247 systemd[1]: sshd@1-10.0.0.148:22-10.0.0.1:48494.service: Deactivated successfully.
Jan 29 16:25:43.261025 systemd[1]: session-2.scope: Deactivated successfully.
Jan 29 16:25:43.262243 systemd-logind[1494]: Session 2 logged out. Waiting for processes to exit.
Jan 29 16:25:43.263360 systemd[1]: Started sshd@2-10.0.0.148:22-10.0.0.1:48510.service - OpenSSH per-connection server daemon (10.0.0.1:48510).
Jan 29 16:25:43.269416 systemd-logind[1494]: Removed session 2.
Jan 29 16:25:43.303859 sshd[1592]: Accepted publickey for core from 10.0.0.1 port 48510 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ
Jan 29 16:25:43.305146 sshd-session[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:25:43.309354 systemd-logind[1494]: New session 3 of user core.
Jan 29 16:25:43.320025 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 29 16:25:43.384607 sshd[1595]: Connection closed by 10.0.0.1 port 48510
Jan 29 16:25:43.384999 sshd-session[1592]: pam_unix(sshd:session): session closed for user core
Jan 29 16:25:43.389064 systemd[1]: sshd@2-10.0.0.148:22-10.0.0.1:48510.service: Deactivated successfully.
Jan 29 16:25:43.390939 systemd[1]: session-3.scope: Deactivated successfully.
Jan 29 16:25:43.391556 systemd-logind[1494]: Session 3 logged out. Waiting for processes to exit.
Jan 29 16:25:43.392547 systemd-logind[1494]: Removed session 3.
Jan 29 16:25:43.491109 systemd-networkd[1445]: eth0: Gained IPv6LL
Jan 29 16:25:43.494679 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 29 16:25:43.496605 systemd[1]: Reached target network-online.target - Network is Online.
Jan 29 16:25:43.510283 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jan 29 16:25:43.512908 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:25:43.515130 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 29 16:25:43.533890 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jan 29 16:25:43.534326 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jan 29 16:25:43.536290 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 29 16:25:43.539261 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 29 16:25:44.231681 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:25:44.233666 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 29 16:25:44.235056 systemd[1]: Startup finished in 677ms (kernel) + 5.910s (initrd) + 4.451s (userspace) = 11.039s.
Jan 29 16:25:44.236437 (kubelet)[1622]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 16:25:44.658650 kubelet[1622]: E0129 16:25:44.658490    1622 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 16:25:44.662408 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 16:25:44.662617 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 16:25:44.663006 systemd[1]: kubelet.service: Consumed 1.010s CPU time, 253.7M memory peak.
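The kubelet failure above is expected at this point: the stock unit starts before anything has written /var/lib/kubelet/config.yaml (kubeadm normally creates that file during init/join). Purely as an illustration of what the missing file looks like, a minimal KubeletConfiguration sketch; every value below is an assumption, not something read from this host:

    # Hypothetical minimal kubelet config; on a kubeadm node this file is generated, not hand-written.
    cat <<'EOF' > /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd                      # matches CgroupDriver:"systemd" in the node config logged later
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    staticPodPath: /etc/kubernetes/manifests
    EOF
    systemctl restart kubelet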
Jan 29 16:25:53.397621 systemd[1]: Started sshd@3-10.0.0.148:22-10.0.0.1:54600.service - OpenSSH per-connection server daemon (10.0.0.1:54600).
Jan 29 16:25:53.440616 sshd[1635]: Accepted publickey for core from 10.0.0.1 port 54600 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ
Jan 29 16:25:53.442357 sshd-session[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:25:53.446761 systemd-logind[1494]: New session 4 of user core.
Jan 29 16:25:53.464114 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 29 16:25:53.520844 sshd[1637]: Connection closed by 10.0.0.1 port 54600
Jan 29 16:25:53.521345 sshd-session[1635]: pam_unix(sshd:session): session closed for user core
Jan 29 16:25:53.529347 systemd[1]: sshd@3-10.0.0.148:22-10.0.0.1:54600.service: Deactivated successfully.
Jan 29 16:25:53.531393 systemd[1]: session-4.scope: Deactivated successfully.
Jan 29 16:25:53.533052 systemd-logind[1494]: Session 4 logged out. Waiting for processes to exit.
Jan 29 16:25:53.541249 systemd[1]: Started sshd@4-10.0.0.148:22-10.0.0.1:54612.service - OpenSSH per-connection server daemon (10.0.0.1:54612).
Jan 29 16:25:53.542200 systemd-logind[1494]: Removed session 4.
Jan 29 16:25:53.579417 sshd[1642]: Accepted publickey for core from 10.0.0.1 port 54612 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ
Jan 29 16:25:53.581330 sshd-session[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:25:53.586006 systemd-logind[1494]: New session 5 of user core.
Jan 29 16:25:53.598078 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 29 16:25:53.647830 sshd[1645]: Connection closed by 10.0.0.1 port 54612
Jan 29 16:25:53.648207 sshd-session[1642]: pam_unix(sshd:session): session closed for user core
Jan 29 16:25:53.661790 systemd[1]: sshd@4-10.0.0.148:22-10.0.0.1:54612.service: Deactivated successfully.
Jan 29 16:25:53.663818 systemd[1]: session-5.scope: Deactivated successfully.
Jan 29 16:25:53.665226 systemd-logind[1494]: Session 5 logged out. Waiting for processes to exit.
Jan 29 16:25:53.666417 systemd[1]: Started sshd@5-10.0.0.148:22-10.0.0.1:54618.service - OpenSSH per-connection server daemon (10.0.0.1:54618).
Jan 29 16:25:53.667217 systemd-logind[1494]: Removed session 5.
Jan 29 16:25:53.718528 sshd[1650]: Accepted publickey for core from 10.0.0.1 port 54618 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ
Jan 29 16:25:53.719979 sshd-session[1650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:25:53.723983 systemd-logind[1494]: New session 6 of user core.
Jan 29 16:25:53.734031 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 29 16:25:53.787452 sshd[1653]: Connection closed by 10.0.0.1 port 54618
Jan 29 16:25:53.787815 sshd-session[1650]: pam_unix(sshd:session): session closed for user core
Jan 29 16:25:53.804727 systemd[1]: sshd@5-10.0.0.148:22-10.0.0.1:54618.service: Deactivated successfully.
Jan 29 16:25:53.806647 systemd[1]: session-6.scope: Deactivated successfully.
Jan 29 16:25:53.808274 systemd-logind[1494]: Session 6 logged out. Waiting for processes to exit.
Jan 29 16:25:53.824148 systemd[1]: Started sshd@6-10.0.0.148:22-10.0.0.1:54624.service - OpenSSH per-connection server daemon (10.0.0.1:54624).
Jan 29 16:25:53.825084 systemd-logind[1494]: Removed session 6.
Jan 29 16:25:53.860776 sshd[1658]: Accepted publickey for core from 10.0.0.1 port 54624 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ
Jan 29 16:25:53.862057 sshd-session[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:25:53.866485 systemd-logind[1494]: New session 7 of user core.
Jan 29 16:25:53.877066 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 29 16:25:54.074865 sudo[1662]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 29 16:25:54.075222 sudo[1662]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 16:25:54.091790 sudo[1662]: pam_unix(sudo:session): session closed for user root
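The sudo invocation above switches SELinux to enforcing mode. A read-only check of the result, assuming only the standard SELinux userland that setenforce itself comes from:

    # Expected to print "Enforcing" if the setenforce 1 above succeeded.
    getenforce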
Jan 29 16:25:54.093670 sshd[1661]: Connection closed by 10.0.0.1 port 54624
Jan 29 16:25:54.094189 sshd-session[1658]: pam_unix(sshd:session): session closed for user core
Jan 29 16:25:54.112362 systemd[1]: sshd@6-10.0.0.148:22-10.0.0.1:54624.service: Deactivated successfully.
Jan 29 16:25:54.113955 systemd[1]: session-7.scope: Deactivated successfully.
Jan 29 16:25:54.115548 systemd-logind[1494]: Session 7 logged out. Waiting for processes to exit.
Jan 29 16:25:54.122204 systemd[1]: Started sshd@7-10.0.0.148:22-10.0.0.1:54626.service - OpenSSH per-connection server daemon (10.0.0.1:54626).
Jan 29 16:25:54.123270 systemd-logind[1494]: Removed session 7.
Jan 29 16:25:54.158748 sshd[1667]: Accepted publickey for core from 10.0.0.1 port 54626 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ
Jan 29 16:25:54.160120 sshd-session[1667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:25:54.164777 systemd-logind[1494]: New session 8 of user core.
Jan 29 16:25:54.177055 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 29 16:25:54.230335 sudo[1672]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 29 16:25:54.230725 sudo[1672]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 16:25:54.233869 sudo[1672]: pam_unix(sudo:session): session closed for user root
Jan 29 16:25:54.239422 sudo[1671]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 29 16:25:54.239780 sudo[1671]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 16:25:54.260215 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 29 16:25:54.290797 augenrules[1694]: No rules
Jan 29 16:25:54.292099 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 29 16:25:54.292383 systemd[1]: Finished audit-rules.service - Load Audit Rules.
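augenrules reports "No rules" because the preceding sudo commands removed the two default rule files before audit-rules.service reloaded. The usual way to rebuild and inspect the kernel's audit rule set with the same tooling:

    # Regenerate /etc/audit/audit.rules from the remaining drop-ins and load it.
    augenrules --load
    # Show what is currently loaded (prints "No rules" when the set is empty).
    auditctl -l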
Jan 29 16:25:54.293479 sudo[1671]: pam_unix(sudo:session): session closed for user root
Jan 29 16:25:54.294952 sshd[1670]: Connection closed by 10.0.0.1 port 54626
Jan 29 16:25:54.295277 sshd-session[1667]: pam_unix(sshd:session): session closed for user core
Jan 29 16:25:54.304414 systemd[1]: sshd@7-10.0.0.148:22-10.0.0.1:54626.service: Deactivated successfully.
Jan 29 16:25:54.305984 systemd[1]: session-8.scope: Deactivated successfully.
Jan 29 16:25:54.307571 systemd-logind[1494]: Session 8 logged out. Waiting for processes to exit.
Jan 29 16:25:54.317131 systemd[1]: Started sshd@8-10.0.0.148:22-10.0.0.1:54636.service - OpenSSH per-connection server daemon (10.0.0.1:54636).
Jan 29 16:25:54.318103 systemd-logind[1494]: Removed session 8.
Jan 29 16:25:54.353009 sshd[1702]: Accepted publickey for core from 10.0.0.1 port 54636 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ
Jan 29 16:25:54.354533 sshd-session[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:25:54.358744 systemd-logind[1494]: New session 9 of user core.
Jan 29 16:25:54.366041 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 29 16:25:54.419238 sudo[1706]:     core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 29 16:25:54.419624 sudo[1706]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 16:25:54.441175 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jan 29 16:25:54.458717 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jan 29 16:25:54.459061 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jan 29 16:25:54.913286 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 29 16:25:54.924108 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:25:55.129754 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 29 16:25:55.129874 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 29 16:25:55.130206 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:25:55.142241 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:25:55.166460 systemd[1]: Reload requested from client PID 1752 ('systemctl') (unit session-9.scope)...
Jan 29 16:25:55.166475 systemd[1]: Reloading...
Jan 29 16:25:55.287015 zram_generator::config[1795]: No configuration found.
Jan 29 16:25:55.711121 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 16:25:55.812906 systemd[1]: Reloading finished in 645 ms.
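The reload above warns that docker.socket still points ListenStream at the legacy /var/run/docker.sock path. Because the shipped unit lives on the read-only /usr partition, the usual fix is a drop-in override; the drop-in name below is an assumption:

    # Hypothetical override so the socket unit uses /run/docker.sock directly.
    mkdir -p /etc/systemd/system/docker.socket.d
    cat <<'EOF' > /etc/systemd/system/docker.socket.d/10-run-path.conf
    [Socket]
    ListenStream=
    ListenStream=/run/docker.sock
    EOF
    systemctl daemon-reload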
Jan 29 16:25:55.858184 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:25:55.860282 systemd[1]: kubelet.service: Deactivated successfully.
Jan 29 16:25:55.860542 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:25:55.860583 systemd[1]: kubelet.service: Consumed 205ms CPU time, 91.9M memory peak.
Jan 29 16:25:55.862129 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:25:56.039139 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:25:56.043148 (kubelet)[1845]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 29 16:25:56.127710 kubelet[1845]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 16:25:56.127710 kubelet[1845]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 29 16:25:56.127710 kubelet[1845]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 16:25:56.128158 kubelet[1845]: I0129 16:25:56.127766    1845 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 29 16:25:57.165146 kubelet[1845]: I0129 16:25:57.165098    1845 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
Jan 29 16:25:57.165146 kubelet[1845]: I0129 16:25:57.165130    1845 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 29 16:25:57.165616 kubelet[1845]: I0129 16:25:57.165452    1845 server.go:954] "Client rotation is on, will bootstrap in background"
Jan 29 16:25:57.188186 kubelet[1845]: I0129 16:25:57.188127    1845 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 29 16:25:57.195892 kubelet[1845]: E0129 16:25:57.195827    1845 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 29 16:25:57.195892 kubelet[1845]: I0129 16:25:57.195889    1845 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 29 16:25:57.202393 kubelet[1845]: I0129 16:25:57.202355    1845 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Jan 29 16:25:57.203208 kubelet[1845]: I0129 16:25:57.203163    1845 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 29 16:25:57.203418 kubelet[1845]: I0129 16:25:57.203203    1845 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.148","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 29 16:25:57.203537 kubelet[1845]: I0129 16:25:57.203417    1845 topology_manager.go:138] "Creating topology manager with none policy"
Jan 29 16:25:57.203537 kubelet[1845]: I0129 16:25:57.203430    1845 container_manager_linux.go:304] "Creating device plugin manager"
Jan 29 16:25:57.203623 kubelet[1845]: I0129 16:25:57.203596    1845 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 16:25:57.207321 kubelet[1845]: I0129 16:25:57.207288    1845 kubelet.go:446] "Attempting to sync node with API server"
Jan 29 16:25:57.207321 kubelet[1845]: I0129 16:25:57.207313    1845 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 29 16:25:57.207383 kubelet[1845]: I0129 16:25:57.207333    1845 kubelet.go:352] "Adding apiserver pod source"
Jan 29 16:25:57.207456 kubelet[1845]: E0129 16:25:57.207432    1845 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:25:57.207479 kubelet[1845]: I0129 16:25:57.207466    1845 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 29 16:25:57.208474 kubelet[1845]: E0129 16:25:57.208429    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:25:57.210427 kubelet[1845]: I0129 16:25:57.210407    1845 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 29 16:25:57.210792 kubelet[1845]: I0129 16:25:57.210778    1845 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 29 16:25:57.211338 kubelet[1845]: W0129 16:25:57.211306    1845 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 29 16:25:57.213118 kubelet[1845]: W0129 16:25:57.213090    1845 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.148" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Jan 29 16:25:57.213163 kubelet[1845]: E0129 16:25:57.213141    1845 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.0.0.148\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
Jan 29 16:25:57.213725 kubelet[1845]: I0129 16:25:57.213687    1845 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 29 16:25:57.213784 kubelet[1845]: I0129 16:25:57.213756    1845 server.go:1287] "Started kubelet"
Jan 29 16:25:57.213936 kubelet[1845]: I0129 16:25:57.213837    1845 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jan 29 16:25:57.216873 kubelet[1845]: I0129 16:25:57.216657    1845 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 29 16:25:57.217115 kubelet[1845]: I0129 16:25:57.217072    1845 server.go:490] "Adding debug handlers to kubelet server"
Jan 29 16:25:57.218372 kubelet[1845]: I0129 16:25:57.218351    1845 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 29 16:25:57.220104 kubelet[1845]: W0129 16:25:57.218749    1845 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Jan 29 16:25:57.220104 kubelet[1845]: E0129 16:25:57.218790    1845 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
Jan 29 16:25:57.220104 kubelet[1845]: I0129 16:25:57.218831    1845 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 29 16:25:57.220104 kubelet[1845]: I0129 16:25:57.219003    1845 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 29 16:25:57.220104 kubelet[1845]: I0129 16:25:57.219079    1845 reconciler.go:26] "Reconciler: start to sync state"
Jan 29 16:25:57.220104 kubelet[1845]: I0129 16:25:57.219486    1845 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 29 16:25:57.220104 kubelet[1845]: I0129 16:25:57.219716    1845 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 29 16:25:57.260653 kubelet[1845]: E0129 16:25:57.221281    1845 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.148\" not found"
Jan 29 16:25:57.262100 kubelet[1845]: E0129 16:25:57.262065    1845 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 29 16:25:57.263675 kubelet[1845]: I0129 16:25:57.263630    1845 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 29 16:25:57.264735 kubelet[1845]: I0129 16:25:57.264712    1845 factory.go:221] Registration of the containerd container factory successfully
Jan 29 16:25:57.264735 kubelet[1845]: I0129 16:25:57.264726    1845 factory.go:221] Registration of the systemd container factory successfully
Jan 29 16:25:57.280378 kubelet[1845]: E0129 16:25:57.280334    1845 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.148\" not found" node="10.0.0.148"
Jan 29 16:25:57.281176 kubelet[1845]: I0129 16:25:57.281148    1845 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 29 16:25:57.281176 kubelet[1845]: I0129 16:25:57.281164    1845 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 29 16:25:57.281251 kubelet[1845]: I0129 16:25:57.281190    1845 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 16:25:57.361563 kubelet[1845]: E0129 16:25:57.361509    1845 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.148\" not found"
Jan 29 16:25:57.462256 kubelet[1845]: E0129 16:25:57.462096    1845 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.148\" not found"
Jan 29 16:25:57.562569 kubelet[1845]: E0129 16:25:57.562509    1845 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.148\" not found"
Jan 29 16:25:57.663231 kubelet[1845]: E0129 16:25:57.663152    1845 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.148\" not found"
Jan 29 16:25:57.709881 kubelet[1845]: I0129 16:25:57.709740    1845 policy_none.go:49] "None policy: Start"
Jan 29 16:25:57.709881 kubelet[1845]: I0129 16:25:57.709814    1845 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 29 16:25:57.709881 kubelet[1845]: I0129 16:25:57.709834    1845 state_mem.go:35] "Initializing new in-memory state store"
Jan 29 16:25:57.722676 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 29 16:25:57.738859 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 29 16:25:57.758962 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 29 16:25:57.760399 kubelet[1845]: I0129 16:25:57.760199    1845 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 29 16:25:57.760453 kubelet[1845]: I0129 16:25:57.760436    1845 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 29 16:25:57.762795 kubelet[1845]: I0129 16:25:57.760456    1845 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 29 16:25:57.762795 kubelet[1845]: I0129 16:25:57.760803    1845 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 29 16:25:57.762795 kubelet[1845]: E0129 16:25:57.762625    1845 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 29 16:25:57.762795 kubelet[1845]: E0129 16:25:57.762662    1845 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.148\" not found"
Jan 29 16:25:57.791429 kubelet[1845]: I0129 16:25:57.791358    1845 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 29 16:25:57.793509 kubelet[1845]: I0129 16:25:57.793452    1845 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 29 16:25:57.793550 kubelet[1845]: I0129 16:25:57.793514    1845 status_manager.go:227] "Starting to sync pod status with apiserver"
Jan 29 16:25:57.793550 kubelet[1845]: I0129 16:25:57.793540    1845 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 29 16:25:57.793595 kubelet[1845]: I0129 16:25:57.793549    1845 kubelet.go:2388] "Starting kubelet main sync loop"
Jan 29 16:25:57.793988 kubelet[1845]: E0129 16:25:57.793705    1845 kubelet.go:2412] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Jan 29 16:25:57.863731 kubelet[1845]: I0129 16:25:57.863684    1845 kubelet_node_status.go:76] "Attempting to register node" node="10.0.0.148"
Jan 29 16:25:57.868209 kubelet[1845]: I0129 16:25:57.868180    1845 kubelet_node_status.go:79] "Successfully registered node" node="10.0.0.148"
Jan 29 16:25:57.868209 kubelet[1845]: E0129 16:25:57.868209    1845 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"10.0.0.148\": node \"10.0.0.148\" not found"
Jan 29 16:25:57.874316 kubelet[1845]: E0129 16:25:57.874288    1845 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.148\" not found"
Jan 29 16:25:57.975507 kubelet[1845]: E0129 16:25:57.975374    1845 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.148\" not found"
Jan 29 16:25:58.063415 sudo[1706]: pam_unix(sudo:session): session closed for user root
Jan 29 16:25:58.064755 sshd[1705]: Connection closed by 10.0.0.1 port 54636
Jan 29 16:25:58.065185 sshd-session[1702]: pam_unix(sshd:session): session closed for user core
Jan 29 16:25:58.069604 systemd[1]: sshd@8-10.0.0.148:22-10.0.0.1:54636.service: Deactivated successfully.
Jan 29 16:25:58.071691 systemd[1]: session-9.scope: Deactivated successfully.
Jan 29 16:25:58.071921 systemd[1]: session-9.scope: Consumed 754ms CPU time, 76.1M memory peak.
Jan 29 16:25:58.073155 systemd-logind[1494]: Session 9 logged out. Waiting for processes to exit.
Jan 29 16:25:58.074042 systemd-logind[1494]: Removed session 9.
Jan 29 16:25:58.076283 kubelet[1845]: E0129 16:25:58.076247    1845 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.148\" not found"
Jan 29 16:25:58.168824 kubelet[1845]: I0129 16:25:58.168726    1845 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Jan 29 16:25:58.169317 kubelet[1845]: W0129 16:25:58.169047    1845 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Jan 29 16:25:58.169317 kubelet[1845]: W0129 16:25:58.169061    1845 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Jan 29 16:25:58.169317 kubelet[1845]: W0129 16:25:58.169090    1845 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Jan 29 16:25:58.176987 kubelet[1845]: E0129 16:25:58.176900    1845 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.148\" not found"
Jan 29 16:25:58.209400 kubelet[1845]: E0129 16:25:58.209317    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:25:58.278257 kubelet[1845]: E0129 16:25:58.278108    1845 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.148\" not found"
Jan 29 16:25:58.379373 kubelet[1845]: I0129 16:25:58.379333    1845 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Jan 29 16:25:58.379667 containerd[1511]: time="2025-01-29T16:25:58.379630556Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 29 16:25:58.380056 kubelet[1845]: I0129 16:25:58.379815    1845 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
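The kubelet pushes PodCIDR 192.168.1.0/24 to containerd, whose CNI conf syncer is still waiting for a config (see the earlier "no network config found in /etc/cni/net.d" error). In this cluster Cilium will eventually install its own config; purely to illustrate the mechanism, a minimal bridge conflist using the logged CIDR (file name and plugin choice are assumptions):

    # Hypothetical CNI config; a real Cilium install writes its own file here instead.
    cat <<'EOF' > /etc/cni/net.d/10-bridge.conflist
    {
      "cniVersion": "1.0.0",
      "name": "bridge-net",
      "plugins": [
        {
          "type": "bridge", "bridge": "cni0", "isGateway": true, "ipMasq": true,
          "ipam": { "type": "host-local", "ranges": [[ { "subnet": "192.168.1.0/24" } ]] }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF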
Jan 29 16:25:59.210485 kubelet[1845]: I0129 16:25:59.210434    1845 apiserver.go:52] "Watching apiserver"
Jan 29 16:25:59.210485 kubelet[1845]: E0129 16:25:59.210468    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:25:59.219244 systemd[1]: Created slice kubepods-besteffort-pod9d37b602_c1f7_4fe3_bdf1_5e421ab718c9.slice - libcontainer container kubepods-besteffort-pod9d37b602_c1f7_4fe3_bdf1_5e421ab718c9.slice.
Jan 29 16:25:59.220531 kubelet[1845]: I0129 16:25:59.220507    1845 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 29 16:25:59.230775 kubelet[1845]: I0129 16:25:59.230740    1845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4c8b48e7-e1fe-428d-96e5-4c39db533bf5-host-proc-sys-net\") pod \"cilium-fldkk\" (UID: \"4c8b48e7-e1fe-428d-96e5-4c39db533bf5\") " pod="kube-system/cilium-fldkk"
Jan 29 16:25:59.230822 kubelet[1845]: I0129 16:25:59.230775    1845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4c8b48e7-e1fe-428d-96e5-4c39db533bf5-host-proc-sys-kernel\") pod \"cilium-fldkk\" (UID: \"4c8b48e7-e1fe-428d-96e5-4c39db533bf5\") " pod="kube-system/cilium-fldkk"
Jan 29 16:25:59.230822 kubelet[1845]: I0129 16:25:59.230796    1845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9d37b602-c1f7-4fe3-bdf1-5e421ab718c9-kube-proxy\") pod \"kube-proxy-jpfxv\" (UID: \"9d37b602-c1f7-4fe3-bdf1-5e421ab718c9\") " pod="kube-system/kube-proxy-jpfxv"
Jan 29 16:25:59.230880 kubelet[1845]: I0129 16:25:59.230863    1845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9d37b602-c1f7-4fe3-bdf1-5e421ab718c9-xtables-lock\") pod \"kube-proxy-jpfxv\" (UID: \"9d37b602-c1f7-4fe3-bdf1-5e421ab718c9\") " pod="kube-system/kube-proxy-jpfxv"
Jan 29 16:25:59.231181 kubelet[1845]: I0129 16:25:59.231070    1845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4c8b48e7-e1fe-428d-96e5-4c39db533bf5-cilium-run\") pod \"cilium-fldkk\" (UID: \"4c8b48e7-e1fe-428d-96e5-4c39db533bf5\") " pod="kube-system/cilium-fldkk"
Jan 29 16:25:59.231548 kubelet[1845]: I0129 16:25:59.231309    1845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4c8b48e7-e1fe-428d-96e5-4c39db533bf5-hostproc\") pod \"cilium-fldkk\" (UID: \"4c8b48e7-e1fe-428d-96e5-4c39db533bf5\") " pod="kube-system/cilium-fldkk"
Jan 29 16:25:59.231548 kubelet[1845]: I0129 16:25:59.231358    1845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4c8b48e7-e1fe-428d-96e5-4c39db533bf5-hubble-tls\") pod \"cilium-fldkk\" (UID: \"4c8b48e7-e1fe-428d-96e5-4c39db533bf5\") " pod="kube-system/cilium-fldkk"
Jan 29 16:25:59.231548 kubelet[1845]: I0129 16:25:59.231384    1845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d37b602-c1f7-4fe3-bdf1-5e421ab718c9-lib-modules\") pod \"kube-proxy-jpfxv\" (UID: \"9d37b602-c1f7-4fe3-bdf1-5e421ab718c9\") " pod="kube-system/kube-proxy-jpfxv"
Jan 29 16:25:59.231548 kubelet[1845]: I0129 16:25:59.231446    1845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pb2gb\" (UniqueName: \"kubernetes.io/projected/9d37b602-c1f7-4fe3-bdf1-5e421ab718c9-kube-api-access-pb2gb\") pod \"kube-proxy-jpfxv\" (UID: \"9d37b602-c1f7-4fe3-bdf1-5e421ab718c9\") " pod="kube-system/kube-proxy-jpfxv"
Jan 29 16:25:59.231548 kubelet[1845]: I0129 16:25:59.231481    1845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4c8b48e7-e1fe-428d-96e5-4c39db533bf5-cilium-cgroup\") pod \"cilium-fldkk\" (UID: \"4c8b48e7-e1fe-428d-96e5-4c39db533bf5\") " pod="kube-system/cilium-fldkk"
Jan 29 16:25:59.231548 kubelet[1845]: I0129 16:25:59.231519    1845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4c8b48e7-e1fe-428d-96e5-4c39db533bf5-clustermesh-secrets\") pod \"cilium-fldkk\" (UID: \"4c8b48e7-e1fe-428d-96e5-4c39db533bf5\") " pod="kube-system/cilium-fldkk"
Jan 29 16:25:59.231693 kubelet[1845]: I0129 16:25:59.231581    1845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4c8b48e7-e1fe-428d-96e5-4c39db533bf5-cni-path\") pod \"cilium-fldkk\" (UID: \"4c8b48e7-e1fe-428d-96e5-4c39db533bf5\") " pod="kube-system/cilium-fldkk"
Jan 29 16:25:59.231693 kubelet[1845]: I0129 16:25:59.231613    1845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4c8b48e7-e1fe-428d-96e5-4c39db533bf5-cilium-config-path\") pod \"cilium-fldkk\" (UID: \"4c8b48e7-e1fe-428d-96e5-4c39db533bf5\") " pod="kube-system/cilium-fldkk"
Jan 29 16:25:59.231693 kubelet[1845]: I0129 16:25:59.231639    1845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4c8b48e7-e1fe-428d-96e5-4c39db533bf5-lib-modules\") pod \"cilium-fldkk\" (UID: \"4c8b48e7-e1fe-428d-96e5-4c39db533bf5\") " pod="kube-system/cilium-fldkk"
Jan 29 16:25:59.231693 kubelet[1845]: I0129 16:25:59.231658    1845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4c8b48e7-e1fe-428d-96e5-4c39db533bf5-xtables-lock\") pod \"cilium-fldkk\" (UID: \"4c8b48e7-e1fe-428d-96e5-4c39db533bf5\") " pod="kube-system/cilium-fldkk"
Jan 29 16:25:59.231693 kubelet[1845]: I0129 16:25:59.231687    1845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9tp2\" (UniqueName: \"kubernetes.io/projected/4c8b48e7-e1fe-428d-96e5-4c39db533bf5-kube-api-access-r9tp2\") pod \"cilium-fldkk\" (UID: \"4c8b48e7-e1fe-428d-96e5-4c39db533bf5\") " pod="kube-system/cilium-fldkk"
Jan 29 16:25:59.231828 kubelet[1845]: I0129 16:25:59.231736    1845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4c8b48e7-e1fe-428d-96e5-4c39db533bf5-bpf-maps\") pod \"cilium-fldkk\" (UID: \"4c8b48e7-e1fe-428d-96e5-4c39db533bf5\") " pod="kube-system/cilium-fldkk"
Jan 29 16:25:59.231828 kubelet[1845]: I0129 16:25:59.231814    1845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4c8b48e7-e1fe-428d-96e5-4c39db533bf5-etc-cni-netd\") pod \"cilium-fldkk\" (UID: \"4c8b48e7-e1fe-428d-96e5-4c39db533bf5\") " pod="kube-system/cilium-fldkk"
Jan 29 16:25:59.237637 systemd[1]: Created slice kubepods-burstable-pod4c8b48e7_e1fe_428d_96e5_4c39db533bf5.slice - libcontainer container kubepods-burstable-pod4c8b48e7_e1fe_428d_96e5_4c39db533bf5.slice.
Jan 29 16:25:59.535806 containerd[1511]: time="2025-01-29T16:25:59.535691072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jpfxv,Uid:9d37b602-c1f7-4fe3-bdf1-5e421ab718c9,Namespace:kube-system,Attempt:0,}"
Jan 29 16:25:59.552192 containerd[1511]: time="2025-01-29T16:25:59.552170447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fldkk,Uid:4c8b48e7-e1fe-428d-96e5-4c39db533bf5,Namespace:kube-system,Attempt:0,}"
Jan 29 16:26:00.211014 kubelet[1845]: E0129 16:26:00.210963    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:01.211766 kubelet[1845]: E0129 16:26:01.211729    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:02.212258 kubelet[1845]: E0129 16:26:02.212194    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:03.212439 kubelet[1845]: E0129 16:26:03.212323    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:04.212552 kubelet[1845]: E0129 16:26:04.212480    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:05.212926 kubelet[1845]: E0129 16:26:05.212873    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:05.697471 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2363480682.mount: Deactivated successfully.
Jan 29 16:26:05.706546 containerd[1511]: time="2025-01-29T16:26:05.706502356Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 16:26:05.708163 containerd[1511]: time="2025-01-29T16:26:05.708104180Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Jan 29 16:26:05.709063 containerd[1511]: time="2025-01-29T16:26:05.709030767Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 16:26:05.709895 containerd[1511]: time="2025-01-29T16:26:05.709859742Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 16:26:05.710835 containerd[1511]: time="2025-01-29T16:26:05.710799284Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 29 16:26:05.712778 containerd[1511]: time="2025-01-29T16:26:05.712749561Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 16:26:05.713513 containerd[1511]: time="2025-01-29T16:26:05.713483768Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 6.161262516s"
Jan 29 16:26:05.715515 containerd[1511]: time="2025-01-29T16:26:05.715485783Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 6.179693611s"
Jan 29 16:26:05.827575 containerd[1511]: time="2025-01-29T16:26:05.827408257Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 16:26:05.827575 containerd[1511]: time="2025-01-29T16:26:05.827467158Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 16:26:05.827575 containerd[1511]: time="2025-01-29T16:26:05.827498286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:26:05.828105 containerd[1511]: time="2025-01-29T16:26:05.826397993Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 16:26:05.828105 containerd[1511]: time="2025-01-29T16:26:05.828074176Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 16:26:05.828105 containerd[1511]: time="2025-01-29T16:26:05.828086600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:26:05.828249 containerd[1511]: time="2025-01-29T16:26:05.828162131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:26:05.828636 containerd[1511]: time="2025-01-29T16:26:05.828582149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:26:05.897067 systemd[1]: Started cri-containerd-8ba2293524caf8b1616a0c804aa2512d92866f059e7417ea70ce4fae520f20ec.scope - libcontainer container 8ba2293524caf8b1616a0c804aa2512d92866f059e7417ea70ce4fae520f20ec.
Jan 29 16:26:05.899110 systemd[1]: Started cri-containerd-bab1c16a0fc8bd9e260cd4180de4cb887092e5d956826faf182a9c7a0ab8bf5f.scope - libcontainer container bab1c16a0fc8bd9e260cd4180de4cb887092e5d956826faf182a9c7a0ab8bf5f.
Jan 29 16:26:05.921008 containerd[1511]: time="2025-01-29T16:26:05.920878618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fldkk,Uid:4c8b48e7-e1fe-428d-96e5-4c39db533bf5,Namespace:kube-system,Attempt:0,} returns sandbox id \"bab1c16a0fc8bd9e260cd4180de4cb887092e5d956826faf182a9c7a0ab8bf5f\""
Jan 29 16:26:05.923053 containerd[1511]: time="2025-01-29T16:26:05.923019272Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jan 29 16:26:05.926097 containerd[1511]: time="2025-01-29T16:26:05.926060776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jpfxv,Uid:9d37b602-c1f7-4fe3-bdf1-5e421ab718c9,Namespace:kube-system,Attempt:0,} returns sandbox id \"8ba2293524caf8b1616a0c804aa2512d92866f059e7417ea70ce4fae520f20ec\""
Jan 29 16:26:06.213210 kubelet[1845]: E0129 16:26:06.213175    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:07.213876 kubelet[1845]: E0129 16:26:07.213826    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:08.214873 kubelet[1845]: E0129 16:26:08.214818    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:09.215147 kubelet[1845]: E0129 16:26:09.215102    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:10.215936 kubelet[1845]: E0129 16:26:10.215837    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:11.217093 kubelet[1845]: E0129 16:26:11.216994    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:12.217866 kubelet[1845]: E0129 16:26:12.217786    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:13.220402 kubelet[1845]: E0129 16:26:13.218838    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:13.903678 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1831921252.mount: Deactivated successfully.
Jan 29 16:26:14.219938 kubelet[1845]: E0129 16:26:14.219801    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:15.221017 kubelet[1845]: E0129 16:26:15.220944    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:16.222000 kubelet[1845]: E0129 16:26:16.221955    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:17.207674 kubelet[1845]: E0129 16:26:17.207624    1845 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:17.223093 kubelet[1845]: E0129 16:26:17.223053    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:17.658901 containerd[1511]: time="2025-01-29T16:26:17.658792623Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:26:17.659573 containerd[1511]: time="2025-01-29T16:26:17.659540545Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Jan 29 16:26:17.660699 containerd[1511]: time="2025-01-29T16:26:17.660668946Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:26:17.662248 containerd[1511]: time="2025-01-29T16:26:17.662219333Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 11.739160707s"
Jan 29 16:26:17.662300 containerd[1511]: time="2025-01-29T16:26:17.662251325Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Jan 29 16:26:17.663558 containerd[1511]: time="2025-01-29T16:26:17.663534842Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\""
Jan 29 16:26:17.664473 containerd[1511]: time="2025-01-29T16:26:17.664444604Z" level=info msg="CreateContainer within sandbox \"bab1c16a0fc8bd9e260cd4180de4cb887092e5d956826faf182a9c7a0ab8bf5f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 29 16:26:17.679463 containerd[1511]: time="2025-01-29T16:26:17.679425705Z" level=info msg="CreateContainer within sandbox \"bab1c16a0fc8bd9e260cd4180de4cb887092e5d956826faf182a9c7a0ab8bf5f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"06f939328db71c0b4b85480bd6791efd9657fcf8b66f4689d3e26f6cd7118cc6\""
Jan 29 16:26:17.680026 containerd[1511]: time="2025-01-29T16:26:17.679984805Z" level=info msg="StartContainer for \"06f939328db71c0b4b85480bd6791efd9657fcf8b66f4689d3e26f6cd7118cc6\""
Jan 29 16:26:17.721039 systemd[1]: Started cri-containerd-06f939328db71c0b4b85480bd6791efd9657fcf8b66f4689d3e26f6cd7118cc6.scope - libcontainer container 06f939328db71c0b4b85480bd6791efd9657fcf8b66f4689d3e26f6cd7118cc6.
Jan 29 16:26:17.768983 containerd[1511]: time="2025-01-29T16:26:17.768942747Z" level=info msg="StartContainer for \"06f939328db71c0b4b85480bd6791efd9657fcf8b66f4689d3e26f6cd7118cc6\" returns successfully"
Jan 29 16:26:17.780495 systemd[1]: cri-containerd-06f939328db71c0b4b85480bd6791efd9657fcf8b66f4689d3e26f6cd7118cc6.scope: Deactivated successfully.
Jan 29 16:26:18.223715 kubelet[1845]: E0129 16:26:18.223668    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:18.252780 containerd[1511]: time="2025-01-29T16:26:18.252687741Z" level=info msg="shim disconnected" id=06f939328db71c0b4b85480bd6791efd9657fcf8b66f4689d3e26f6cd7118cc6 namespace=k8s.io
Jan 29 16:26:18.252780 containerd[1511]: time="2025-01-29T16:26:18.252765289Z" level=warning msg="cleaning up after shim disconnected" id=06f939328db71c0b4b85480bd6791efd9657fcf8b66f4689d3e26f6cd7118cc6 namespace=k8s.io
Jan 29 16:26:18.252780 containerd[1511]: time="2025-01-29T16:26:18.252775278Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:26:18.673665 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-06f939328db71c0b4b85480bd6791efd9657fcf8b66f4689d3e26f6cd7118cc6-rootfs.mount: Deactivated successfully.
Jan 29 16:26:18.830605 containerd[1511]: time="2025-01-29T16:26:18.830551665Z" level=info msg="CreateContainer within sandbox \"bab1c16a0fc8bd9e260cd4180de4cb887092e5d956826faf182a9c7a0ab8bf5f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 29 16:26:18.868052 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2800486243.mount: Deactivated successfully.
Jan 29 16:26:18.868662 containerd[1511]: time="2025-01-29T16:26:18.868610417Z" level=info msg="CreateContainer within sandbox \"bab1c16a0fc8bd9e260cd4180de4cb887092e5d956826faf182a9c7a0ab8bf5f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2edc577f522673b8118e5778c9592ae664d6756e27381ab43022afa80222e0e6\""
Jan 29 16:26:18.870164 containerd[1511]: time="2025-01-29T16:26:18.869246603Z" level=info msg="StartContainer for \"2edc577f522673b8118e5778c9592ae664d6756e27381ab43022afa80222e0e6\""
Jan 29 16:26:18.915035 systemd[1]: Started cri-containerd-2edc577f522673b8118e5778c9592ae664d6756e27381ab43022afa80222e0e6.scope - libcontainer container 2edc577f522673b8118e5778c9592ae664d6756e27381ab43022afa80222e0e6.
Jan 29 16:26:18.987025 containerd[1511]: time="2025-01-29T16:26:18.986727066Z" level=info msg="StartContainer for \"2edc577f522673b8118e5778c9592ae664d6756e27381ab43022afa80222e0e6\" returns successfully"
Jan 29 16:26:18.997481 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 29 16:26:18.998494 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 29 16:26:18.998753 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jan 29 16:26:19.010329 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 16:26:19.010606 systemd[1]: cri-containerd-2edc577f522673b8118e5778c9592ae664d6756e27381ab43022afa80222e0e6.scope: Deactivated successfully.
Jan 29 16:26:19.041393 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 16:26:19.155806 containerd[1511]: time="2025-01-29T16:26:19.155718957Z" level=info msg="shim disconnected" id=2edc577f522673b8118e5778c9592ae664d6756e27381ab43022afa80222e0e6 namespace=k8s.io
Jan 29 16:26:19.155806 containerd[1511]: time="2025-01-29T16:26:19.155788490Z" level=warning msg="cleaning up after shim disconnected" id=2edc577f522673b8118e5778c9592ae664d6756e27381ab43022afa80222e0e6 namespace=k8s.io
Jan 29 16:26:19.155806 containerd[1511]: time="2025-01-29T16:26:19.155797758Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:26:19.224891 kubelet[1845]: E0129 16:26:19.224848    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:19.674121 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2edc577f522673b8118e5778c9592ae664d6756e27381ab43022afa80222e0e6-rootfs.mount: Deactivated successfully.
Jan 29 16:26:19.832305 containerd[1511]: time="2025-01-29T16:26:19.832257333Z" level=info msg="CreateContainer within sandbox \"bab1c16a0fc8bd9e260cd4180de4cb887092e5d956826faf182a9c7a0ab8bf5f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 29 16:26:19.858085 containerd[1511]: time="2025-01-29T16:26:19.857973405Z" level=info msg="CreateContainer within sandbox \"bab1c16a0fc8bd9e260cd4180de4cb887092e5d956826faf182a9c7a0ab8bf5f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7dd47018bfdd029503c917c69dcccaf61188dc5b3e09ae7ec56b3edc2019ab52\""
Jan 29 16:26:19.859025 containerd[1511]: time="2025-01-29T16:26:19.858994816Z" level=info msg="StartContainer for \"7dd47018bfdd029503c917c69dcccaf61188dc5b3e09ae7ec56b3edc2019ab52\""
Jan 29 16:26:19.903259 systemd[1]: Started cri-containerd-7dd47018bfdd029503c917c69dcccaf61188dc5b3e09ae7ec56b3edc2019ab52.scope - libcontainer container 7dd47018bfdd029503c917c69dcccaf61188dc5b3e09ae7ec56b3edc2019ab52.
Jan 29 16:26:19.972011 containerd[1511]: time="2025-01-29T16:26:19.971585480Z" level=info msg="StartContainer for \"7dd47018bfdd029503c917c69dcccaf61188dc5b3e09ae7ec56b3edc2019ab52\" returns successfully"
Jan 29 16:26:19.971784 systemd[1]: cri-containerd-7dd47018bfdd029503c917c69dcccaf61188dc5b3e09ae7ec56b3edc2019ab52.scope: Deactivated successfully.
Jan 29 16:26:19.978631 containerd[1511]: time="2025-01-29T16:26:19.978582663Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:26:19.979316 containerd[1511]: time="2025-01-29T16:26:19.979251440Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.1: active requests=0, bytes read=30909466"
Jan 29 16:26:19.980505 containerd[1511]: time="2025-01-29T16:26:19.980479376Z" level=info msg="ImageCreate event name:\"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:26:19.982391 containerd[1511]: time="2025-01-29T16:26:19.982360118Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:26:19.983436 containerd[1511]: time="2025-01-29T16:26:19.983048051Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.1\" with image id \"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\", repo tag \"registry.k8s.io/kube-proxy:v1.32.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\", size \"30908485\" in 2.319486217s"
Jan 29 16:26:19.983436 containerd[1511]: time="2025-01-29T16:26:19.983073610Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\" returns image reference \"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\""
Jan 29 16:26:19.984905 containerd[1511]: time="2025-01-29T16:26:19.984883878Z" level=info msg="CreateContainer within sandbox \"8ba2293524caf8b1616a0c804aa2512d92866f059e7417ea70ce4fae520f20ec\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 29 16:26:20.166807 containerd[1511]: time="2025-01-29T16:26:20.166745468Z" level=info msg="CreateContainer within sandbox \"8ba2293524caf8b1616a0c804aa2512d92866f059e7417ea70ce4fae520f20ec\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c190357d5ea1cd9290f58b47e12ab4faf7e390ec2d6672a9121890af561cfc10\""
Jan 29 16:26:20.167176 containerd[1511]: time="2025-01-29T16:26:20.167139799Z" level=info msg="StartContainer for \"c190357d5ea1cd9290f58b47e12ab4faf7e390ec2d6672a9121890af561cfc10\""
Jan 29 16:26:20.197049 systemd[1]: Started cri-containerd-c190357d5ea1cd9290f58b47e12ab4faf7e390ec2d6672a9121890af561cfc10.scope - libcontainer container c190357d5ea1cd9290f58b47e12ab4faf7e390ec2d6672a9121890af561cfc10.
Jan 29 16:26:20.225065 kubelet[1845]: E0129 16:26:20.224963    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:20.319703 containerd[1511]: time="2025-01-29T16:26:20.319660935Z" level=info msg="StartContainer for \"c190357d5ea1cd9290f58b47e12ab4faf7e390ec2d6672a9121890af561cfc10\" returns successfully"
Jan 29 16:26:20.321303 containerd[1511]: time="2025-01-29T16:26:20.321251560Z" level=info msg="shim disconnected" id=7dd47018bfdd029503c917c69dcccaf61188dc5b3e09ae7ec56b3edc2019ab52 namespace=k8s.io
Jan 29 16:26:20.321367 containerd[1511]: time="2025-01-29T16:26:20.321302867Z" level=warning msg="cleaning up after shim disconnected" id=7dd47018bfdd029503c917c69dcccaf61188dc5b3e09ae7ec56b3edc2019ab52 namespace=k8s.io
Jan 29 16:26:20.321367 containerd[1511]: time="2025-01-29T16:26:20.321316073Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:26:20.675238 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7dd47018bfdd029503c917c69dcccaf61188dc5b3e09ae7ec56b3edc2019ab52-rootfs.mount: Deactivated successfully.
Jan 29 16:26:20.835902 containerd[1511]: time="2025-01-29T16:26:20.835865839Z" level=info msg="CreateContainer within sandbox \"bab1c16a0fc8bd9e260cd4180de4cb887092e5d956826faf182a9c7a0ab8bf5f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 29 16:26:20.950965 kubelet[1845]: I0129 16:26:20.950789    1845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jpfxv" podStartSLOduration=9.89392984 podStartE2EDuration="23.95077431s" podCreationTimestamp="2025-01-29 16:25:57 +0000 UTC" firstStartedPulling="2025-01-29 16:26:05.926855296 +0000 UTC m=+9.854493774" lastFinishedPulling="2025-01-29 16:26:19.983699766 +0000 UTC m=+23.911338244" observedRunningTime="2025-01-29 16:26:20.950560883 +0000 UTC m=+24.878199371" watchObservedRunningTime="2025-01-29 16:26:20.95077431 +0000 UTC m=+24.878412798"
Jan 29 16:26:21.074957 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2387319703.mount: Deactivated successfully.
Jan 29 16:26:21.084454 containerd[1511]: time="2025-01-29T16:26:21.084416252Z" level=info msg="CreateContainer within sandbox \"bab1c16a0fc8bd9e260cd4180de4cb887092e5d956826faf182a9c7a0ab8bf5f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6059328f50895363300c930ee94436bd9eb4a65d520ae3096dd5317d49b80ef9\""
Jan 29 16:26:21.084946 containerd[1511]: time="2025-01-29T16:26:21.084873483Z" level=info msg="StartContainer for \"6059328f50895363300c930ee94436bd9eb4a65d520ae3096dd5317d49b80ef9\""
Jan 29 16:26:21.156037 systemd[1]: Started cri-containerd-6059328f50895363300c930ee94436bd9eb4a65d520ae3096dd5317d49b80ef9.scope - libcontainer container 6059328f50895363300c930ee94436bd9eb4a65d520ae3096dd5317d49b80ef9.
Jan 29 16:26:21.189076 systemd[1]: cri-containerd-6059328f50895363300c930ee94436bd9eb4a65d520ae3096dd5317d49b80ef9.scope: Deactivated successfully.
Jan 29 16:26:21.191108 containerd[1511]: time="2025-01-29T16:26:21.191078620Z" level=info msg="StartContainer for \"6059328f50895363300c930ee94436bd9eb4a65d520ae3096dd5317d49b80ef9\" returns successfully"
Jan 29 16:26:21.218121 containerd[1511]: time="2025-01-29T16:26:21.217989201Z" level=info msg="shim disconnected" id=6059328f50895363300c930ee94436bd9eb4a65d520ae3096dd5317d49b80ef9 namespace=k8s.io
Jan 29 16:26:21.218121 containerd[1511]: time="2025-01-29T16:26:21.218073653Z" level=warning msg="cleaning up after shim disconnected" id=6059328f50895363300c930ee94436bd9eb4a65d520ae3096dd5317d49b80ef9 namespace=k8s.io
Jan 29 16:26:21.218121 containerd[1511]: time="2025-01-29T16:26:21.218083651Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:26:21.226081 kubelet[1845]: E0129 16:26:21.226020    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:21.673993 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6059328f50895363300c930ee94436bd9eb4a65d520ae3096dd5317d49b80ef9-rootfs.mount: Deactivated successfully.
Jan 29 16:26:21.840958 containerd[1511]: time="2025-01-29T16:26:21.840900674Z" level=info msg="CreateContainer within sandbox \"bab1c16a0fc8bd9e260cd4180de4cb887092e5d956826faf182a9c7a0ab8bf5f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 29 16:26:21.857756 containerd[1511]: time="2025-01-29T16:26:21.857706128Z" level=info msg="CreateContainer within sandbox \"bab1c16a0fc8bd9e260cd4180de4cb887092e5d956826faf182a9c7a0ab8bf5f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9da229004f7bae9f73259bb70f27ac270128f516037e1373178eb6370dc1ca25\""
Jan 29 16:26:21.858251 containerd[1511]: time="2025-01-29T16:26:21.858222413Z" level=info msg="StartContainer for \"9da229004f7bae9f73259bb70f27ac270128f516037e1373178eb6370dc1ca25\""
Jan 29 16:26:21.940051 systemd[1]: Started cri-containerd-9da229004f7bae9f73259bb70f27ac270128f516037e1373178eb6370dc1ca25.scope - libcontainer container 9da229004f7bae9f73259bb70f27ac270128f516037e1373178eb6370dc1ca25.
Jan 29 16:26:21.976285 containerd[1511]: time="2025-01-29T16:26:21.976223238Z" level=info msg="StartContainer for \"9da229004f7bae9f73259bb70f27ac270128f516037e1373178eb6370dc1ca25\" returns successfully"
Jan 29 16:26:22.073392 kubelet[1845]: I0129 16:26:22.073339    1845 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
Jan 29 16:26:22.227150 kubelet[1845]: E0129 16:26:22.227010    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:22.458983 kernel: Initializing XFRM netlink socket
Jan 29 16:26:22.884934 kubelet[1845]: I0129 16:26:22.883271    1845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-fldkk" podStartSLOduration=14.142561367 podStartE2EDuration="25.883249004s" podCreationTimestamp="2025-01-29 16:25:57 +0000 UTC" firstStartedPulling="2025-01-29 16:26:05.92243699 +0000 UTC m=+9.850075468" lastFinishedPulling="2025-01-29 16:26:17.663124627 +0000 UTC m=+21.590763105" observedRunningTime="2025-01-29 16:26:22.8830732 +0000 UTC m=+26.810711698" watchObservedRunningTime="2025-01-29 16:26:22.883249004 +0000 UTC m=+26.810887482"
Jan 29 16:26:23.228098 kubelet[1845]: E0129 16:26:23.227942    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:24.205250 systemd-networkd[1445]: cilium_host: Link UP
Jan 29 16:26:24.205415 systemd-networkd[1445]: cilium_net: Link UP
Jan 29 16:26:24.205583 systemd-networkd[1445]: cilium_net: Gained carrier
Jan 29 16:26:24.205745 systemd-networkd[1445]: cilium_host: Gained carrier
Jan 29 16:26:24.228127 kubelet[1845]: E0129 16:26:24.228083    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:24.237259 systemd-networkd[1445]: cilium_host: Gained IPv6LL
Jan 29 16:26:24.283317 systemd-networkd[1445]: cilium_net: Gained IPv6LL
Jan 29 16:26:24.311023 systemd-networkd[1445]: cilium_vxlan: Link UP
Jan 29 16:26:24.311032 systemd-networkd[1445]: cilium_vxlan: Gained carrier
Jan 29 16:26:24.626274 systemd[1]: Created slice kubepods-besteffort-podd2f2c68c_be29_4f22_8c3e_947a6c43ec1f.slice - libcontainer container kubepods-besteffort-podd2f2c68c_be29_4f22_8c3e_947a6c43ec1f.slice.
Jan 29 16:26:24.696979 kubelet[1845]: I0129 16:26:24.696904    1845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czq6h\" (UniqueName: \"kubernetes.io/projected/d2f2c68c-be29-4f22-8c3e-947a6c43ec1f-kube-api-access-czq6h\") pod \"nginx-deployment-7fcdb87857-lw46c\" (UID: \"d2f2c68c-be29-4f22-8c3e-947a6c43ec1f\") " pod="default/nginx-deployment-7fcdb87857-lw46c"
Jan 29 16:26:24.704952 kernel: NET: Registered PF_ALG protocol family
Jan 29 16:26:24.930336 containerd[1511]: time="2025-01-29T16:26:24.930155398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-lw46c,Uid:d2f2c68c-be29-4f22-8c3e-947a6c43ec1f,Namespace:default,Attempt:0,}"
Jan 29 16:26:25.229198 kubelet[1845]: E0129 16:26:25.229056    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:25.555573 systemd-networkd[1445]: lxc_health: Link UP
Jan 29 16:26:25.557956 systemd-networkd[1445]: lxc_health: Gained carrier
Jan 29 16:26:26.116358 systemd-networkd[1445]: lxc9abda3952a61: Link UP
Jan 29 16:26:26.125353 kernel: eth0: renamed from tmp147e1
Jan 29 16:26:26.131221 systemd-networkd[1445]: lxc9abda3952a61: Gained carrier
Jan 29 16:26:26.229720 kubelet[1845]: E0129 16:26:26.229668    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:26.307073 systemd-networkd[1445]: cilium_vxlan: Gained IPv6LL
Jan 29 16:26:26.691098 systemd-networkd[1445]: lxc_health: Gained IPv6LL
Jan 29 16:26:27.201419 update_engine[1495]: I20250129 16:26:27.201332  1495 update_attempter.cc:509] Updating boot flags...
Jan 29 16:26:27.228685 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2267)
Jan 29 16:26:27.230283 kubelet[1845]: E0129 16:26:27.230219    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:27.292755 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2267)
Jan 29 16:26:27.320934 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2267)
Jan 29 16:26:27.395178 systemd-networkd[1445]: lxc9abda3952a61: Gained IPv6LL
Jan 29 16:26:28.230996 kubelet[1845]: E0129 16:26:28.230926    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:29.130442 containerd[1511]: time="2025-01-29T16:26:29.129756794Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 16:26:29.130442 containerd[1511]: time="2025-01-29T16:26:29.130402847Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 16:26:29.130442 containerd[1511]: time="2025-01-29T16:26:29.130417925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:26:29.130866 containerd[1511]: time="2025-01-29T16:26:29.130501614Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:26:29.160046 systemd[1]: Started cri-containerd-147e1080a140a6203432cbe3ace65799eaa42bc3bae295f104d397382d52133c.scope - libcontainer container 147e1080a140a6203432cbe3ace65799eaa42bc3bae295f104d397382d52133c.
Jan 29 16:26:29.171000 systemd-resolved[1348]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 29 16:26:29.194427 containerd[1511]: time="2025-01-29T16:26:29.194397275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-lw46c,Uid:d2f2c68c-be29-4f22-8c3e-947a6c43ec1f,Namespace:default,Attempt:0,} returns sandbox id \"147e1080a140a6203432cbe3ace65799eaa42bc3bae295f104d397382d52133c\""
Jan 29 16:26:29.195560 containerd[1511]: time="2025-01-29T16:26:29.195532646Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jan 29 16:26:29.231284 kubelet[1845]: E0129 16:26:29.231251    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:30.232400 kubelet[1845]: E0129 16:26:30.232351    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:31.232587 kubelet[1845]: E0129 16:26:31.232523    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:32.048253 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1797436461.mount: Deactivated successfully.
Jan 29 16:26:32.232846 kubelet[1845]: E0129 16:26:32.232756    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:33.233551 kubelet[1845]: E0129 16:26:33.233505    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:33.381518 containerd[1511]: time="2025-01-29T16:26:33.379630995Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:26:33.382006 containerd[1511]: time="2025-01-29T16:26:33.381960125Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71015561"
Jan 29 16:26:33.383018 containerd[1511]: time="2025-01-29T16:26:33.382990963Z" level=info msg="ImageCreate event name:\"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:26:33.385231 containerd[1511]: time="2025-01-29T16:26:33.385208953Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:26:33.386118 containerd[1511]: time="2025-01-29T16:26:33.386088955Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 4.190520783s"
Jan 29 16:26:33.386156 containerd[1511]: time="2025-01-29T16:26:33.386120214Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\""
Jan 29 16:26:33.387881 containerd[1511]: time="2025-01-29T16:26:33.387849312Z" level=info msg="CreateContainer within sandbox \"147e1080a140a6203432cbe3ace65799eaa42bc3bae295f104d397382d52133c\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Jan 29 16:26:33.399714 containerd[1511]: time="2025-01-29T16:26:33.399679502Z" level=info msg="CreateContainer within sandbox \"147e1080a140a6203432cbe3ace65799eaa42bc3bae295f104d397382d52133c\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"3c9955c11222285de5f105d8a4959226c60d7db69ab1c6a20e7273fb4b9d4204\""
Jan 29 16:26:33.400177 containerd[1511]: time="2025-01-29T16:26:33.400143528Z" level=info msg="StartContainer for \"3c9955c11222285de5f105d8a4959226c60d7db69ab1c6a20e7273fb4b9d4204\""
Jan 29 16:26:33.400745 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2170363965.mount: Deactivated successfully.
Jan 29 16:26:33.607041 systemd[1]: Started cri-containerd-3c9955c11222285de5f105d8a4959226c60d7db69ab1c6a20e7273fb4b9d4204.scope - libcontainer container 3c9955c11222285de5f105d8a4959226c60d7db69ab1c6a20e7273fb4b9d4204.
Jan 29 16:26:33.654992 containerd[1511]: time="2025-01-29T16:26:33.654954525Z" level=info msg="StartContainer for \"3c9955c11222285de5f105d8a4959226c60d7db69ab1c6a20e7273fb4b9d4204\" returns successfully"
Jan 29 16:26:33.869557 kubelet[1845]: I0129 16:26:33.869416    1845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-lw46c" podStartSLOduration=5.677557707 podStartE2EDuration="9.869399067s" podCreationTimestamp="2025-01-29 16:26:24 +0000 UTC" firstStartedPulling="2025-01-29 16:26:29.195102952 +0000 UTC m=+33.122741430" lastFinishedPulling="2025-01-29 16:26:33.386944312 +0000 UTC m=+37.314582790" observedRunningTime="2025-01-29 16:26:33.869043775 +0000 UTC m=+37.796682274" watchObservedRunningTime="2025-01-29 16:26:33.869399067 +0000 UTC m=+37.797037545"
Jan 29 16:26:34.234457 kubelet[1845]: E0129 16:26:34.234321    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:35.234863 kubelet[1845]: E0129 16:26:35.234775    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:36.235523 kubelet[1845]: E0129 16:26:36.235454    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:36.657722 systemd[1]: Created slice kubepods-besteffort-pod32773409_8409_4871_acae_46671ee9f43a.slice - libcontainer container kubepods-besteffort-pod32773409_8409_4871_acae_46671ee9f43a.slice.
Jan 29 16:26:36.787558 kubelet[1845]: I0129 16:26:36.787487    1845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5fhl\" (UniqueName: \"kubernetes.io/projected/32773409-8409-4871-acae-46671ee9f43a-kube-api-access-v5fhl\") pod \"nfs-server-provisioner-0\" (UID: \"32773409-8409-4871-acae-46671ee9f43a\") " pod="default/nfs-server-provisioner-0"
Jan 29 16:26:36.787558 kubelet[1845]: I0129 16:26:36.787551    1845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/32773409-8409-4871-acae-46671ee9f43a-data\") pod \"nfs-server-provisioner-0\" (UID: \"32773409-8409-4871-acae-46671ee9f43a\") " pod="default/nfs-server-provisioner-0"
Jan 29 16:26:36.961066 containerd[1511]: time="2025-01-29T16:26:36.960939560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:32773409-8409-4871-acae-46671ee9f43a,Namespace:default,Attempt:0,}"
Jan 29 16:26:37.008997 systemd-networkd[1445]: lxcd5fc775c93bf: Link UP
Jan 29 16:26:37.017009 kernel: eth0: renamed from tmpbaca6
Jan 29 16:26:37.021136 systemd-networkd[1445]: lxcd5fc775c93bf: Gained carrier
Jan 29 16:26:37.208239 kubelet[1845]: E0129 16:26:37.208169    1845 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:37.235872 kubelet[1845]: E0129 16:26:37.235794    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:37.260003 containerd[1511]: time="2025-01-29T16:26:37.259589453Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 16:26:37.260003 containerd[1511]: time="2025-01-29T16:26:37.259664394Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 16:26:37.260003 containerd[1511]: time="2025-01-29T16:26:37.259678160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:26:37.260217 containerd[1511]: time="2025-01-29T16:26:37.259891393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:26:37.297073 systemd[1]: Started cri-containerd-baca6ef3471bb83fc39a06a739a7505c57560b22f60911ea3425695922fd6160.scope - libcontainer container baca6ef3471bb83fc39a06a739a7505c57560b22f60911ea3425695922fd6160.
Jan 29 16:26:37.309263 systemd-resolved[1348]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 29 16:26:37.332064 containerd[1511]: time="2025-01-29T16:26:37.332028963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:32773409-8409-4871-acae-46671ee9f43a,Namespace:default,Attempt:0,} returns sandbox id \"baca6ef3471bb83fc39a06a739a7505c57560b22f60911ea3425695922fd6160\""
Jan 29 16:26:37.333534 containerd[1511]: time="2025-01-29T16:26:37.333452649Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Jan 29 16:26:38.236962 kubelet[1845]: E0129 16:26:38.236847    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:38.276581 systemd-networkd[1445]: lxcd5fc775c93bf: Gained IPv6LL
Jan 29 16:26:39.238076 kubelet[1845]: E0129 16:26:39.238024    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:40.238887 kubelet[1845]: E0129 16:26:40.238839    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:40.759598 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2420981958.mount: Deactivated successfully.
Jan 29 16:26:41.239508 kubelet[1845]: E0129 16:26:41.239461    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:42.240234 kubelet[1845]: E0129 16:26:42.240126    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:42.833227 containerd[1511]: time="2025-01-29T16:26:42.833181748Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:26:42.834371 containerd[1511]: time="2025-01-29T16:26:42.834327746Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406"
Jan 29 16:26:42.836027 containerd[1511]: time="2025-01-29T16:26:42.835997270Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:26:42.838881 containerd[1511]: time="2025-01-29T16:26:42.838826487Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:26:42.840021 containerd[1511]: time="2025-01-29T16:26:42.839977295Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 5.506482286s"
Jan 29 16:26:42.840021 containerd[1511]: time="2025-01-29T16:26:42.840018692Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\""
Jan 29 16:26:42.845105 containerd[1511]: time="2025-01-29T16:26:42.845064445Z" level=info msg="CreateContainer within sandbox \"baca6ef3471bb83fc39a06a739a7505c57560b22f60911ea3425695922fd6160\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Jan 29 16:26:42.856045 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount391069473.mount: Deactivated successfully.
Jan 29 16:26:42.859112 containerd[1511]: time="2025-01-29T16:26:42.859077623Z" level=info msg="CreateContainer within sandbox \"baca6ef3471bb83fc39a06a739a7505c57560b22f60911ea3425695922fd6160\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"a38ee2cbad3951f062342c46a86a6c85fda6545368bbb2e90373c8f01a45c6e1\""
Jan 29 16:26:42.859445 containerd[1511]: time="2025-01-29T16:26:42.859423615Z" level=info msg="StartContainer for \"a38ee2cbad3951f062342c46a86a6c85fda6545368bbb2e90373c8f01a45c6e1\""
Jan 29 16:26:42.945134 systemd[1]: Started cri-containerd-a38ee2cbad3951f062342c46a86a6c85fda6545368bbb2e90373c8f01a45c6e1.scope - libcontainer container a38ee2cbad3951f062342c46a86a6c85fda6545368bbb2e90373c8f01a45c6e1.
Jan 29 16:26:43.220348 containerd[1511]: time="2025-01-29T16:26:43.220187504Z" level=info msg="StartContainer for \"a38ee2cbad3951f062342c46a86a6c85fda6545368bbb2e90373c8f01a45c6e1\" returns successfully"
Jan 29 16:26:43.241336 kubelet[1845]: E0129 16:26:43.241261    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:44.241778 kubelet[1845]: E0129 16:26:44.241733    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:44.297458 kubelet[1845]: I0129 16:26:44.297400    1845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.789849479 podStartE2EDuration="8.297384078s" podCreationTimestamp="2025-01-29 16:26:36 +0000 UTC" firstStartedPulling="2025-01-29 16:26:37.33324147 +0000 UTC m=+41.260879948" lastFinishedPulling="2025-01-29 16:26:42.840776069 +0000 UTC m=+46.768414547" observedRunningTime="2025-01-29 16:26:44.297282758 +0000 UTC m=+48.224921246" watchObservedRunningTime="2025-01-29 16:26:44.297384078 +0000 UTC m=+48.225022556"
Jan 29 16:26:45.242254 kubelet[1845]: E0129 16:26:45.242196    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:46.242595 kubelet[1845]: E0129 16:26:46.242543    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:47.243094 kubelet[1845]: E0129 16:26:47.243042    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:48.243528 kubelet[1845]: E0129 16:26:48.243483    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:49.244227 kubelet[1845]: E0129 16:26:49.244151    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:50.244975 kubelet[1845]: E0129 16:26:50.244904    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:51.246628 kubelet[1845]: E0129 16:26:51.246506    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:52.247342 kubelet[1845]: E0129 16:26:52.247290    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:52.848544 systemd[1]: Created slice kubepods-besteffort-podc4d3dbfc_a4d9_490d_abca_31c6b20623e5.slice - libcontainer container kubepods-besteffort-podc4d3dbfc_a4d9_490d_abca_31c6b20623e5.slice.
Jan 29 16:26:52.957814 kubelet[1845]: I0129 16:26:52.957749    1845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-721d7034-9d08-4972-8c65-f790b849879e\" (UniqueName: \"kubernetes.io/nfs/c4d3dbfc-a4d9-490d-abca-31c6b20623e5-pvc-721d7034-9d08-4972-8c65-f790b849879e\") pod \"test-pod-1\" (UID: \"c4d3dbfc-a4d9-490d-abca-31c6b20623e5\") " pod="default/test-pod-1"
Jan 29 16:26:52.957814 kubelet[1845]: I0129 16:26:52.957805    1845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtpjn\" (UniqueName: \"kubernetes.io/projected/c4d3dbfc-a4d9-490d-abca-31c6b20623e5-kube-api-access-wtpjn\") pod \"test-pod-1\" (UID: \"c4d3dbfc-a4d9-490d-abca-31c6b20623e5\") " pod="default/test-pod-1"
Jan 29 16:26:53.242949 kernel: FS-Cache: Loaded
Jan 29 16:26:53.248205 kubelet[1845]: E0129 16:26:53.248156    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:53.309369 kernel: RPC: Registered named UNIX socket transport module.
Jan 29 16:26:53.309492 kernel: RPC: Registered udp transport module.
Jan 29 16:26:53.309513 kernel: RPC: Registered tcp transport module.
Jan 29 16:26:53.309948 kernel: RPC: Registered tcp-with-tls transport module.
Jan 29 16:26:53.311389 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 29 16:26:53.528287 kernel: NFS: Registering the id_resolver key type
Jan 29 16:26:53.528407 kernel: Key type id_resolver registered
Jan 29 16:26:53.528427 kernel: Key type id_legacy registered
Jan 29 16:26:53.555135 nfsidmap[3244]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Jan 29 16:26:53.558049 nfsidmap[3245]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
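The two nfsidmap lines record NFSv4 ID mapping falling back to the anonymous user: the server names the owner root@nfs-server-provisioner.default.svc.cluster.local, but the client's idmapd domain is localdomain, so the name "does not map". A minimal sketch of that domain comparison; the helper below is illustrative only, the real lookup is performed by libnfsidmap's nss plugin:

    package main

    import (
        "fmt"
        "strings"
    )

    // mapName mirrors the idmap rule: an owner string only maps if its domain
    // part matches the locally configured NFSv4 domain.
    func mapName(owner, localDomain string) (string, bool) {
        user, domain, ok := strings.Cut(owner, "@")
        if !ok || !strings.EqualFold(domain, localDomain) {
            return "nobody", false // falls back to the anonymous user
        }
        return user, true
    }

    func main() {
        fmt.Println(mapName("root@nfs-server-provisioner.default.svc.cluster.local", "localdomain"))
    }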
Jan 29 16:26:53.752755 containerd[1511]: time="2025-01-29T16:26:53.752690005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:c4d3dbfc-a4d9-490d-abca-31c6b20623e5,Namespace:default,Attempt:0,}"
Jan 29 16:26:53.813294 kernel: eth0: renamed from tmp56c43
Jan 29 16:26:53.819957 systemd-networkd[1445]: lxc47249dc174a6: Link UP
Jan 29 16:26:53.821627 systemd-networkd[1445]: lxc47249dc174a6: Gained carrier
Jan 29 16:26:54.017989 containerd[1511]: time="2025-01-29T16:26:54.017892227Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 16:26:54.017989 containerd[1511]: time="2025-01-29T16:26:54.017964272Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 16:26:54.017989 containerd[1511]: time="2025-01-29T16:26:54.017974200Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:26:54.018220 containerd[1511]: time="2025-01-29T16:26:54.018042159Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:26:54.041093 systemd[1]: Started cri-containerd-56c4364f29f9528fb1e778686b451102e4e0f7f16000374756523a2038784790.scope - libcontainer container 56c4364f29f9528fb1e778686b451102e4e0f7f16000374756523a2038784790.
Jan 29 16:26:54.052666 systemd-resolved[1348]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 29 16:26:54.075701 containerd[1511]: time="2025-01-29T16:26:54.075595549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:c4d3dbfc-a4d9-490d-abca-31c6b20623e5,Namespace:default,Attempt:0,} returns sandbox id \"56c4364f29f9528fb1e778686b451102e4e0f7f16000374756523a2038784790\""
Jan 29 16:26:54.076701 containerd[1511]: time="2025-01-29T16:26:54.076676840Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jan 29 16:26:54.249235 kubelet[1845]: E0129 16:26:54.249193    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:54.490600 containerd[1511]: time="2025-01-29T16:26:54.490540271Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:26:54.491632 containerd[1511]: time="2025-01-29T16:26:54.491591477Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Jan 29 16:26:54.494470 containerd[1511]: time="2025-01-29T16:26:54.494434278Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 417.727651ms"
Jan 29 16:26:54.494470 containerd[1511]: time="2025-01-29T16:26:54.494465757Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\""
Jan 29 16:26:54.496297 containerd[1511]: time="2025-01-29T16:26:54.496272242Z" level=info msg="CreateContainer within sandbox \"56c4364f29f9528fb1e778686b451102e4e0f7f16000374756523a2038784790\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Jan 29 16:26:54.510683 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3460901820.mount: Deactivated successfully.
Jan 29 16:26:54.514283 containerd[1511]: time="2025-01-29T16:26:54.514227668Z" level=info msg="CreateContainer within sandbox \"56c4364f29f9528fb1e778686b451102e4e0f7f16000374756523a2038784790\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"f74b150eb38e924f6be7bfe1ce5327c08a13fdfe874d95392252d7fdcbfc8dc9\""
Jan 29 16:26:54.514798 containerd[1511]: time="2025-01-29T16:26:54.514759969Z" level=info msg="StartContainer for \"f74b150eb38e924f6be7bfe1ce5327c08a13fdfe874d95392252d7fdcbfc8dc9\""
Jan 29 16:26:54.545375 systemd[1]: Started cri-containerd-f74b150eb38e924f6be7bfe1ce5327c08a13fdfe874d95392252d7fdcbfc8dc9.scope - libcontainer container f74b150eb38e924f6be7bfe1ce5327c08a13fdfe874d95392252d7fdcbfc8dc9.
Jan 29 16:26:54.575243 containerd[1511]: time="2025-01-29T16:26:54.575178693Z" level=info msg="StartContainer for \"f74b150eb38e924f6be7bfe1ce5327c08a13fdfe874d95392252d7fdcbfc8dc9\" returns successfully"
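The messages above trace the CRI flow the kubelet drives for test-pod-1: the sandbox is created first, the image is resolved, then the container is created inside the sandbox and started. A minimal sketch of that ordering against a stand-in client; the method names mirror the log messages, not any specific CRI client library:

    package main

    import "fmt"

    // fakeRuntime is a stand-in for a CRI runtime client; only the call order matters here.
    type fakeRuntime struct{}

    func (fakeRuntime) RunPodSandbox(pod string) string {
        fmt.Println("RunPodSandbox for", pod)
        return "sandbox-id"
    }

    func (fakeRuntime) PullImage(ref string) string {
        fmt.Println("PullImage", ref)
        return "image-id"
    }

    func (fakeRuntime) CreateContainer(sandboxID, imageID string) string {
        fmt.Println("CreateContainer within sandbox", sandboxID, "from", imageID)
        return "container-id"
    }

    func (fakeRuntime) StartContainer(id string) {
        fmt.Println("StartContainer for", id)
    }

    func main() {
        var r fakeRuntime
        sb := r.RunPodSandbox("default/test-pod-1")
        img := r.PullImage("ghcr.io/flatcar/nginx:latest")
        ctr := r.CreateContainer(sb, img)
        r.StartContainer(ctr)
    }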
Jan 29 16:26:55.249632 kubelet[1845]: E0129 16:26:55.249595    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:55.875110 systemd-networkd[1445]: lxc47249dc174a6: Gained IPv6LL
Jan 29 16:26:56.250598 kubelet[1845]: E0129 16:26:56.250566    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:57.207488 kubelet[1845]: E0129 16:26:57.207433    1845 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:57.250819 kubelet[1845]: E0129 16:26:57.250771    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:58.250946 kubelet[1845]: E0129 16:26:58.250868    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:59.251409 kubelet[1845]: E0129 16:26:59.251328    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:59.599515 kubelet[1845]: I0129 16:26:59.599342    1845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=23.180658928 podStartE2EDuration="23.599323621s" podCreationTimestamp="2025-01-29 16:26:36 +0000 UTC" firstStartedPulling="2025-01-29 16:26:54.076428263 +0000 UTC m=+58.004066731" lastFinishedPulling="2025-01-29 16:26:54.495092946 +0000 UTC m=+58.422731424" observedRunningTime="2025-01-29 16:26:55.255433497 +0000 UTC m=+59.183071975" watchObservedRunningTime="2025-01-29 16:26:59.599323621 +0000 UTC m=+63.526962099"
Jan 29 16:26:59.631724 containerd[1511]: time="2025-01-29T16:26:59.631678342Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE        \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 29 16:26:59.639119 containerd[1511]: time="2025-01-29T16:26:59.639089390Z" level=info msg="StopContainer for \"9da229004f7bae9f73259bb70f27ac270128f516037e1373178eb6370dc1ca25\" with timeout 2 (s)"
Jan 29 16:26:59.639329 containerd[1511]: time="2025-01-29T16:26:59.639301337Z" level=info msg="Stop container \"9da229004f7bae9f73259bb70f27ac270128f516037e1373178eb6370dc1ca25\" with signal terminated"
Jan 29 16:26:59.646269 systemd-networkd[1445]: lxc_health: Link DOWN
Jan 29 16:26:59.646279 systemd-networkd[1445]: lxc_health: Lost carrier
Jan 29 16:26:59.666400 systemd[1]: cri-containerd-9da229004f7bae9f73259bb70f27ac270128f516037e1373178eb6370dc1ca25.scope: Deactivated successfully.
Jan 29 16:26:59.666846 systemd[1]: cri-containerd-9da229004f7bae9f73259bb70f27ac270128f516037e1373178eb6370dc1ca25.scope: Consumed 7.749s CPU time, 128.3M memory peak, 228K read from disk, 13.3M written to disk.
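The "Consumed 7.749s CPU time, 128.3M memory peak ..." summary is systemd reporting the scope's cgroup accounting as it deactivates. A rough sketch of where those counters live on a cgroup v2 host; the scope path below is a placeholder, and systemd reads the values itself before tearing the cgroup down:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        // Placeholder scope directory; the real one is named after the container ID.
        scope := "/sys/fs/cgroup/system.slice/cri-containerd-EXAMPLE.scope"

        if f, err := os.Open(scope + "/cpu.stat"); err == nil {
            defer f.Close()
            sc := bufio.NewScanner(f)
            for sc.Scan() {
                if v, ok := strings.CutPrefix(sc.Text(), "usage_usec "); ok {
                    fmt.Println("CPU time consumed (µs):", v) // surfaced as e.g. "7.749s CPU time"
                }
            }
        }

        if peak, err := os.ReadFile(scope + "/memory.peak"); err == nil {
            fmt.Println("memory peak (bytes):", strings.TrimSpace(string(peak)))
        }
    }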
Jan 29 16:26:59.688715 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9da229004f7bae9f73259bb70f27ac270128f516037e1373178eb6370dc1ca25-rootfs.mount: Deactivated successfully.
Jan 29 16:26:59.696077 containerd[1511]: time="2025-01-29T16:26:59.696013340Z" level=info msg="shim disconnected" id=9da229004f7bae9f73259bb70f27ac270128f516037e1373178eb6370dc1ca25 namespace=k8s.io
Jan 29 16:26:59.696077 containerd[1511]: time="2025-01-29T16:26:59.696073824Z" level=warning msg="cleaning up after shim disconnected" id=9da229004f7bae9f73259bb70f27ac270128f516037e1373178eb6370dc1ca25 namespace=k8s.io
Jan 29 16:26:59.696077 containerd[1511]: time="2025-01-29T16:26:59.696084133Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:26:59.713096 containerd[1511]: time="2025-01-29T16:26:59.713044765Z" level=info msg="StopContainer for \"9da229004f7bae9f73259bb70f27ac270128f516037e1373178eb6370dc1ca25\" returns successfully"
Jan 29 16:26:59.713729 containerd[1511]: time="2025-01-29T16:26:59.713700075Z" level=info msg="StopPodSandbox for \"bab1c16a0fc8bd9e260cd4180de4cb887092e5d956826faf182a9c7a0ab8bf5f\""
Jan 29 16:26:59.713800 containerd[1511]: time="2025-01-29T16:26:59.713755190Z" level=info msg="Container to stop \"06f939328db71c0b4b85480bd6791efd9657fcf8b66f4689d3e26f6cd7118cc6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 16:26:59.713800 containerd[1511]: time="2025-01-29T16:26:59.713794604Z" level=info msg="Container to stop \"6059328f50895363300c930ee94436bd9eb4a65d520ae3096dd5317d49b80ef9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 16:26:59.713872 containerd[1511]: time="2025-01-29T16:26:59.713803861Z" level=info msg="Container to stop \"9da229004f7bae9f73259bb70f27ac270128f516037e1373178eb6370dc1ca25\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 16:26:59.713872 containerd[1511]: time="2025-01-29T16:26:59.713812677Z" level=info msg="Container to stop \"2edc577f522673b8118e5778c9592ae664d6756e27381ab43022afa80222e0e6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 16:26:59.713872 containerd[1511]: time="2025-01-29T16:26:59.713822566Z" level=info msg="Container to stop \"7dd47018bfdd029503c917c69dcccaf61188dc5b3e09ae7ec56b3edc2019ab52\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 16:26:59.716425 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bab1c16a0fc8bd9e260cd4180de4cb887092e5d956826faf182a9c7a0ab8bf5f-shm.mount: Deactivated successfully.
Jan 29 16:26:59.721000 systemd[1]: cri-containerd-bab1c16a0fc8bd9e260cd4180de4cb887092e5d956826faf182a9c7a0ab8bf5f.scope: Deactivated successfully.
Jan 29 16:26:59.743924 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bab1c16a0fc8bd9e260cd4180de4cb887092e5d956826faf182a9c7a0ab8bf5f-rootfs.mount: Deactivated successfully.
Jan 29 16:26:59.748356 containerd[1511]: time="2025-01-29T16:26:59.748277651Z" level=info msg="shim disconnected" id=bab1c16a0fc8bd9e260cd4180de4cb887092e5d956826faf182a9c7a0ab8bf5f namespace=k8s.io
Jan 29 16:26:59.748356 containerd[1511]: time="2025-01-29T16:26:59.748341832Z" level=warning msg="cleaning up after shim disconnected" id=bab1c16a0fc8bd9e260cd4180de4cb887092e5d956826faf182a9c7a0ab8bf5f namespace=k8s.io
Jan 29 16:26:59.748356 containerd[1511]: time="2025-01-29T16:26:59.748356920Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:26:59.762633 containerd[1511]: time="2025-01-29T16:26:59.762583246Z" level=info msg="TearDown network for sandbox \"bab1c16a0fc8bd9e260cd4180de4cb887092e5d956826faf182a9c7a0ab8bf5f\" successfully"
Jan 29 16:26:59.762633 containerd[1511]: time="2025-01-29T16:26:59.762619304Z" level=info msg="StopPodSandbox for \"bab1c16a0fc8bd9e260cd4180de4cb887092e5d956826faf182a9c7a0ab8bf5f\" returns successfully"
Jan 29 16:26:59.899649 kubelet[1845]: I0129 16:26:59.899488    1845 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4c8b48e7-e1fe-428d-96e5-4c39db533bf5-clustermesh-secrets\") pod \"4c8b48e7-e1fe-428d-96e5-4c39db533bf5\" (UID: \"4c8b48e7-e1fe-428d-96e5-4c39db533bf5\") "
Jan 29 16:26:59.899649 kubelet[1845]: I0129 16:26:59.899554    1845 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4c8b48e7-e1fe-428d-96e5-4c39db533bf5-host-proc-sys-kernel\") pod \"4c8b48e7-e1fe-428d-96e5-4c39db533bf5\" (UID: \"4c8b48e7-e1fe-428d-96e5-4c39db533bf5\") "
Jan 29 16:26:59.899649 kubelet[1845]: I0129 16:26:59.899580    1845 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4c8b48e7-e1fe-428d-96e5-4c39db533bf5-cilium-run\") pod \"4c8b48e7-e1fe-428d-96e5-4c39db533bf5\" (UID: \"4c8b48e7-e1fe-428d-96e5-4c39db533bf5\") "
Jan 29 16:26:59.899649 kubelet[1845]: I0129 16:26:59.899601    1845 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4c8b48e7-e1fe-428d-96e5-4c39db533bf5-hubble-tls\") pod \"4c8b48e7-e1fe-428d-96e5-4c39db533bf5\" (UID: \"4c8b48e7-e1fe-428d-96e5-4c39db533bf5\") "
Jan 29 16:26:59.899649 kubelet[1845]: I0129 16:26:59.899616    1845 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9tp2\" (UniqueName: \"kubernetes.io/projected/4c8b48e7-e1fe-428d-96e5-4c39db533bf5-kube-api-access-r9tp2\") pod \"4c8b48e7-e1fe-428d-96e5-4c39db533bf5\" (UID: \"4c8b48e7-e1fe-428d-96e5-4c39db533bf5\") "
Jan 29 16:26:59.899649 kubelet[1845]: I0129 16:26:59.899629    1845 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4c8b48e7-e1fe-428d-96e5-4c39db533bf5-etc-cni-netd\") pod \"4c8b48e7-e1fe-428d-96e5-4c39db533bf5\" (UID: \"4c8b48e7-e1fe-428d-96e5-4c39db533bf5\") "
Jan 29 16:26:59.900001 kubelet[1845]: I0129 16:26:59.899642    1845 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4c8b48e7-e1fe-428d-96e5-4c39db533bf5-hostproc\") pod \"4c8b48e7-e1fe-428d-96e5-4c39db533bf5\" (UID: \"4c8b48e7-e1fe-428d-96e5-4c39db533bf5\") "
Jan 29 16:26:59.900001 kubelet[1845]: I0129 16:26:59.899654    1845 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4c8b48e7-e1fe-428d-96e5-4c39db533bf5-lib-modules\") pod \"4c8b48e7-e1fe-428d-96e5-4c39db533bf5\" (UID: \"4c8b48e7-e1fe-428d-96e5-4c39db533bf5\") "
Jan 29 16:26:59.900001 kubelet[1845]: I0129 16:26:59.899676    1845 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4c8b48e7-e1fe-428d-96e5-4c39db533bf5-cni-path\") pod \"4c8b48e7-e1fe-428d-96e5-4c39db533bf5\" (UID: \"4c8b48e7-e1fe-428d-96e5-4c39db533bf5\") "
Jan 29 16:26:59.900001 kubelet[1845]: I0129 16:26:59.899690    1845 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4c8b48e7-e1fe-428d-96e5-4c39db533bf5-xtables-lock\") pod \"4c8b48e7-e1fe-428d-96e5-4c39db533bf5\" (UID: \"4c8b48e7-e1fe-428d-96e5-4c39db533bf5\") "
Jan 29 16:26:59.900001 kubelet[1845]: I0129 16:26:59.899702    1845 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4c8b48e7-e1fe-428d-96e5-4c39db533bf5-bpf-maps\") pod \"4c8b48e7-e1fe-428d-96e5-4c39db533bf5\" (UID: \"4c8b48e7-e1fe-428d-96e5-4c39db533bf5\") "
Jan 29 16:26:59.900001 kubelet[1845]: I0129 16:26:59.899717    1845 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4c8b48e7-e1fe-428d-96e5-4c39db533bf5-cilium-cgroup\") pod \"4c8b48e7-e1fe-428d-96e5-4c39db533bf5\" (UID: \"4c8b48e7-e1fe-428d-96e5-4c39db533bf5\") "
Jan 29 16:26:59.900154 kubelet[1845]: I0129 16:26:59.899743    1845 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4c8b48e7-e1fe-428d-96e5-4c39db533bf5-cilium-config-path\") pod \"4c8b48e7-e1fe-428d-96e5-4c39db533bf5\" (UID: \"4c8b48e7-e1fe-428d-96e5-4c39db533bf5\") "
Jan 29 16:26:59.900154 kubelet[1845]: I0129 16:26:59.899757    1845 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4c8b48e7-e1fe-428d-96e5-4c39db533bf5-host-proc-sys-net\") pod \"4c8b48e7-e1fe-428d-96e5-4c39db533bf5\" (UID: \"4c8b48e7-e1fe-428d-96e5-4c39db533bf5\") "
Jan 29 16:26:59.900154 kubelet[1845]: I0129 16:26:59.899654    1845 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c8b48e7-e1fe-428d-96e5-4c39db533bf5-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4c8b48e7-e1fe-428d-96e5-4c39db533bf5" (UID: "4c8b48e7-e1fe-428d-96e5-4c39db533bf5"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 29 16:26:59.900154 kubelet[1845]: I0129 16:26:59.899808    1845 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c8b48e7-e1fe-428d-96e5-4c39db533bf5-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4c8b48e7-e1fe-428d-96e5-4c39db533bf5" (UID: "4c8b48e7-e1fe-428d-96e5-4c39db533bf5"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 29 16:26:59.900154 kubelet[1845]: I0129 16:26:59.899819    1845 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c8b48e7-e1fe-428d-96e5-4c39db533bf5-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4c8b48e7-e1fe-428d-96e5-4c39db533bf5" (UID: "4c8b48e7-e1fe-428d-96e5-4c39db533bf5"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 29 16:26:59.900269 kubelet[1845]: I0129 16:26:59.899830    1845 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c8b48e7-e1fe-428d-96e5-4c39db533bf5-hostproc" (OuterVolumeSpecName: "hostproc") pod "4c8b48e7-e1fe-428d-96e5-4c39db533bf5" (UID: "4c8b48e7-e1fe-428d-96e5-4c39db533bf5"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 29 16:26:59.900269 kubelet[1845]: I0129 16:26:59.899839    1845 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c8b48e7-e1fe-428d-96e5-4c39db533bf5-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4c8b48e7-e1fe-428d-96e5-4c39db533bf5" (UID: "4c8b48e7-e1fe-428d-96e5-4c39db533bf5"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 29 16:26:59.900269 kubelet[1845]: I0129 16:26:59.899850    1845 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c8b48e7-e1fe-428d-96e5-4c39db533bf5-cni-path" (OuterVolumeSpecName: "cni-path") pod "4c8b48e7-e1fe-428d-96e5-4c39db533bf5" (UID: "4c8b48e7-e1fe-428d-96e5-4c39db533bf5"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 29 16:26:59.900269 kubelet[1845]: I0129 16:26:59.899933    1845 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c8b48e7-e1fe-428d-96e5-4c39db533bf5-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4c8b48e7-e1fe-428d-96e5-4c39db533bf5" (UID: "4c8b48e7-e1fe-428d-96e5-4c39db533bf5"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 29 16:26:59.900269 kubelet[1845]: I0129 16:26:59.899905    1845 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c8b48e7-e1fe-428d-96e5-4c39db533bf5-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4c8b48e7-e1fe-428d-96e5-4c39db533bf5" (UID: "4c8b48e7-e1fe-428d-96e5-4c39db533bf5"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 29 16:26:59.900391 kubelet[1845]: I0129 16:26:59.899950    1845 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c8b48e7-e1fe-428d-96e5-4c39db533bf5-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4c8b48e7-e1fe-428d-96e5-4c39db533bf5" (UID: "4c8b48e7-e1fe-428d-96e5-4c39db533bf5"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 29 16:26:59.900391 kubelet[1845]: I0129 16:26:59.899991    1845 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c8b48e7-e1fe-428d-96e5-4c39db533bf5-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4c8b48e7-e1fe-428d-96e5-4c39db533bf5" (UID: "4c8b48e7-e1fe-428d-96e5-4c39db533bf5"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 29 16:26:59.903819 kubelet[1845]: I0129 16:26:59.903271    1845 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c8b48e7-e1fe-428d-96e5-4c39db533bf5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4c8b48e7-e1fe-428d-96e5-4c39db533bf5" (UID: "4c8b48e7-e1fe-428d-96e5-4c39db533bf5"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 29 16:26:59.904648 systemd[1]: var-lib-kubelet-pods-4c8b48e7\x2de1fe\x2d428d\x2d96e5\x2d4c39db533bf5-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jan 29 16:26:59.906062 kubelet[1845]: I0129 16:26:59.906018    1845 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c8b48e7-e1fe-428d-96e5-4c39db533bf5-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4c8b48e7-e1fe-428d-96e5-4c39db533bf5" (UID: "4c8b48e7-e1fe-428d-96e5-4c39db533bf5"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 29 16:26:59.906167 kubelet[1845]: I0129 16:26:59.906137    1845 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c8b48e7-e1fe-428d-96e5-4c39db533bf5-kube-api-access-r9tp2" (OuterVolumeSpecName: "kube-api-access-r9tp2") pod "4c8b48e7-e1fe-428d-96e5-4c39db533bf5" (UID: "4c8b48e7-e1fe-428d-96e5-4c39db533bf5"). InnerVolumeSpecName "kube-api-access-r9tp2". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 29 16:26:59.906585 kubelet[1845]: I0129 16:26:59.906545    1845 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c8b48e7-e1fe-428d-96e5-4c39db533bf5-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4c8b48e7-e1fe-428d-96e5-4c39db533bf5" (UID: "4c8b48e7-e1fe-428d-96e5-4c39db533bf5"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 29 16:27:00.000337 kubelet[1845]: I0129 16:27:00.000271    1845 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4c8b48e7-e1fe-428d-96e5-4c39db533bf5-bpf-maps\") on node \"10.0.0.148\" DevicePath \"\""
Jan 29 16:27:00.000337 kubelet[1845]: I0129 16:27:00.000317    1845 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4c8b48e7-e1fe-428d-96e5-4c39db533bf5-cni-path\") on node \"10.0.0.148\" DevicePath \"\""
Jan 29 16:27:00.000337 kubelet[1845]: I0129 16:27:00.000326    1845 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4c8b48e7-e1fe-428d-96e5-4c39db533bf5-xtables-lock\") on node \"10.0.0.148\" DevicePath \"\""
Jan 29 16:27:00.000337 kubelet[1845]: I0129 16:27:00.000336    1845 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4c8b48e7-e1fe-428d-96e5-4c39db533bf5-cilium-config-path\") on node \"10.0.0.148\" DevicePath \"\""
Jan 29 16:27:00.000337 kubelet[1845]: I0129 16:27:00.000349    1845 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4c8b48e7-e1fe-428d-96e5-4c39db533bf5-host-proc-sys-net\") on node \"10.0.0.148\" DevicePath \"\""
Jan 29 16:27:00.000337 kubelet[1845]: I0129 16:27:00.000359    1845 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4c8b48e7-e1fe-428d-96e5-4c39db533bf5-cilium-cgroup\") on node \"10.0.0.148\" DevicePath \"\""
Jan 29 16:27:00.000337 kubelet[1845]: I0129 16:27:00.000367    1845 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4c8b48e7-e1fe-428d-96e5-4c39db533bf5-host-proc-sys-kernel\") on node \"10.0.0.148\" DevicePath \"\""
Jan 29 16:27:00.000747 kubelet[1845]: I0129 16:27:00.000376    1845 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4c8b48e7-e1fe-428d-96e5-4c39db533bf5-clustermesh-secrets\") on node \"10.0.0.148\" DevicePath \"\""
Jan 29 16:27:00.000747 kubelet[1845]: I0129 16:27:00.000384    1845 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4c8b48e7-e1fe-428d-96e5-4c39db533bf5-etc-cni-netd\") on node \"10.0.0.148\" DevicePath \"\""
Jan 29 16:27:00.000747 kubelet[1845]: I0129 16:27:00.000391    1845 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4c8b48e7-e1fe-428d-96e5-4c39db533bf5-hostproc\") on node \"10.0.0.148\" DevicePath \"\""
Jan 29 16:27:00.000747 kubelet[1845]: I0129 16:27:00.000399    1845 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4c8b48e7-e1fe-428d-96e5-4c39db533bf5-lib-modules\") on node \"10.0.0.148\" DevicePath \"\""
Jan 29 16:27:00.000747 kubelet[1845]: I0129 16:27:00.000406    1845 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4c8b48e7-e1fe-428d-96e5-4c39db533bf5-cilium-run\") on node \"10.0.0.148\" DevicePath \"\""
Jan 29 16:27:00.000747 kubelet[1845]: I0129 16:27:00.000415    1845 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4c8b48e7-e1fe-428d-96e5-4c39db533bf5-hubble-tls\") on node \"10.0.0.148\" DevicePath \"\""
Jan 29 16:27:00.000747 kubelet[1845]: I0129 16:27:00.000423    1845 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-r9tp2\" (UniqueName: \"kubernetes.io/projected/4c8b48e7-e1fe-428d-96e5-4c39db533bf5-kube-api-access-r9tp2\") on node \"10.0.0.148\" DevicePath \"\""
Jan 29 16:27:00.252284 kubelet[1845]: E0129 16:27:00.252205    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:27:00.257172 kubelet[1845]: I0129 16:27:00.257142    1845 scope.go:117] "RemoveContainer" containerID="9da229004f7bae9f73259bb70f27ac270128f516037e1373178eb6370dc1ca25"
Jan 29 16:27:00.258215 containerd[1511]: time="2025-01-29T16:27:00.258187519Z" level=info msg="RemoveContainer for \"9da229004f7bae9f73259bb70f27ac270128f516037e1373178eb6370dc1ca25\""
Jan 29 16:27:00.261895 containerd[1511]: time="2025-01-29T16:27:00.261863613Z" level=info msg="RemoveContainer for \"9da229004f7bae9f73259bb70f27ac270128f516037e1373178eb6370dc1ca25\" returns successfully"
Jan 29 16:27:00.261995 systemd[1]: Removed slice kubepods-burstable-pod4c8b48e7_e1fe_428d_96e5_4c39db533bf5.slice - libcontainer container kubepods-burstable-pod4c8b48e7_e1fe_428d_96e5_4c39db533bf5.slice.
Jan 29 16:27:00.262160 kubelet[1845]: I0129 16:27:00.262042    1845 scope.go:117] "RemoveContainer" containerID="6059328f50895363300c930ee94436bd9eb4a65d520ae3096dd5317d49b80ef9"
Jan 29 16:27:00.262097 systemd[1]: kubepods-burstable-pod4c8b48e7_e1fe_428d_96e5_4c39db533bf5.slice: Consumed 7.897s CPU time, 128.8M memory peak, 228K read from disk, 13.3M written to disk.
Jan 29 16:27:00.263057 containerd[1511]: time="2025-01-29T16:27:00.262976843Z" level=info msg="RemoveContainer for \"6059328f50895363300c930ee94436bd9eb4a65d520ae3096dd5317d49b80ef9\""
Jan 29 16:27:00.266266 containerd[1511]: time="2025-01-29T16:27:00.266237415Z" level=info msg="RemoveContainer for \"6059328f50895363300c930ee94436bd9eb4a65d520ae3096dd5317d49b80ef9\" returns successfully"
Jan 29 16:27:00.266388 kubelet[1845]: I0129 16:27:00.266360    1845 scope.go:117] "RemoveContainer" containerID="7dd47018bfdd029503c917c69dcccaf61188dc5b3e09ae7ec56b3edc2019ab52"
Jan 29 16:27:00.267112 containerd[1511]: time="2025-01-29T16:27:00.267084506Z" level=info msg="RemoveContainer for \"7dd47018bfdd029503c917c69dcccaf61188dc5b3e09ae7ec56b3edc2019ab52\""
Jan 29 16:27:00.273872 containerd[1511]: time="2025-01-29T16:27:00.273830915Z" level=info msg="RemoveContainer for \"7dd47018bfdd029503c917c69dcccaf61188dc5b3e09ae7ec56b3edc2019ab52\" returns successfully"
Jan 29 16:27:00.274000 kubelet[1845]: I0129 16:27:00.273948    1845 scope.go:117] "RemoveContainer" containerID="2edc577f522673b8118e5778c9592ae664d6756e27381ab43022afa80222e0e6"
Jan 29 16:27:00.274793 containerd[1511]: time="2025-01-29T16:27:00.274753477Z" level=info msg="RemoveContainer for \"2edc577f522673b8118e5778c9592ae664d6756e27381ab43022afa80222e0e6\""
Jan 29 16:27:00.277955 containerd[1511]: time="2025-01-29T16:27:00.277933228Z" level=info msg="RemoveContainer for \"2edc577f522673b8118e5778c9592ae664d6756e27381ab43022afa80222e0e6\" returns successfully"
Jan 29 16:27:00.278086 kubelet[1845]: I0129 16:27:00.278070    1845 scope.go:117] "RemoveContainer" containerID="06f939328db71c0b4b85480bd6791efd9657fcf8b66f4689d3e26f6cd7118cc6"
Jan 29 16:27:00.278757 containerd[1511]: time="2025-01-29T16:27:00.278716168Z" level=info msg="RemoveContainer for \"06f939328db71c0b4b85480bd6791efd9657fcf8b66f4689d3e26f6cd7118cc6\""
Jan 29 16:27:00.281611 containerd[1511]: time="2025-01-29T16:27:00.281587791Z" level=info msg="RemoveContainer for \"06f939328db71c0b4b85480bd6791efd9657fcf8b66f4689d3e26f6cd7118cc6\" returns successfully"
Jan 29 16:27:00.281730 kubelet[1845]: I0129 16:27:00.281703    1845 scope.go:117] "RemoveContainer" containerID="9da229004f7bae9f73259bb70f27ac270128f516037e1373178eb6370dc1ca25"
Jan 29 16:27:00.281856 containerd[1511]: time="2025-01-29T16:27:00.281825647Z" level=error msg="ContainerStatus for \"9da229004f7bae9f73259bb70f27ac270128f516037e1373178eb6370dc1ca25\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9da229004f7bae9f73259bb70f27ac270128f516037e1373178eb6370dc1ca25\": not found"
Jan 29 16:27:00.281980 kubelet[1845]: E0129 16:27:00.281954    1845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9da229004f7bae9f73259bb70f27ac270128f516037e1373178eb6370dc1ca25\": not found" containerID="9da229004f7bae9f73259bb70f27ac270128f516037e1373178eb6370dc1ca25"
Jan 29 16:27:00.282021 kubelet[1845]: I0129 16:27:00.281979    1845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9da229004f7bae9f73259bb70f27ac270128f516037e1373178eb6370dc1ca25"} err="failed to get container status \"9da229004f7bae9f73259bb70f27ac270128f516037e1373178eb6370dc1ca25\": rpc error: code = NotFound desc = an error occurred when try to find container \"9da229004f7bae9f73259bb70f27ac270128f516037e1373178eb6370dc1ca25\": not found"
Jan 29 16:27:00.282021 kubelet[1845]: I0129 16:27:00.282012    1845 scope.go:117] "RemoveContainer" containerID="6059328f50895363300c930ee94436bd9eb4a65d520ae3096dd5317d49b80ef9"
Jan 29 16:27:00.282326 containerd[1511]: time="2025-01-29T16:27:00.282264000Z" level=error msg="ContainerStatus for \"6059328f50895363300c930ee94436bd9eb4a65d520ae3096dd5317d49b80ef9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6059328f50895363300c930ee94436bd9eb4a65d520ae3096dd5317d49b80ef9\": not found"
Jan 29 16:27:00.282447 kubelet[1845]: E0129 16:27:00.282431    1845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6059328f50895363300c930ee94436bd9eb4a65d520ae3096dd5317d49b80ef9\": not found" containerID="6059328f50895363300c930ee94436bd9eb4a65d520ae3096dd5317d49b80ef9"
Jan 29 16:27:00.282482 kubelet[1845]: I0129 16:27:00.282451    1845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6059328f50895363300c930ee94436bd9eb4a65d520ae3096dd5317d49b80ef9"} err="failed to get container status \"6059328f50895363300c930ee94436bd9eb4a65d520ae3096dd5317d49b80ef9\": rpc error: code = NotFound desc = an error occurred when try to find container \"6059328f50895363300c930ee94436bd9eb4a65d520ae3096dd5317d49b80ef9\": not found"
Jan 29 16:27:00.282482 kubelet[1845]: I0129 16:27:00.282464    1845 scope.go:117] "RemoveContainer" containerID="7dd47018bfdd029503c917c69dcccaf61188dc5b3e09ae7ec56b3edc2019ab52"
Jan 29 16:27:00.282626 containerd[1511]: time="2025-01-29T16:27:00.282598138Z" level=error msg="ContainerStatus for \"7dd47018bfdd029503c917c69dcccaf61188dc5b3e09ae7ec56b3edc2019ab52\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7dd47018bfdd029503c917c69dcccaf61188dc5b3e09ae7ec56b3edc2019ab52\": not found"
Jan 29 16:27:00.282748 kubelet[1845]: E0129 16:27:00.282712    1845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7dd47018bfdd029503c917c69dcccaf61188dc5b3e09ae7ec56b3edc2019ab52\": not found" containerID="7dd47018bfdd029503c917c69dcccaf61188dc5b3e09ae7ec56b3edc2019ab52"
Jan 29 16:27:00.282793 kubelet[1845]: I0129 16:27:00.282749    1845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7dd47018bfdd029503c917c69dcccaf61188dc5b3e09ae7ec56b3edc2019ab52"} err="failed to get container status \"7dd47018bfdd029503c917c69dcccaf61188dc5b3e09ae7ec56b3edc2019ab52\": rpc error: code = NotFound desc = an error occurred when try to find container \"7dd47018bfdd029503c917c69dcccaf61188dc5b3e09ae7ec56b3edc2019ab52\": not found"
Jan 29 16:27:00.282793 kubelet[1845]: I0129 16:27:00.282767    1845 scope.go:117] "RemoveContainer" containerID="2edc577f522673b8118e5778c9592ae664d6756e27381ab43022afa80222e0e6"
Jan 29 16:27:00.283019 containerd[1511]: time="2025-01-29T16:27:00.282970998Z" level=error msg="ContainerStatus for \"2edc577f522673b8118e5778c9592ae664d6756e27381ab43022afa80222e0e6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2edc577f522673b8118e5778c9592ae664d6756e27381ab43022afa80222e0e6\": not found"
Jan 29 16:27:00.283118 kubelet[1845]: E0129 16:27:00.283098    1845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2edc577f522673b8118e5778c9592ae664d6756e27381ab43022afa80222e0e6\": not found" containerID="2edc577f522673b8118e5778c9592ae664d6756e27381ab43022afa80222e0e6"
Jan 29 16:27:00.283150 kubelet[1845]: I0129 16:27:00.283117    1845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2edc577f522673b8118e5778c9592ae664d6756e27381ab43022afa80222e0e6"} err="failed to get container status \"2edc577f522673b8118e5778c9592ae664d6756e27381ab43022afa80222e0e6\": rpc error: code = NotFound desc = an error occurred when try to find container \"2edc577f522673b8118e5778c9592ae664d6756e27381ab43022afa80222e0e6\": not found"
Jan 29 16:27:00.283150 kubelet[1845]: I0129 16:27:00.283130    1845 scope.go:117] "RemoveContainer" containerID="06f939328db71c0b4b85480bd6791efd9657fcf8b66f4689d3e26f6cd7118cc6"
Jan 29 16:27:00.283290 containerd[1511]: time="2025-01-29T16:27:00.283253719Z" level=error msg="ContainerStatus for \"06f939328db71c0b4b85480bd6791efd9657fcf8b66f4689d3e26f6cd7118cc6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"06f939328db71c0b4b85480bd6791efd9657fcf8b66f4689d3e26f6cd7118cc6\": not found"
Jan 29 16:27:00.283546 kubelet[1845]: E0129 16:27:00.283347    1845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"06f939328db71c0b4b85480bd6791efd9657fcf8b66f4689d3e26f6cd7118cc6\": not found" containerID="06f939328db71c0b4b85480bd6791efd9657fcf8b66f4689d3e26f6cd7118cc6"
Jan 29 16:27:00.283546 kubelet[1845]: I0129 16:27:00.283362    1845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"06f939328db71c0b4b85480bd6791efd9657fcf8b66f4689d3e26f6cd7118cc6"} err="failed to get container status \"06f939328db71c0b4b85480bd6791efd9657fcf8b66f4689d3e26f6cd7118cc6\": rpc error: code = NotFound desc = an error occurred when try to find container \"06f939328db71c0b4b85480bd6791efd9657fcf8b66f4689d3e26f6cd7118cc6\": not found"
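Each RemoveContainer above is followed by a ContainerStatus probe that now fails with gRPC NotFound, which the kubelet records but treats as "already gone" rather than as a fatal error. A minimal sketch of that status-code check using the standard gRPC status/codes packages; the surrounding logic is illustrative:

    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // alreadyGone reports whether a runtime error simply means the container no longer exists.
    func alreadyGone(err error) bool {
        return status.Code(err) == codes.NotFound
    }

    func main() {
        // Shape of the error seen in the log: NotFound from the ContainerStatus RPC.
        err := status.Error(codes.NotFound, "an error occurred when try to find container: not found")
        fmt.Println("treat as deleted:", alreadyGone(err)) // true: nothing left to clean up
    }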
Jan 29 16:27:00.614036 systemd[1]: var-lib-kubelet-pods-4c8b48e7\x2de1fe\x2d428d\x2d96e5\x2d4c39db533bf5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dr9tp2.mount: Deactivated successfully.
Jan 29 16:27:00.614205 systemd[1]: var-lib-kubelet-pods-4c8b48e7\x2de1fe\x2d428d\x2d96e5\x2d4c39db533bf5-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jan 29 16:27:01.253339 kubelet[1845]: E0129 16:27:01.253264    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:27:01.796883 kubelet[1845]: I0129 16:27:01.796836    1845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c8b48e7-e1fe-428d-96e5-4c39db533bf5" path="/var/lib/kubelet/pods/4c8b48e7-e1fe-428d-96e5-4c39db533bf5/volumes"
Jan 29 16:27:02.254094 kubelet[1845]: E0129 16:27:02.254057    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:27:02.310632 kubelet[1845]: I0129 16:27:02.310589    1845 memory_manager.go:355] "RemoveStaleState removing state" podUID="4c8b48e7-e1fe-428d-96e5-4c39db533bf5" containerName="cilium-agent"
Jan 29 16:27:02.317565 systemd[1]: Created slice kubepods-besteffort-pod27aa0014_9a24_4838_9871_92430a755c95.slice - libcontainer container kubepods-besteffort-pod27aa0014_9a24_4838_9871_92430a755c95.slice.
Jan 29 16:27:02.331814 kubelet[1845]: W0129 16:27:02.331779    1845 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:10.0.0.148" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.0.0.148' and this object
Jan 29 16:27:02.331937 kubelet[1845]: E0129 16:27:02.331826    1845 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:10.0.0.148\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '10.0.0.148' and this object" logger="UnhandledError"
Jan 29 16:27:02.335871 systemd[1]: Created slice kubepods-burstable-pod9201f88d_b81a_421b_965b_39262a0a18a8.slice - libcontainer container kubepods-burstable-pod9201f88d_b81a_421b_965b_39262a0a18a8.slice.
Jan 29 16:27:02.414935 kubelet[1845]: I0129 16:27:02.414847    1845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/27aa0014-9a24-4838-9871-92430a755c95-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-p26kl\" (UID: \"27aa0014-9a24-4838-9871-92430a755c95\") " pod="kube-system/cilium-operator-6c4d7847fc-p26kl"
Jan 29 16:27:02.414935 kubelet[1845]: I0129 16:27:02.414895    1845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9201f88d-b81a-421b-965b-39262a0a18a8-cilium-run\") pod \"cilium-jq7xb\" (UID: \"9201f88d-b81a-421b-965b-39262a0a18a8\") " pod="kube-system/cilium-jq7xb"
Jan 29 16:27:02.414935 kubelet[1845]: I0129 16:27:02.414935    1845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9201f88d-b81a-421b-965b-39262a0a18a8-bpf-maps\") pod \"cilium-jq7xb\" (UID: \"9201f88d-b81a-421b-965b-39262a0a18a8\") " pod="kube-system/cilium-jq7xb"
Jan 29 16:27:02.414935 kubelet[1845]: I0129 16:27:02.414951    1845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9201f88d-b81a-421b-965b-39262a0a18a8-etc-cni-netd\") pod \"cilium-jq7xb\" (UID: \"9201f88d-b81a-421b-965b-39262a0a18a8\") " pod="kube-system/cilium-jq7xb"
Jan 29 16:27:02.415200 kubelet[1845]: I0129 16:27:02.414964    1845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9201f88d-b81a-421b-965b-39262a0a18a8-lib-modules\") pod \"cilium-jq7xb\" (UID: \"9201f88d-b81a-421b-965b-39262a0a18a8\") " pod="kube-system/cilium-jq7xb"
Jan 29 16:27:02.415200 kubelet[1845]: I0129 16:27:02.415002    1845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9201f88d-b81a-421b-965b-39262a0a18a8-cilium-ipsec-secrets\") pod \"cilium-jq7xb\" (UID: \"9201f88d-b81a-421b-965b-39262a0a18a8\") " pod="kube-system/cilium-jq7xb"
Jan 29 16:27:02.415200 kubelet[1845]: I0129 16:27:02.415042    1845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9201f88d-b81a-421b-965b-39262a0a18a8-host-proc-sys-kernel\") pod \"cilium-jq7xb\" (UID: \"9201f88d-b81a-421b-965b-39262a0a18a8\") " pod="kube-system/cilium-jq7xb"
Jan 29 16:27:02.415200 kubelet[1845]: I0129 16:27:02.415065    1845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9201f88d-b81a-421b-965b-39262a0a18a8-cilium-cgroup\") pod \"cilium-jq7xb\" (UID: \"9201f88d-b81a-421b-965b-39262a0a18a8\") " pod="kube-system/cilium-jq7xb"
Jan 29 16:27:02.415200 kubelet[1845]: I0129 16:27:02.415082    1845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9201f88d-b81a-421b-965b-39262a0a18a8-cni-path\") pod \"cilium-jq7xb\" (UID: \"9201f88d-b81a-421b-965b-39262a0a18a8\") " pod="kube-system/cilium-jq7xb"
Jan 29 16:27:02.415200 kubelet[1845]: I0129 16:27:02.415104    1845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9201f88d-b81a-421b-965b-39262a0a18a8-host-proc-sys-net\") pod \"cilium-jq7xb\" (UID: \"9201f88d-b81a-421b-965b-39262a0a18a8\") " pod="kube-system/cilium-jq7xb"
Jan 29 16:27:02.415369 kubelet[1845]: I0129 16:27:02.415133    1845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb2dl\" (UniqueName: \"kubernetes.io/projected/9201f88d-b81a-421b-965b-39262a0a18a8-kube-api-access-rb2dl\") pod \"cilium-jq7xb\" (UID: \"9201f88d-b81a-421b-965b-39262a0a18a8\") " pod="kube-system/cilium-jq7xb"
Jan 29 16:27:02.415369 kubelet[1845]: I0129 16:27:02.415167    1845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b48nm\" (UniqueName: \"kubernetes.io/projected/27aa0014-9a24-4838-9871-92430a755c95-kube-api-access-b48nm\") pod \"cilium-operator-6c4d7847fc-p26kl\" (UID: \"27aa0014-9a24-4838-9871-92430a755c95\") " pod="kube-system/cilium-operator-6c4d7847fc-p26kl"
Jan 29 16:27:02.415369 kubelet[1845]: I0129 16:27:02.415192    1845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9201f88d-b81a-421b-965b-39262a0a18a8-hostproc\") pod \"cilium-jq7xb\" (UID: \"9201f88d-b81a-421b-965b-39262a0a18a8\") " pod="kube-system/cilium-jq7xb"
Jan 29 16:27:02.415369 kubelet[1845]: I0129 16:27:02.415208    1845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9201f88d-b81a-421b-965b-39262a0a18a8-xtables-lock\") pod \"cilium-jq7xb\" (UID: \"9201f88d-b81a-421b-965b-39262a0a18a8\") " pod="kube-system/cilium-jq7xb"
Jan 29 16:27:02.415369 kubelet[1845]: I0129 16:27:02.415225    1845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9201f88d-b81a-421b-965b-39262a0a18a8-clustermesh-secrets\") pod \"cilium-jq7xb\" (UID: \"9201f88d-b81a-421b-965b-39262a0a18a8\") " pod="kube-system/cilium-jq7xb"
Jan 29 16:27:02.415505 kubelet[1845]: I0129 16:27:02.415247    1845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9201f88d-b81a-421b-965b-39262a0a18a8-cilium-config-path\") pod \"cilium-jq7xb\" (UID: \"9201f88d-b81a-421b-965b-39262a0a18a8\") " pod="kube-system/cilium-jq7xb"
Jan 29 16:27:02.415505 kubelet[1845]: I0129 16:27:02.415261    1845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9201f88d-b81a-421b-965b-39262a0a18a8-hubble-tls\") pod \"cilium-jq7xb\" (UID: \"9201f88d-b81a-421b-965b-39262a0a18a8\") " pod="kube-system/cilium-jq7xb"
Jan 29 16:27:02.620858 containerd[1511]: time="2025-01-29T16:27:02.620737089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-p26kl,Uid:27aa0014-9a24-4838-9871-92430a755c95,Namespace:kube-system,Attempt:0,}"
Jan 29 16:27:02.641179 containerd[1511]: time="2025-01-29T16:27:02.641063772Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 16:27:02.641387 containerd[1511]: time="2025-01-29T16:27:02.641222019Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 16:27:02.641387 containerd[1511]: time="2025-01-29T16:27:02.641242377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:27:02.641387 containerd[1511]: time="2025-01-29T16:27:02.641323901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:27:02.662055 systemd[1]: Started cri-containerd-743a76f9ffcb43a1ec1a9148a500a9ce0e3bccafb8ce16d75fbce6cd0fd59570.scope - libcontainer container 743a76f9ffcb43a1ec1a9148a500a9ce0e3bccafb8ce16d75fbce6cd0fd59570.
Jan 29 16:27:02.697189 containerd[1511]: time="2025-01-29T16:27:02.697150596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-p26kl,Uid:27aa0014-9a24-4838-9871-92430a755c95,Namespace:kube-system,Attempt:0,} returns sandbox id \"743a76f9ffcb43a1ec1a9148a500a9ce0e3bccafb8ce16d75fbce6cd0fd59570\""
Jan 29 16:27:02.698589 containerd[1511]: time="2025-01-29T16:27:02.698569450Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jan 29 16:27:02.775405 kubelet[1845]: E0129 16:27:02.775335    1845 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
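The "Container runtime network not ready ... cni plugin not initialized" condition follows directly from the earlier removal of /etc/cni/net.d/05-cilium.conf: with no CNI config left on disk, containerd cannot build a pod network and the kubelet reports NetworkPluginNotReady until the new Cilium agent writes its config back. A minimal sketch of the on-disk check that gates this state (paths taken from the log; the real loader also parses and validates the files):

    package main

    import (
        "fmt"
        "path/filepath"
    )

    func main() {
        confDir := "/etc/cni/net.d"
        confs, _ := filepath.Glob(filepath.Join(confDir, "*.conf"))
        lists, _ := filepath.Glob(filepath.Join(confDir, "*.conflist"))

        if len(confs)+len(lists) == 0 {
            fmt.Printf("no network config found in %s: cni plugin not initialized\n", confDir)
            return
        }
        fmt.Println("CNI config candidates:", append(confs, lists...))
    }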
Jan 29 16:27:03.254595 kubelet[1845]: E0129 16:27:03.254521    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:27:03.544900 containerd[1511]: time="2025-01-29T16:27:03.544803944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jq7xb,Uid:9201f88d-b81a-421b-965b-39262a0a18a8,Namespace:kube-system,Attempt:0,}"
Jan 29 16:27:03.563652 containerd[1511]: time="2025-01-29T16:27:03.563033498Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 16:27:03.563652 containerd[1511]: time="2025-01-29T16:27:03.563610221Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 16:27:03.563652 containerd[1511]: time="2025-01-29T16:27:03.563622283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:27:03.563859 containerd[1511]: time="2025-01-29T16:27:03.563699318Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:27:03.585041 systemd[1]: Started cri-containerd-ec58032e14e7fc84444b5483eca3902bf8fad294bef421481132cd4a504f2a99.scope - libcontainer container ec58032e14e7fc84444b5483eca3902bf8fad294bef421481132cd4a504f2a99.
Jan 29 16:27:03.604541 containerd[1511]: time="2025-01-29T16:27:03.604503112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jq7xb,Uid:9201f88d-b81a-421b-965b-39262a0a18a8,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec58032e14e7fc84444b5483eca3902bf8fad294bef421481132cd4a504f2a99\""
Jan 29 16:27:03.606613 containerd[1511]: time="2025-01-29T16:27:03.606589649Z" level=info msg="CreateContainer within sandbox \"ec58032e14e7fc84444b5483eca3902bf8fad294bef421481132cd4a504f2a99\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 29 16:27:03.901073 containerd[1511]: time="2025-01-29T16:27:03.900936387Z" level=info msg="CreateContainer within sandbox \"ec58032e14e7fc84444b5483eca3902bf8fad294bef421481132cd4a504f2a99\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d449ce5f5654662a60c3d9a4deecd7d41f689bb63ebe2cf005271f21587c7468\""
Jan 29 16:27:03.901537 containerd[1511]: time="2025-01-29T16:27:03.901496840Z" level=info msg="StartContainer for \"d449ce5f5654662a60c3d9a4deecd7d41f689bb63ebe2cf005271f21587c7468\""
Jan 29 16:27:03.927041 systemd[1]: Started cri-containerd-d449ce5f5654662a60c3d9a4deecd7d41f689bb63ebe2cf005271f21587c7468.scope - libcontainer container d449ce5f5654662a60c3d9a4deecd7d41f689bb63ebe2cf005271f21587c7468.
Jan 29 16:27:03.951167 containerd[1511]: time="2025-01-29T16:27:03.951127846Z" level=info msg="StartContainer for \"d449ce5f5654662a60c3d9a4deecd7d41f689bb63ebe2cf005271f21587c7468\" returns successfully"
Jan 29 16:27:03.958451 systemd[1]: cri-containerd-d449ce5f5654662a60c3d9a4deecd7d41f689bb63ebe2cf005271f21587c7468.scope: Deactivated successfully.
Jan 29 16:27:03.988092 containerd[1511]: time="2025-01-29T16:27:03.988026067Z" level=info msg="shim disconnected" id=d449ce5f5654662a60c3d9a4deecd7d41f689bb63ebe2cf005271f21587c7468 namespace=k8s.io
Jan 29 16:27:03.988092 containerd[1511]: time="2025-01-29T16:27:03.988076392Z" level=warning msg="cleaning up after shim disconnected" id=d449ce5f5654662a60c3d9a4deecd7d41f689bb63ebe2cf005271f21587c7468 namespace=k8s.io
Jan 29 16:27:03.988092 containerd[1511]: time="2025-01-29T16:27:03.988084457Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:27:04.255392 kubelet[1845]: E0129 16:27:04.255338    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:27:04.268615 containerd[1511]: time="2025-01-29T16:27:04.268570138Z" level=info msg="CreateContainer within sandbox \"ec58032e14e7fc84444b5483eca3902bf8fad294bef421481132cd4a504f2a99\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 29 16:27:04.281640 containerd[1511]: time="2025-01-29T16:27:04.281570764Z" level=info msg="CreateContainer within sandbox \"ec58032e14e7fc84444b5483eca3902bf8fad294bef421481132cd4a504f2a99\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e6cace988c0e81a2faa54ae692667098b0b2db14679d67fc8f49dd0a4d753b19\""
Jan 29 16:27:04.282098 containerd[1511]: time="2025-01-29T16:27:04.282076985Z" level=info msg="StartContainer for \"e6cace988c0e81a2faa54ae692667098b0b2db14679d67fc8f49dd0a4d753b19\""
Jan 29 16:27:04.308071 systemd[1]: Started cri-containerd-e6cace988c0e81a2faa54ae692667098b0b2db14679d67fc8f49dd0a4d753b19.scope - libcontainer container e6cace988c0e81a2faa54ae692667098b0b2db14679d67fc8f49dd0a4d753b19.
Jan 29 16:27:04.345842 systemd[1]: cri-containerd-e6cace988c0e81a2faa54ae692667098b0b2db14679d67fc8f49dd0a4d753b19.scope: Deactivated successfully.
Jan 29 16:27:04.393703 containerd[1511]: time="2025-01-29T16:27:04.393640275Z" level=info msg="StartContainer for \"e6cace988c0e81a2faa54ae692667098b0b2db14679d67fc8f49dd0a4d753b19\" returns successfully"
Jan 29 16:27:04.429951 containerd[1511]: time="2025-01-29T16:27:04.429868104Z" level=info msg="shim disconnected" id=e6cace988c0e81a2faa54ae692667098b0b2db14679d67fc8f49dd0a4d753b19 namespace=k8s.io
Jan 29 16:27:04.429951 containerd[1511]: time="2025-01-29T16:27:04.429953695Z" level=warning msg="cleaning up after shim disconnected" id=e6cace988c0e81a2faa54ae692667098b0b2db14679d67fc8f49dd0a4d753b19 namespace=k8s.io
Jan 29 16:27:04.430163 containerd[1511]: time="2025-01-29T16:27:04.429965346Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:27:05.256168 kubelet[1845]: E0129 16:27:05.256098    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:27:05.271010 containerd[1511]: time="2025-01-29T16:27:05.270967081Z" level=info msg="CreateContainer within sandbox \"ec58032e14e7fc84444b5483eca3902bf8fad294bef421481132cd4a504f2a99\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 29 16:27:05.482258 containerd[1511]: time="2025-01-29T16:27:05.482211763Z" level=info msg="CreateContainer within sandbox \"ec58032e14e7fc84444b5483eca3902bf8fad294bef421481132cd4a504f2a99\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"85c3f2df2b69279792b1d3da36d3794beac1f97d8ba98d187aa7f78360d3c1c3\""
Jan 29 16:27:05.482749 containerd[1511]: time="2025-01-29T16:27:05.482721200Z" level=info msg="StartContainer for \"85c3f2df2b69279792b1d3da36d3794beac1f97d8ba98d187aa7f78360d3c1c3\""
Jan 29 16:27:05.509122 systemd[1]: Started cri-containerd-85c3f2df2b69279792b1d3da36d3794beac1f97d8ba98d187aa7f78360d3c1c3.scope - libcontainer container 85c3f2df2b69279792b1d3da36d3794beac1f97d8ba98d187aa7f78360d3c1c3.
Jan 29 16:27:05.539690 containerd[1511]: time="2025-01-29T16:27:05.539640986Z" level=info msg="StartContainer for \"85c3f2df2b69279792b1d3da36d3794beac1f97d8ba98d187aa7f78360d3c1c3\" returns successfully"
Jan 29 16:27:05.540838 systemd[1]: cri-containerd-85c3f2df2b69279792b1d3da36d3794beac1f97d8ba98d187aa7f78360d3c1c3.scope: Deactivated successfully.
Jan 29 16:27:05.560219 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-85c3f2df2b69279792b1d3da36d3794beac1f97d8ba98d187aa7f78360d3c1c3-rootfs.mount: Deactivated successfully.
Jan 29 16:27:05.564400 containerd[1511]: time="2025-01-29T16:27:05.564335410Z" level=info msg="shim disconnected" id=85c3f2df2b69279792b1d3da36d3794beac1f97d8ba98d187aa7f78360d3c1c3 namespace=k8s.io
Jan 29 16:27:05.564496 containerd[1511]: time="2025-01-29T16:27:05.564402105Z" level=warning msg="cleaning up after shim disconnected" id=85c3f2df2b69279792b1d3da36d3794beac1f97d8ba98d187aa7f78360d3c1c3 namespace=k8s.io
Jan 29 16:27:05.564496 containerd[1511]: time="2025-01-29T16:27:05.564414980Z" level=info msg="cleaning up dead shim" namespace=k8s.io
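mount-cgroup, apply-sysctl-overwrites and mount-bpf-fs above are Cilium init containers; each runs to completion, so its scope deactivates and the shim disconnects right after StartContainer returns. As one concrete example, mount-bpf-fs ensures a BPF filesystem is mounted at /sys/fs/bpf; a minimal sketch of that step using golang.org/x/sys/unix, not Cilium's actual implementation (which first checks existing mounts):

    package main

    import (
        "log"
        "os"

        "golang.org/x/sys/unix"
    )

    func main() {
        const target = "/sys/fs/bpf"
        if err := os.MkdirAll(target, 0o755); err != nil {
            log.Fatal(err)
        }
        // A production init container would first scan /proc/self/mounts and skip
        // the mount if a bpf filesystem is already present at the target.
        if err := unix.Mount("bpffs", target, "bpf", 0, ""); err != nil {
            log.Fatalf("mounting bpffs at %s: %v", target, err)
        }
        log.Println("bpffs mounted at", target)
    }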
Jan 29 16:27:05.779098 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1987728469.mount: Deactivated successfully.
Jan 29 16:27:06.256830 kubelet[1845]: E0129 16:27:06.256764    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:27:06.275469 containerd[1511]: time="2025-01-29T16:27:06.275425585Z" level=info msg="CreateContainer within sandbox \"ec58032e14e7fc84444b5483eca3902bf8fad294bef421481132cd4a504f2a99\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 29 16:27:06.531245 containerd[1511]: time="2025-01-29T16:27:06.530743702Z" level=info msg="CreateContainer within sandbox \"ec58032e14e7fc84444b5483eca3902bf8fad294bef421481132cd4a504f2a99\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2a0b3d65a810c7c872aceaa93c984dc4bdc2b36fa5761d937952ea7021b62d1b\""
Jan 29 16:27:06.531620 containerd[1511]: time="2025-01-29T16:27:06.531448414Z" level=info msg="StartContainer for \"2a0b3d65a810c7c872aceaa93c984dc4bdc2b36fa5761d937952ea7021b62d1b\""
Jan 29 16:27:06.566056 systemd[1]: Started cri-containerd-2a0b3d65a810c7c872aceaa93c984dc4bdc2b36fa5761d937952ea7021b62d1b.scope - libcontainer container 2a0b3d65a810c7c872aceaa93c984dc4bdc2b36fa5761d937952ea7021b62d1b.
Jan 29 16:27:06.592887 systemd[1]: cri-containerd-2a0b3d65a810c7c872aceaa93c984dc4bdc2b36fa5761d937952ea7021b62d1b.scope: Deactivated successfully.
Jan 29 16:27:06.675466 containerd[1511]: time="2025-01-29T16:27:06.675195761Z" level=info msg="StartContainer for \"2a0b3d65a810c7c872aceaa93c984dc4bdc2b36fa5761d937952ea7021b62d1b\" returns successfully"
Jan 29 16:27:06.697316 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2a0b3d65a810c7c872aceaa93c984dc4bdc2b36fa5761d937952ea7021b62d1b-rootfs.mount: Deactivated successfully.
Jan 29 16:27:06.934568 containerd[1511]: time="2025-01-29T16:27:06.934401433Z" level=info msg="shim disconnected" id=2a0b3d65a810c7c872aceaa93c984dc4bdc2b36fa5761d937952ea7021b62d1b namespace=k8s.io
Jan 29 16:27:06.934568 containerd[1511]: time="2025-01-29T16:27:06.934455655Z" level=warning msg="cleaning up after shim disconnected" id=2a0b3d65a810c7c872aceaa93c984dc4bdc2b36fa5761d937952ea7021b62d1b namespace=k8s.io
Jan 29 16:27:06.934568 containerd[1511]: time="2025-01-29T16:27:06.934463890Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:27:06.957547 containerd[1511]: time="2025-01-29T16:27:06.957491243Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:27:06.958182 containerd[1511]: time="2025-01-29T16:27:06.958122158Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Jan 29 16:27:06.959163 containerd[1511]: time="2025-01-29T16:27:06.959126512Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:27:06.960411 containerd[1511]: time="2025-01-29T16:27:06.960371930Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.261777232s"
Jan 29 16:27:06.960450 containerd[1511]: time="2025-01-29T16:27:06.960409711Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Jan 29 16:27:06.962263 containerd[1511]: time="2025-01-29T16:27:06.962155708Z" level=info msg="CreateContainer within sandbox \"743a76f9ffcb43a1ec1a9148a500a9ce0e3bccafb8ce16d75fbce6cd0fd59570\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jan 29 16:27:06.974321 containerd[1511]: time="2025-01-29T16:27:06.974289163Z" level=info msg="CreateContainer within sandbox \"743a76f9ffcb43a1ec1a9148a500a9ce0e3bccafb8ce16d75fbce6cd0fd59570\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b6c69a6a00cac382d4d8e6d0984e34649a72e9d39585ff5ecdce4cdddc12d283\""
Jan 29 16:27:06.974848 containerd[1511]: time="2025-01-29T16:27:06.974815931Z" level=info msg="StartContainer for \"b6c69a6a00cac382d4d8e6d0984e34649a72e9d39585ff5ecdce4cdddc12d283\""
Jan 29 16:27:07.002042 systemd[1]: Started cri-containerd-b6c69a6a00cac382d4d8e6d0984e34649a72e9d39585ff5ecdce4cdddc12d283.scope - libcontainer container b6c69a6a00cac382d4d8e6d0984e34649a72e9d39585ff5ecdce4cdddc12d283.
Jan 29 16:27:07.057963 containerd[1511]: time="2025-01-29T16:27:07.057890167Z" level=info msg="StartContainer for \"b6c69a6a00cac382d4d8e6d0984e34649a72e9d39585ff5ecdce4cdddc12d283\" returns successfully"
Jan 29 16:27:07.257865 kubelet[1845]: E0129 16:27:07.257812    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:27:07.280227 containerd[1511]: time="2025-01-29T16:27:07.280182721Z" level=info msg="CreateContainer within sandbox \"ec58032e14e7fc84444b5483eca3902bf8fad294bef421481132cd4a504f2a99\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 29 16:27:07.286820 kubelet[1845]: I0129 16:27:07.286757    1845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-p26kl" podStartSLOduration=1.023770928 podStartE2EDuration="5.286740149s" podCreationTimestamp="2025-01-29 16:27:02 +0000 UTC" firstStartedPulling="2025-01-29 16:27:02.698159169 +0000 UTC m=+66.625797637" lastFinishedPulling="2025-01-29 16:27:06.96112838 +0000 UTC m=+70.888766858" observedRunningTime="2025-01-29 16:27:07.28652744 +0000 UTC m=+71.214165918" watchObservedRunningTime="2025-01-29 16:27:07.286740149 +0000 UTC m=+71.214378627"
Jan 29 16:27:07.297404 containerd[1511]: time="2025-01-29T16:27:07.297364139Z" level=info msg="CreateContainer within sandbox \"ec58032e14e7fc84444b5483eca3902bf8fad294bef421481132cd4a504f2a99\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"84153af4759e8c152631305b5ba8d834d37219a6dafbf4b3ec5bbf2765d8d932\""
Jan 29 16:27:07.297846 containerd[1511]: time="2025-01-29T16:27:07.297811409Z" level=info msg="StartContainer for \"84153af4759e8c152631305b5ba8d834d37219a6dafbf4b3ec5bbf2765d8d932\""
Jan 29 16:27:07.330140 systemd[1]: Started cri-containerd-84153af4759e8c152631305b5ba8d834d37219a6dafbf4b3ec5bbf2765d8d932.scope - libcontainer container 84153af4759e8c152631305b5ba8d834d37219a6dafbf4b3ec5bbf2765d8d932.
Jan 29 16:27:07.360939 containerd[1511]: time="2025-01-29T16:27:07.360887176Z" level=info msg="StartContainer for \"84153af4759e8c152631305b5ba8d834d37219a6dafbf4b3ec5bbf2765d8d932\" returns successfully"
Jan 29 16:27:07.772944 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 29 16:27:08.258035 kubelet[1845]: E0129 16:27:08.257973    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:27:08.296413 kubelet[1845]: I0129 16:27:08.296299    1845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-jq7xb" podStartSLOduration=6.296278264 podStartE2EDuration="6.296278264s" podCreationTimestamp="2025-01-29 16:27:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:27:08.296148149 +0000 UTC m=+72.223786647" watchObservedRunningTime="2025-01-29 16:27:08.296278264 +0000 UTC m=+72.223916742"
Jan 29 16:27:09.259078 kubelet[1845]: E0129 16:27:09.259020    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:27:10.259759 kubelet[1845]: E0129 16:27:10.259712    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:27:10.974810 systemd-networkd[1445]: lxc_health: Link UP
Jan 29 16:27:10.975191 systemd-networkd[1445]: lxc_health: Gained carrier
Jan 29 16:27:11.260495 kubelet[1845]: E0129 16:27:11.260433    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:27:12.261470 kubelet[1845]: E0129 16:27:12.261390    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:27:12.835234 systemd-networkd[1445]: lxc_health: Gained IPv6LL
Jan 29 16:27:13.262321 kubelet[1845]: E0129 16:27:13.262267    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:27:14.263357 kubelet[1845]: E0129 16:27:14.263303    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:27:15.264389 kubelet[1845]: E0129 16:27:15.264038    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:27:16.264755 kubelet[1845]: E0129 16:27:16.264703    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:27:17.208017 kubelet[1845]: E0129 16:27:17.207976    1845 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:27:17.265014 kubelet[1845]: E0129 16:27:17.264972    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:27:18.265905 kubelet[1845]: E0129 16:27:18.265843    1845 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"