Dec 13 01:28:06.909136 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024 Dec 13 01:28:06.909157 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 01:28:06.909168 kernel: BIOS-provided physical RAM map: Dec 13 01:28:06.909175 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Dec 13 01:28:06.909181 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Dec 13 01:28:06.909187 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Dec 13 01:28:06.909194 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Dec 13 01:28:06.909200 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Dec 13 01:28:06.909206 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Dec 13 01:28:06.909212 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Dec 13 01:28:06.909221 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Dec 13 01:28:06.909227 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Dec 13 01:28:06.909233 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Dec 13 01:28:06.909239 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Dec 13 01:28:06.909247 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Dec 13 01:28:06.909253 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Dec 13 01:28:06.909262 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Dec 13 01:28:06.909269 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Dec 13 01:28:06.909275 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Dec 13 01:28:06.909282 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Dec 13 01:28:06.909288 kernel: NX (Execute Disable) protection: active Dec 13 01:28:06.909295 kernel: APIC: Static calls initialized Dec 13 01:28:06.909301 kernel: efi: EFI v2.7 by EDK II Dec 13 01:28:06.909308 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118 Dec 13 01:28:06.909315 kernel: SMBIOS 2.8 present. 
Dec 13 01:28:06.909321 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Dec 13 01:28:06.909328 kernel: Hypervisor detected: KVM Dec 13 01:28:06.909337 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 13 01:28:06.909343 kernel: kvm-clock: using sched offset of 4228084972 cycles Dec 13 01:28:06.909350 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 13 01:28:06.909357 kernel: tsc: Detected 2794.748 MHz processor Dec 13 01:28:06.909364 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 13 01:28:06.909372 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 13 01:28:06.909379 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Dec 13 01:28:06.909385 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Dec 13 01:28:06.909392 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 13 01:28:06.909401 kernel: Using GB pages for direct mapping Dec 13 01:28:06.909408 kernel: Secure boot disabled Dec 13 01:28:06.909415 kernel: ACPI: Early table checksum verification disabled Dec 13 01:28:06.909422 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Dec 13 01:28:06.909432 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Dec 13 01:28:06.909439 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:28:06.909447 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:28:06.909456 kernel: ACPI: FACS 0x000000009CBDD000 000040 Dec 13 01:28:06.909463 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:28:06.909470 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:28:06.909477 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:28:06.909484 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:28:06.909491 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Dec 13 01:28:06.909499 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Dec 13 01:28:06.909508 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7] Dec 13 01:28:06.909517 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Dec 13 01:28:06.909525 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Dec 13 01:28:06.909533 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Dec 13 01:28:06.909541 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Dec 13 01:28:06.909548 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Dec 13 01:28:06.909555 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Dec 13 01:28:06.909562 kernel: No NUMA configuration found Dec 13 01:28:06.909569 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Dec 13 01:28:06.909579 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Dec 13 01:28:06.909586 kernel: Zone ranges: Dec 13 01:28:06.909593 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 01:28:06.909600 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Dec 13 01:28:06.909607 kernel: Normal empty Dec 13 01:28:06.909614 kernel: Movable zone start for each node Dec 13 01:28:06.909621 kernel: Early memory node ranges Dec 13 01:28:06.909628 
kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Dec 13 01:28:06.909635 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Dec 13 01:28:06.909642 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Dec 13 01:28:06.909652 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Dec 13 01:28:06.909659 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Dec 13 01:28:06.909666 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Dec 13 01:28:06.909673 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Dec 13 01:28:06.909680 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 01:28:06.909687 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Dec 13 01:28:06.909694 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Dec 13 01:28:06.909701 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 01:28:06.909708 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Dec 13 01:28:06.909717 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Dec 13 01:28:06.909725 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Dec 13 01:28:06.909732 kernel: ACPI: PM-Timer IO Port: 0x608 Dec 13 01:28:06.909738 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 13 01:28:06.909745 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 13 01:28:06.909752 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Dec 13 01:28:06.909759 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 13 01:28:06.909766 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 13 01:28:06.909773 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 13 01:28:06.909783 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 13 01:28:06.909790 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 01:28:06.909797 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Dec 13 01:28:06.909804 kernel: TSC deadline timer available Dec 13 01:28:06.909811 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Dec 13 01:28:06.909818 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Dec 13 01:28:06.909825 kernel: kvm-guest: KVM setup pv remote TLB flush Dec 13 01:28:06.909832 kernel: kvm-guest: setup PV sched yield Dec 13 01:28:06.909839 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Dec 13 01:28:06.909846 kernel: Booting paravirtualized kernel on KVM Dec 13 01:28:06.909855 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 01:28:06.909862 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Dec 13 01:28:06.909869 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288 Dec 13 01:28:06.909876 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152 Dec 13 01:28:06.909883 kernel: pcpu-alloc: [0] 0 1 2 3 Dec 13 01:28:06.909890 kernel: kvm-guest: PV spinlocks enabled Dec 13 01:28:06.909897 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 13 01:28:06.909905 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 
01:28:06.909915 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 01:28:06.909922 kernel: random: crng init done Dec 13 01:28:06.909938 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 01:28:06.909946 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 01:28:06.909953 kernel: Fallback order for Node 0: 0 Dec 13 01:28:06.909960 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Dec 13 01:28:06.909967 kernel: Policy zone: DMA32 Dec 13 01:28:06.909975 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 01:28:06.909982 kernel: Memory: 2395612K/2567000K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 171128K reserved, 0K cma-reserved) Dec 13 01:28:06.910034 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Dec 13 01:28:06.910041 kernel: ftrace: allocating 37902 entries in 149 pages Dec 13 01:28:06.910048 kernel: ftrace: allocated 149 pages with 4 groups Dec 13 01:28:06.910055 kernel: Dynamic Preempt: voluntary Dec 13 01:28:06.910070 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 13 01:28:06.910080 kernel: rcu: RCU event tracing is enabled. Dec 13 01:28:06.910088 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Dec 13 01:28:06.910096 kernel: Trampoline variant of Tasks RCU enabled. Dec 13 01:28:06.910103 kernel: Rude variant of Tasks RCU enabled. Dec 13 01:28:06.910110 kernel: Tracing variant of Tasks RCU enabled. Dec 13 01:28:06.910118 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 13 01:28:06.910125 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Dec 13 01:28:06.910135 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Dec 13 01:28:06.910142 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 13 01:28:06.910149 kernel: Console: colour dummy device 80x25 Dec 13 01:28:06.910157 kernel: printk: console [ttyS0] enabled Dec 13 01:28:06.910164 kernel: ACPI: Core revision 20230628 Dec 13 01:28:06.910174 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Dec 13 01:28:06.910181 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 01:28:06.910189 kernel: x2apic enabled Dec 13 01:28:06.910196 kernel: APIC: Switched APIC routing to: physical x2apic Dec 13 01:28:06.910204 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Dec 13 01:28:06.910211 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Dec 13 01:28:06.910218 kernel: kvm-guest: setup PV IPIs Dec 13 01:28:06.910226 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Dec 13 01:28:06.910233 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Dec 13 01:28:06.910243 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Dec 13 01:28:06.910250 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Dec 13 01:28:06.910258 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Dec 13 01:28:06.910265 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Dec 13 01:28:06.910273 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 01:28:06.910280 kernel: Spectre V2 : Mitigation: Retpolines Dec 13 01:28:06.910287 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 01:28:06.910295 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Dec 13 01:28:06.910302 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Dec 13 01:28:06.910312 kernel: RETBleed: Mitigation: untrained return thunk Dec 13 01:28:06.910319 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 13 01:28:06.910327 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Dec 13 01:28:06.910335 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Dec 13 01:28:06.910343 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Dec 13 01:28:06.910350 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Dec 13 01:28:06.910358 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 13 01:28:06.910365 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 13 01:28:06.910375 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 13 01:28:06.910382 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 13 01:28:06.910390 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Dec 13 01:28:06.910397 kernel: Freeing SMP alternatives memory: 32K Dec 13 01:28:06.910405 kernel: pid_max: default: 32768 minimum: 301 Dec 13 01:28:06.910412 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Dec 13 01:28:06.910419 kernel: landlock: Up and running. Dec 13 01:28:06.910427 kernel: SELinux: Initializing. Dec 13 01:28:06.910434 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 01:28:06.910444 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 01:28:06.910451 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Dec 13 01:28:06.910459 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Dec 13 01:28:06.910466 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Dec 13 01:28:06.910474 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Dec 13 01:28:06.910481 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Dec 13 01:28:06.910489 kernel: ... version: 0 Dec 13 01:28:06.910496 kernel: ... bit width: 48 Dec 13 01:28:06.910504 kernel: ... generic registers: 6 Dec 13 01:28:06.910513 kernel: ... value mask: 0000ffffffffffff Dec 13 01:28:06.910521 kernel: ... max period: 00007fffffffffff Dec 13 01:28:06.910528 kernel: ... fixed-purpose events: 0 Dec 13 01:28:06.910535 kernel: ... 
event mask: 000000000000003f Dec 13 01:28:06.910543 kernel: signal: max sigframe size: 1776 Dec 13 01:28:06.910550 kernel: rcu: Hierarchical SRCU implementation. Dec 13 01:28:06.910558 kernel: rcu: Max phase no-delay instances is 400. Dec 13 01:28:06.910565 kernel: smp: Bringing up secondary CPUs ... Dec 13 01:28:06.910572 kernel: smpboot: x86: Booting SMP configuration: Dec 13 01:28:06.910582 kernel: .... node #0, CPUs: #1 #2 #3 Dec 13 01:28:06.910589 kernel: smp: Brought up 1 node, 4 CPUs Dec 13 01:28:06.910597 kernel: smpboot: Max logical packages: 1 Dec 13 01:28:06.910604 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Dec 13 01:28:06.910611 kernel: devtmpfs: initialized Dec 13 01:28:06.910619 kernel: x86/mm: Memory block size: 128MB Dec 13 01:28:06.910626 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Dec 13 01:28:06.910634 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Dec 13 01:28:06.910641 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Dec 13 01:28:06.910651 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Dec 13 01:28:06.910659 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Dec 13 01:28:06.910666 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 01:28:06.910674 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Dec 13 01:28:06.910681 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 01:28:06.910689 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 01:28:06.910697 kernel: audit: initializing netlink subsys (disabled) Dec 13 01:28:06.910704 kernel: audit: type=2000 audit(1734053286.915:1): state=initialized audit_enabled=0 res=1 Dec 13 01:28:06.910711 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 01:28:06.910721 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 01:28:06.910729 kernel: cpuidle: using governor menu Dec 13 01:28:06.910736 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 01:28:06.910743 kernel: dca service started, version 1.12.1 Dec 13 01:28:06.910751 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Dec 13 01:28:06.910759 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Dec 13 01:28:06.910766 kernel: PCI: Using configuration type 1 for base access Dec 13 01:28:06.910773 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Dec 13 01:28:06.910781 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 01:28:06.910791 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Dec 13 01:28:06.910798 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 01:28:06.910805 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 13 01:28:06.910813 kernel: ACPI: Added _OSI(Module Device) Dec 13 01:28:06.910820 kernel: ACPI: Added _OSI(Processor Device) Dec 13 01:28:06.910827 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 01:28:06.910835 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 01:28:06.910842 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 01:28:06.910850 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Dec 13 01:28:06.910859 kernel: ACPI: Interpreter enabled Dec 13 01:28:06.910867 kernel: ACPI: PM: (supports S0 S3 S5) Dec 13 01:28:06.910874 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 01:28:06.910882 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 01:28:06.910889 kernel: PCI: Using E820 reservations for host bridge windows Dec 13 01:28:06.910896 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Dec 13 01:28:06.910904 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 01:28:06.911128 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 01:28:06.911280 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Dec 13 01:28:06.911402 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Dec 13 01:28:06.911412 kernel: PCI host bridge to bus 0000:00 Dec 13 01:28:06.911536 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 13 01:28:06.911647 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 13 01:28:06.911757 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 13 01:28:06.911867 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Dec 13 01:28:06.912003 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Dec 13 01:28:06.912116 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Dec 13 01:28:06.912225 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 01:28:06.912361 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Dec 13 01:28:06.912490 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Dec 13 01:28:06.912612 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Dec 13 01:28:06.912739 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Dec 13 01:28:06.912858 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Dec 13 01:28:06.913000 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Dec 13 01:28:06.913125 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 13 01:28:06.913253 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Dec 13 01:28:06.913376 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Dec 13 01:28:06.913496 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Dec 13 01:28:06.913623 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Dec 13 01:28:06.913790 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Dec 13 01:28:06.913914 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Dec 13 
01:28:06.914057 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Dec 13 01:28:06.914178 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Dec 13 01:28:06.914305 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Dec 13 01:28:06.914430 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Dec 13 01:28:06.914550 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Dec 13 01:28:06.914669 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Dec 13 01:28:06.914791 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Dec 13 01:28:06.914918 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Dec 13 01:28:06.915059 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Dec 13 01:28:06.915210 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Dec 13 01:28:06.915339 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Dec 13 01:28:06.915460 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Dec 13 01:28:06.915587 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Dec 13 01:28:06.915750 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Dec 13 01:28:06.915763 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 13 01:28:06.915771 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 13 01:28:06.915779 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 13 01:28:06.915787 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 13 01:28:06.915799 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Dec 13 01:28:06.915806 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Dec 13 01:28:06.915814 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Dec 13 01:28:06.915822 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Dec 13 01:28:06.915830 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Dec 13 01:28:06.915838 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Dec 13 01:28:06.915845 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Dec 13 01:28:06.915853 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Dec 13 01:28:06.915861 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Dec 13 01:28:06.915870 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Dec 13 01:28:06.915878 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Dec 13 01:28:06.915886 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Dec 13 01:28:06.915893 kernel: iommu: Default domain type: Translated Dec 13 01:28:06.915901 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 01:28:06.915909 kernel: efivars: Registered efivars operations Dec 13 01:28:06.915916 kernel: PCI: Using ACPI for IRQ routing Dec 13 01:28:06.915935 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 13 01:28:06.915943 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Dec 13 01:28:06.915953 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Dec 13 01:28:06.915961 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Dec 13 01:28:06.915969 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Dec 13 01:28:06.916106 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Dec 13 01:28:06.916227 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Dec 13 01:28:06.916346 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 13 
01:28:06.916357 kernel: vgaarb: loaded Dec 13 01:28:06.916364 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Dec 13 01:28:06.916372 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Dec 13 01:28:06.916384 kernel: clocksource: Switched to clocksource kvm-clock Dec 13 01:28:06.916391 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 01:28:06.916399 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 01:28:06.916407 kernel: pnp: PnP ACPI init Dec 13 01:28:06.916538 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Dec 13 01:28:06.916549 kernel: pnp: PnP ACPI: found 6 devices Dec 13 01:28:06.916557 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 01:28:06.916565 kernel: NET: Registered PF_INET protocol family Dec 13 01:28:06.916576 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 01:28:06.916584 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 13 01:28:06.916591 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 01:28:06.916599 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 01:28:06.916607 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Dec 13 01:28:06.916614 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 13 01:28:06.916622 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 01:28:06.916630 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 01:28:06.916640 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 01:28:06.916648 kernel: NET: Registered PF_XDP protocol family Dec 13 01:28:06.916771 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Dec 13 01:28:06.916890 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Dec 13 01:28:06.917026 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 13 01:28:06.917137 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 13 01:28:06.917246 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 13 01:28:06.917355 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Dec 13 01:28:06.917469 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Dec 13 01:28:06.917578 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Dec 13 01:28:06.917587 kernel: PCI: CLS 0 bytes, default 64 Dec 13 01:28:06.917596 kernel: Initialise system trusted keyrings Dec 13 01:28:06.917605 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 13 01:28:06.917613 kernel: Key type asymmetric registered Dec 13 01:28:06.917621 kernel: Asymmetric key parser 'x509' registered Dec 13 01:28:06.917630 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Dec 13 01:28:06.917638 kernel: io scheduler mq-deadline registered Dec 13 01:28:06.917652 kernel: io scheduler kyber registered Dec 13 01:28:06.917660 kernel: io scheduler bfq registered Dec 13 01:28:06.917668 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 01:28:06.917676 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Dec 13 01:28:06.917684 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Dec 13 01:28:06.917691 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Dec 13 01:28:06.917699 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled 
Dec 13 01:28:06.917707 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 01:28:06.917714 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 01:28:06.917724 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 01:28:06.917732 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 01:28:06.917740 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 01:28:06.917863 kernel: rtc_cmos 00:04: RTC can wake from S4 Dec 13 01:28:06.918001 kernel: rtc_cmos 00:04: registered as rtc0 Dec 13 01:28:06.918118 kernel: rtc_cmos 00:04: setting system clock to 2024-12-13T01:28:06 UTC (1734053286) Dec 13 01:28:06.918230 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Dec 13 01:28:06.918240 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Dec 13 01:28:06.918253 kernel: efifb: probing for efifb Dec 13 01:28:06.918261 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Dec 13 01:28:06.918269 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Dec 13 01:28:06.918276 kernel: efifb: scrolling: redraw Dec 13 01:28:06.918284 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Dec 13 01:28:06.918292 kernel: Console: switching to colour frame buffer device 100x37 Dec 13 01:28:06.918318 kernel: fb0: EFI VGA frame buffer device Dec 13 01:28:06.918329 kernel: pstore: Using crash dump compression: deflate Dec 13 01:28:06.918337 kernel: pstore: Registered efi_pstore as persistent store backend Dec 13 01:28:06.918347 kernel: NET: Registered PF_INET6 protocol family Dec 13 01:28:06.918355 kernel: Segment Routing with IPv6 Dec 13 01:28:06.918363 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 01:28:06.918371 kernel: NET: Registered PF_PACKET protocol family Dec 13 01:28:06.918379 kernel: Key type dns_resolver registered Dec 13 01:28:06.918387 kernel: IPI shorthand broadcast: enabled Dec 13 01:28:06.918395 kernel: sched_clock: Marking stable (573002816, 114033045)->(733853447, -46817586) Dec 13 01:28:06.918403 kernel: registered taskstats version 1 Dec 13 01:28:06.918410 kernel: Loading compiled-in X.509 certificates Dec 13 01:28:06.918421 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0' Dec 13 01:28:06.918429 kernel: Key type .fscrypt registered Dec 13 01:28:06.918439 kernel: Key type fscrypt-provisioning registered Dec 13 01:28:06.918447 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 13 01:28:06.918455 kernel: ima: Allocated hash algorithm: sha1 Dec 13 01:28:06.918463 kernel: ima: No architecture policies found Dec 13 01:28:06.918471 kernel: clk: Disabling unused clocks Dec 13 01:28:06.918479 kernel: Freeing unused kernel image (initmem) memory: 42844K Dec 13 01:28:06.918487 kernel: Write protecting the kernel read-only data: 36864k Dec 13 01:28:06.918497 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K Dec 13 01:28:06.918505 kernel: Run /init as init process Dec 13 01:28:06.918513 kernel: with arguments: Dec 13 01:28:06.918520 kernel: /init Dec 13 01:28:06.918529 kernel: with environment: Dec 13 01:28:06.918536 kernel: HOME=/ Dec 13 01:28:06.918544 kernel: TERM=linux Dec 13 01:28:06.918554 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 01:28:06.918564 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:28:06.918577 systemd[1]: Detected virtualization kvm. Dec 13 01:28:06.918586 systemd[1]: Detected architecture x86-64. Dec 13 01:28:06.918594 systemd[1]: Running in initrd. Dec 13 01:28:06.918605 systemd[1]: No hostname configured, using default hostname. Dec 13 01:28:06.918615 systemd[1]: Hostname set to . Dec 13 01:28:06.918624 systemd[1]: Initializing machine ID from VM UUID. Dec 13 01:28:06.918632 systemd[1]: Queued start job for default target initrd.target. Dec 13 01:28:06.918641 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:28:06.918650 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:28:06.918659 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 01:28:06.918667 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:28:06.918676 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 01:28:06.918687 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 01:28:06.918697 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 01:28:06.918706 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 01:28:06.918715 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:28:06.918723 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:28:06.918732 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:28:06.918740 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:28:06.918751 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:28:06.918759 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:28:06.918768 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:28:06.918776 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:28:06.918785 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 01:28:06.918793 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Dec 13 01:28:06.918802 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:28:06.918810 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:28:06.918821 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:28:06.918829 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:28:06.918838 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 01:28:06.918846 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:28:06.918854 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 01:28:06.918862 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 01:28:06.918871 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:28:06.918879 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:28:06.918888 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:28:06.918898 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 01:28:06.918907 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:28:06.918916 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 01:28:06.918934 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:28:06.918965 systemd-journald[193]: Collecting audit messages is disabled. Dec 13 01:28:06.919000 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:28:06.919009 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:28:06.919018 systemd-journald[193]: Journal started Dec 13 01:28:06.919040 systemd-journald[193]: Runtime Journal (/run/log/journal/7ae0aea8e73c42cbacc8d88703f9824b) is 6.0M, max 48.3M, 42.2M free. Dec 13 01:28:06.921108 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:28:06.922023 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:28:06.925509 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:28:06.926399 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:28:06.934272 systemd-modules-load[194]: Inserted module 'overlay' Dec 13 01:28:06.939923 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:28:06.942119 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:28:06.944714 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:28:06.953171 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 01:28:06.966474 dracut-cmdline[221]: dracut-dracut-053 Dec 13 01:28:06.969444 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 01:28:06.978010 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. 
Update your scripts to load br_netfilter if you need this. Dec 13 01:28:06.980129 kernel: Bridge firewalling registered Dec 13 01:28:06.979784 systemd-modules-load[194]: Inserted module 'br_netfilter' Dec 13 01:28:06.982157 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:28:06.989229 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:28:07.000070 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:28:07.009157 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:28:07.044227 systemd-resolved[277]: Positive Trust Anchors: Dec 13 01:28:07.044249 systemd-resolved[277]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:28:07.044295 systemd-resolved[277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:28:07.047503 systemd-resolved[277]: Defaulting to hostname 'linux'. Dec 13 01:28:07.048863 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:28:07.054415 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:28:07.073017 kernel: SCSI subsystem initialized Dec 13 01:28:07.082009 kernel: Loading iSCSI transport class v2.0-870. Dec 13 01:28:07.093019 kernel: iscsi: registered transport (tcp) Dec 13 01:28:07.114025 kernel: iscsi: registered transport (qla4xxx) Dec 13 01:28:07.114074 kernel: QLogic iSCSI HBA Driver Dec 13 01:28:07.158056 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 01:28:07.169166 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 01:28:07.195824 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 01:28:07.195890 kernel: device-mapper: uevent: version 1.0.3 Dec 13 01:28:07.195910 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 01:28:07.236016 kernel: raid6: avx2x4 gen() 30249 MB/s Dec 13 01:28:07.253013 kernel: raid6: avx2x2 gen() 30950 MB/s Dec 13 01:28:07.270114 kernel: raid6: avx2x1 gen() 25921 MB/s Dec 13 01:28:07.270135 kernel: raid6: using algorithm avx2x2 gen() 30950 MB/s Dec 13 01:28:07.288283 kernel: raid6: .... xor() 19539 MB/s, rmw enabled Dec 13 01:28:07.288313 kernel: raid6: using avx2x2 recovery algorithm Dec 13 01:28:07.309016 kernel: xor: automatically using best checksumming function avx Dec 13 01:28:07.471017 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 01:28:07.483953 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:28:07.499149 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:28:07.510628 systemd-udevd[416]: Using default interface naming scheme 'v255'. Dec 13 01:28:07.515171 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:28:07.525147 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Dec 13 01:28:07.537106 dracut-pre-trigger[419]: rd.md=0: removing MD RAID activation Dec 13 01:28:07.571941 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:28:07.589248 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:28:07.653139 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:28:07.668206 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 01:28:07.686324 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 01:28:07.688962 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:28:07.692868 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:28:07.696201 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 01:28:07.696356 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:28:07.699024 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Dec 13 01:28:07.727717 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Dec 13 01:28:07.727937 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 01:28:07.727957 kernel: GPT:9289727 != 19775487 Dec 13 01:28:07.728005 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 01:28:07.728027 kernel: GPT:9289727 != 19775487 Dec 13 01:28:07.728046 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 01:28:07.728072 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 01:28:07.728090 kernel: AVX2 version of gcm_enc/dec engaged. Dec 13 01:28:07.728107 kernel: AES CTR mode by8 optimization enabled Dec 13 01:28:07.713228 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 01:28:07.723882 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:28:07.724156 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:28:07.727731 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:28:07.730701 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:28:07.741073 kernel: libata version 3.00 loaded. Dec 13 01:28:07.730898 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:28:07.732780 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:28:07.734930 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:28:07.742138 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Dec 13 01:28:07.754150 kernel: ahci 0000:00:1f.2: version 3.0 Dec 13 01:28:07.785888 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 13 01:28:07.785924 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (476) Dec 13 01:28:07.785937 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Dec 13 01:28:07.786187 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 13 01:28:07.786407 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (474) Dec 13 01:28:07.786420 kernel: scsi host0: ahci Dec 13 01:28:07.786638 kernel: scsi host1: ahci Dec 13 01:28:07.786786 kernel: scsi host2: ahci Dec 13 01:28:07.787030 kernel: scsi host3: ahci Dec 13 01:28:07.787186 kernel: scsi host4: ahci Dec 13 01:28:07.787326 kernel: scsi host5: ahci Dec 13 01:28:07.787475 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Dec 13 01:28:07.787487 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Dec 13 01:28:07.787498 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Dec 13 01:28:07.787508 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Dec 13 01:28:07.787522 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Dec 13 01:28:07.787533 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Dec 13 01:28:07.765312 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 13 01:28:07.772675 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 13 01:28:07.795637 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 01:28:07.802275 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Dec 13 01:28:07.805592 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 13 01:28:07.821113 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 01:28:07.823555 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:28:07.823611 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:28:07.826446 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:28:07.830045 disk-uuid[566]: Primary Header is updated. Dec 13 01:28:07.830045 disk-uuid[566]: Secondary Entries is updated. Dec 13 01:28:07.830045 disk-uuid[566]: Secondary Header is updated. Dec 13 01:28:07.833934 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:28:07.837137 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 01:28:07.839002 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 01:28:07.843010 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 01:28:07.855468 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:28:07.866196 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:28:07.892685 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Dec 13 01:28:08.097297 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 13 01:28:08.097380 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 13 01:28:08.097394 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Dec 13 01:28:08.097410 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 13 01:28:08.099007 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 13 01:28:08.099035 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Dec 13 01:28:08.100084 kernel: ata3.00: applying bridge limits Dec 13 01:28:08.101014 kernel: ata1: SATA link down (SStatus 0 SControl 300) Dec 13 01:28:08.102009 kernel: ata3.00: configured for UDMA/100 Dec 13 01:28:08.102032 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Dec 13 01:28:08.149558 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Dec 13 01:28:08.161648 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 13 01:28:08.161669 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Dec 13 01:28:08.844670 disk-uuid[567]: The operation has completed successfully. Dec 13 01:28:08.846712 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 01:28:08.870170 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 01:28:08.870365 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 01:28:08.910250 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 01:28:08.913573 sh[599]: Success Dec 13 01:28:08.927031 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Dec 13 01:28:08.959568 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 01:28:08.973804 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 01:28:08.975851 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 13 01:28:08.994176 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be Dec 13 01:28:08.994243 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:28:08.994257 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 01:28:08.995272 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 01:28:08.996060 kernel: BTRFS info (device dm-0): using free space tree Dec 13 01:28:09.001166 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 13 01:28:09.001785 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 01:28:09.015160 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 01:28:09.016642 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 01:28:09.031520 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:28:09.031575 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:28:09.031589 kernel: BTRFS info (device vda6): using free space tree Dec 13 01:28:09.035016 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 01:28:09.044624 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 01:28:09.047007 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:28:09.057909 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Dec 13 01:28:09.068201 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 01:28:09.122400 ignition[699]: Ignition 2.19.0 Dec 13 01:28:09.122414 ignition[699]: Stage: fetch-offline Dec 13 01:28:09.122449 ignition[699]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:28:09.122459 ignition[699]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:28:09.122555 ignition[699]: parsed url from cmdline: "" Dec 13 01:28:09.122559 ignition[699]: no config URL provided Dec 13 01:28:09.122564 ignition[699]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:28:09.122573 ignition[699]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:28:09.122600 ignition[699]: op(1): [started] loading QEMU firmware config module Dec 13 01:28:09.122605 ignition[699]: op(1): executing: "modprobe" "qemu_fw_cfg" Dec 13 01:28:09.132587 ignition[699]: op(1): [finished] loading QEMU firmware config module Dec 13 01:28:09.134145 ignition[699]: parsing config with SHA512: cbb8a2698f57be52f9743dc50b0293ec7a129c5bb750f0846aeb96c36eedcc05060f6c15f0d32d2ccf29dba22ac8735a24c815d6d4d6139aa07d8065357aa320 Dec 13 01:28:09.137065 unknown[699]: fetched base config from "system" Dec 13 01:28:09.137082 unknown[699]: fetched user config from "qemu" Dec 13 01:28:09.137449 ignition[699]: fetch-offline: fetch-offline passed Dec 13 01:28:09.137537 ignition[699]: Ignition finished successfully Dec 13 01:28:09.140312 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:28:09.152809 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:28:09.168381 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:28:09.191244 systemd-networkd[788]: lo: Link UP Dec 13 01:28:09.191255 systemd-networkd[788]: lo: Gained carrier Dec 13 01:28:09.192822 systemd-networkd[788]: Enumeration completed Dec 13 01:28:09.192962 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:28:09.193224 systemd-networkd[788]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:28:09.193228 systemd-networkd[788]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:28:09.194471 systemd-networkd[788]: eth0: Link UP Dec 13 01:28:09.194475 systemd-networkd[788]: eth0: Gained carrier Dec 13 01:28:09.194482 systemd-networkd[788]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:28:09.194646 systemd[1]: Reached target network.target - Network. Dec 13 01:28:09.196332 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 13 01:28:09.203190 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Dec 13 01:28:09.320129 systemd-networkd[788]: eth0: DHCPv4 address 10.0.0.53/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 01:28:09.331732 ignition[790]: Ignition 2.19.0 Dec 13 01:28:09.331745 ignition[790]: Stage: kargs Dec 13 01:28:09.331928 ignition[790]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:28:09.331939 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:28:09.332583 ignition[790]: kargs: kargs passed Dec 13 01:28:09.332636 ignition[790]: Ignition finished successfully Dec 13 01:28:09.336219 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 01:28:09.349200 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 13 01:28:09.367388 ignition[799]: Ignition 2.19.0 Dec 13 01:28:09.367399 ignition[799]: Stage: disks Dec 13 01:28:09.367603 ignition[799]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:28:09.367618 ignition[799]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:28:09.368280 ignition[799]: disks: disks passed Dec 13 01:28:09.370748 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 01:28:09.368324 ignition[799]: Ignition finished successfully Dec 13 01:28:09.372232 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 01:28:09.373730 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 01:28:09.375052 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:28:09.376981 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:28:09.377467 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:28:09.388186 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 13 01:28:09.403941 systemd-fsck[810]: ROOT: clean, 14/553520 files, 52654/553472 blocks Dec 13 01:28:09.411611 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 01:28:09.416203 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 01:28:09.540027 kernel: EXT4-fs (vda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none. Dec 13 01:28:09.540319 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 01:28:09.542466 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 01:28:09.556070 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:28:09.558506 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 01:28:09.560809 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 13 01:28:09.560865 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 01:28:09.560888 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:28:09.571299 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (818) Dec 13 01:28:09.571323 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:28:09.571338 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:28:09.571352 kernel: BTRFS info (device vda6): using free space tree Dec 13 01:28:09.569779 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Dec 13 01:28:09.574005 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 01:28:09.574747 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 13 01:28:09.577948 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 01:28:09.710684 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 01:28:09.714768 initrd-setup-root[849]: cut: /sysroot/etc/group: No such file or directory Dec 13 01:28:09.719258 initrd-setup-root[856]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 01:28:09.723536 initrd-setup-root[863]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 01:28:09.799482 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 01:28:09.807174 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 01:28:09.810731 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 01:28:09.833029 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:28:09.853924 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 01:28:09.928743 ignition[934]: INFO : Ignition 2.19.0 Dec 13 01:28:09.928743 ignition[934]: INFO : Stage: mount Dec 13 01:28:09.959382 ignition[934]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:28:09.959382 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:28:09.959382 ignition[934]: INFO : mount: mount passed Dec 13 01:28:09.959382 ignition[934]: INFO : Ignition finished successfully Dec 13 01:28:09.964935 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 01:28:09.972137 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 01:28:09.993283 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 01:28:10.012172 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:28:10.020021 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (943) Dec 13 01:28:10.022223 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:28:10.022237 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:28:10.022248 kernel: BTRFS info (device vda6): using free space tree Dec 13 01:28:10.026008 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 01:28:10.026903 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
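The `cut: ... No such file or directory` messages come from initrd-setup-root trying to extract fields from account databases that do not yet exist on the fresh /sysroot. A rough sketch of that kind of colon-separated field extraction with a missing-file fallback; the helper name and field index are illustrative, not the actual initrd script:

```python
from pathlib import Path

def passwd_field(passwd_path: str, user: str, index: int) -> str | None:
    """Return one colon-separated field for `user`, or None if the file
    is missing or the user is absent (the situation the initrd's `cut`
    calls run into on a freshly created /sysroot)."""
    path = Path(passwd_path)
    if not path.exists():
        return None
    for line in path.read_text().splitlines():
        fields = line.split(":")
        if fields and fields[0] == user:
            return fields[index] if index < len(fields) else None
    return None

# Field 6 is the login shell; prints None while /sysroot/etc/passwd is absent.
print(passwd_field("/sysroot/etc/passwd", "core", 6))
```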
Dec 13 01:28:10.056686 ignition[960]: INFO : Ignition 2.19.0 Dec 13 01:28:10.056686 ignition[960]: INFO : Stage: files Dec 13 01:28:10.058827 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:28:10.058827 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:28:10.058827 ignition[960]: DEBUG : files: compiled without relabeling support, skipping Dec 13 01:28:10.062739 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 01:28:10.062739 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 01:28:10.076596 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 01:28:10.097137 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 01:28:10.097137 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 01:28:10.097137 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Dec 13 01:28:10.077505 unknown[960]: wrote ssh authorized keys file for user: core Dec 13 01:28:10.103603 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 01:28:10.103603 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:28:10.103603 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:28:10.103603 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:28:10.103603 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:28:10.103603 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:28:10.103603 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Dec 13 01:28:10.462148 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Dec 13 01:28:11.025309 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:28:11.025309 ignition[960]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" Dec 13 01:28:11.029239 ignition[960]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 01:28:11.032594 ignition[960]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 01:28:11.032594 ignition[960]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" Dec 13 01:28:11.032594 ignition[960]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service" Dec 13 01:28:11.061737 ignition[960]: 
INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 01:28:11.066634 ignition[960]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 01:28:11.068340 ignition[960]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" Dec 13 01:28:11.069913 ignition[960]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:28:11.071687 ignition[960]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:28:11.073431 ignition[960]: INFO : files: files passed Dec 13 01:28:11.073431 ignition[960]: INFO : Ignition finished successfully Dec 13 01:28:11.076608 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 01:28:11.088153 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 01:28:11.097003 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 01:28:11.099944 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 01:28:11.101044 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 01:28:11.109156 initrd-setup-root-after-ignition[988]: grep: /sysroot/oem/oem-release: No such file or directory Dec 13 01:28:11.114174 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:28:11.114174 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:28:11.118109 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:28:11.121304 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:28:11.124915 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 01:28:11.139370 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 01:28:11.167045 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 01:28:11.167240 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 01:28:11.168092 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 01:28:11.171396 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 01:28:11.171777 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 01:28:11.173143 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 01:28:11.202132 systemd-networkd[788]: eth0: Gained IPv6LL Dec 13 01:28:11.204752 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:28:11.213850 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 01:28:11.224728 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:28:11.225333 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:28:11.225729 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 01:28:11.226340 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 01:28:11.226445 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
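For reference, the files-stage operations logged above (the `core` user and its SSH key, install.sh, update.conf, the kubernetes sysext download and its /etc/extensions symlink) are the kind of actions a spec-3.x Ignition config describes. Below is a hedged sketch of such a config assembled as a Python dict; the spec version, key, file contents, and modes are placeholders, since the real config is not shown in the log:

```python
import json

# Hypothetical Ignition-style config (spec 3.x shape) that would drive the
# operations the files stage logs: the "core" user with an SSH key,
# install.sh, update.conf, the kubernetes sysext image and its symlink.
config = {
    "ignition": {"version": "3.3.0"},
    "passwd": {
        "users": [
            {"name": "core",
             "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder"]}
        ]
    },
    "storage": {
        "files": [
            # mode is a decimal integer in the JSON encoding (0o755 == 493).
            {"path": "/home/core/install.sh", "mode": 0o755,
             "contents": {"source": "data:,%23!%2Fbin%2Fbash%0A"}},
            {"path": "/etc/flatcar/update.conf", "mode": 0o644,
             "contents": {"source": "data:,GROUP%3Dstable%0A"}},
            {"path": "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw",
             "contents": {"source": "https://github.com/flatcar/sysext-bakery/"
                                     "releases/download/latest/"
                                     "kubernetes-v1.29.2-x86-64.raw"}},
        ],
        "links": [
            {"path": "/etc/extensions/kubernetes.raw",
             "target": "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"}
        ],
    },
}

print(json.dumps(config, indent=2))
```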
Dec 13 01:28:11.234267 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 01:28:11.236700 systemd[1]: Stopped target basic.target - Basic System. Dec 13 01:28:11.238873 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 01:28:11.239488 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:28:11.239883 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 01:28:11.245387 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 01:28:11.267690 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:28:11.270385 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 01:28:11.271058 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 01:28:11.271627 systemd[1]: Stopped target swap.target - Swaps. Dec 13 01:28:11.272017 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 01:28:11.272122 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:28:11.273035 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:28:11.273830 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:28:11.274393 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 01:28:11.285489 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:28:11.286183 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 01:28:11.286286 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 01:28:11.286960 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 01:28:11.287076 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:28:11.287436 systemd[1]: Stopped target paths.target - Path Units. Dec 13 01:28:11.287745 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 01:28:11.300085 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:28:11.300501 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 01:28:11.303594 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 01:28:11.305539 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 01:28:11.305654 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:28:11.307555 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 01:28:11.307643 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:28:11.309916 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 01:28:11.310139 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:28:11.330322 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 01:28:11.330439 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 01:28:11.347217 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 01:28:11.349587 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 01:28:11.350676 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 01:28:11.350846 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:28:11.352432 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Dec 13 01:28:11.352559 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:28:11.360184 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 01:28:11.360371 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 01:28:11.367049 ignition[1014]: INFO : Ignition 2.19.0 Dec 13 01:28:11.367049 ignition[1014]: INFO : Stage: umount Dec 13 01:28:11.368815 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:28:11.368815 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:28:11.368815 ignition[1014]: INFO : umount: umount passed Dec 13 01:28:11.368815 ignition[1014]: INFO : Ignition finished successfully Dec 13 01:28:11.373119 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 01:28:11.374388 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 01:28:11.377532 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 01:28:11.379388 systemd[1]: Stopped target network.target - Network. Dec 13 01:28:11.381230 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 01:28:11.381296 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 01:28:11.384305 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 01:28:11.385392 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 01:28:11.387542 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 01:28:11.387597 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 01:28:11.390429 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 01:28:11.391418 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 01:28:11.393755 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 01:28:11.396059 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 01:28:11.403041 systemd-networkd[788]: eth0: DHCPv6 lease lost Dec 13 01:28:11.405061 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 01:28:11.405212 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 01:28:11.405960 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 01:28:11.406018 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:28:11.420116 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 01:28:11.432327 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 01:28:11.432438 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:28:11.434548 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:28:11.437167 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 01:28:11.437315 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 01:28:11.444128 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:28:11.444228 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:28:11.444909 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 01:28:11.444965 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 01:28:11.447645 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Dec 13 01:28:11.447706 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:28:11.452852 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 01:28:11.452971 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 01:28:11.496763 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 01:28:11.496965 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:28:11.497743 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 01:28:11.497793 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 01:28:11.500683 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 01:28:11.500722 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:28:11.501233 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 01:28:11.501286 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:28:11.502059 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 01:28:11.502109 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 01:28:11.554468 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:28:11.554572 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:28:11.585321 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 01:28:11.588022 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 01:28:11.589295 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:28:11.592251 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 13 01:28:11.593590 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:28:11.596364 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 01:28:11.596420 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:28:11.599754 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:28:11.600773 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:28:11.603552 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 01:28:11.604715 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 01:28:11.676870 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 01:28:11.678091 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 01:28:11.680557 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 01:28:11.683312 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 01:28:11.684498 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 01:28:11.700205 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 01:28:11.710435 systemd[1]: Switching root. Dec 13 01:28:11.741168 systemd-journald[193]: Journal stopped Dec 13 01:28:13.129246 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). 
Dec 13 01:28:13.129323 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 01:28:13.129343 kernel: SELinux: policy capability open_perms=1 Dec 13 01:28:13.129354 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 01:28:13.129372 kernel: SELinux: policy capability always_check_network=0 Dec 13 01:28:13.129389 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 01:28:13.129400 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 01:28:13.129417 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 01:28:13.129429 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 01:28:13.129443 kernel: audit: type=1403 audit(1734053292.132:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 01:28:13.129455 systemd[1]: Successfully loaded SELinux policy in 45.472ms. Dec 13 01:28:13.129480 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.442ms. Dec 13 01:28:13.129493 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:28:13.129507 systemd[1]: Detected virtualization kvm. Dec 13 01:28:13.129519 systemd[1]: Detected architecture x86-64. Dec 13 01:28:13.129533 systemd[1]: Detected first boot. Dec 13 01:28:13.129545 systemd[1]: Initializing machine ID from VM UUID. Dec 13 01:28:13.129557 zram_generator::config[1058]: No configuration found. Dec 13 01:28:13.129571 systemd[1]: Populated /etc with preset unit settings. Dec 13 01:28:13.129582 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 01:28:13.129594 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 13 01:28:13.129606 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 01:28:13.129619 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 01:28:13.129635 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 01:28:13.129647 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 01:28:13.129659 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 01:28:13.129671 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 01:28:13.129682 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 01:28:13.129694 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 01:28:13.129706 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 01:28:13.129718 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:28:13.129732 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:28:13.129750 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 01:28:13.129769 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 01:28:13.129781 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
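"Initializing machine ID from VM UUID" refers to deriving the 32-hex-character /etc/machine-id from the hypervisor-provided UUID. The sketch below shows only the textual normalization involved, not systemd's code (which reads the UUID from DMI and performs further validation); the example input re-expresses the id visible in the journal path a few entries later as a UUID:

```python
import re
import uuid

def machine_id_from_vm_uuid(vm_uuid: str) -> str:
    """Normalize a VM UUID into the 32-hex-character machine-id format.

    Illustrative only: validates the UUID, drops the dashes, and
    lowercases the result.
    """
    machine_id = uuid.UUID(vm_uuid).hex   # 32 lowercase hex chars, no dashes
    assert re.fullmatch(r"[0-9a-f]{32}", machine_id)
    return machine_id

print(machine_id_from_vm_uuid("7ae0aea8-e73c-42cb-acc8-d88703f9824b"))
```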
Dec 13 01:28:13.129795 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:28:13.129812 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 13 01:28:13.129829 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:28:13.129846 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 13 01:28:13.129859 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 13 01:28:13.129875 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 13 01:28:13.129887 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 01:28:13.129899 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:28:13.129910 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:28:13.129924 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:28:13.129936 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:28:13.129948 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 01:28:13.129960 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 01:28:13.129974 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:28:13.130005 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:28:13.130017 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:28:13.130029 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 01:28:13.130049 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 01:28:13.130062 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 01:28:13.130074 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 01:28:13.130086 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:28:13.130098 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 01:28:13.130113 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 01:28:13.130126 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 01:28:13.130138 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 01:28:13.130150 systemd[1]: Reached target machines.target - Containers. Dec 13 01:28:13.130161 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 01:28:13.130174 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:28:13.130186 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:28:13.130198 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 01:28:13.130213 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:28:13.130224 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:28:13.130236 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:28:13.130248 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Dec 13 01:28:13.130260 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:28:13.130272 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 01:28:13.130284 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 01:28:13.130296 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 13 01:28:13.130308 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 01:28:13.130322 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 01:28:13.130333 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:28:13.130345 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:28:13.130357 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 01:28:13.130369 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 01:28:13.130381 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:28:13.130393 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 01:28:13.130404 systemd[1]: Stopped verity-setup.service. Dec 13 01:28:13.130417 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:28:13.130434 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 01:28:13.130446 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 01:28:13.130458 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 01:28:13.130489 systemd-journald[1121]: Collecting audit messages is disabled. Dec 13 01:28:13.130513 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 01:28:13.130525 kernel: fuse: init (API version 7.39) Dec 13 01:28:13.130537 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 01:28:13.130548 systemd-journald[1121]: Journal started Dec 13 01:28:13.130569 systemd-journald[1121]: Runtime Journal (/run/log/journal/7ae0aea8e73c42cbacc8d88703f9824b) is 6.0M, max 48.3M, 42.2M free. Dec 13 01:28:12.828517 systemd[1]: Queued start job for default target multi-user.target. Dec 13 01:28:12.847910 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 13 01:28:12.848404 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 01:28:13.134281 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:28:13.134874 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 01:28:13.136657 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:28:13.137018 kernel: loop: module loaded Dec 13 01:28:13.138543 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 01:28:13.138725 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 01:28:13.140331 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:28:13.140556 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:28:13.142347 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:28:13.142684 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
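Every entry in this capture follows the same console shape: timestamp, source (optionally with a PID in brackets), and message. A small sketch of a regex that splits one entry into those parts, which can be handy when post-processing a capture like this one; it assumes one entry per line:

```python
import re

# Shape of the console entries in this capture, e.g.
#   "Dec 13 01:28:13.130569 systemd-journald[1121]: Journal started"
LINE = re.compile(
    r"^(?P<month>\w{3}) (?P<day>\d{2}) (?P<time>\d{2}:\d{2}:\d{2}\.\d{6}) "
    r"(?P<source>[^\s:\[]+)(?:\[(?P<pid>\d+)\])?: (?P<message>.*)$"
)

sample = "Dec 13 01:28:13.130569 systemd-journald[1121]: Journal started"
match = LINE.match(sample)
assert match is not None
print(match.group("source"), match.group("pid"), "->", match.group("message"))
```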
Dec 13 01:28:13.144607 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 01:28:13.144824 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 01:28:13.146667 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:28:13.146932 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:28:13.148690 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:28:13.150282 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 01:28:13.152212 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 01:28:13.167926 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 01:28:13.178030 kernel: ACPI: bus type drm_connector registered Dec 13 01:28:13.179225 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 01:28:13.183586 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 01:28:13.184859 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 01:28:13.184953 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:28:13.187066 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 01:28:13.189697 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 01:28:13.192086 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 01:28:13.193429 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:28:13.203182 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 01:28:13.207763 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 01:28:13.209059 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:28:13.212131 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 01:28:13.213563 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:28:13.216903 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:28:13.219611 systemd-journald[1121]: Time spent on flushing to /var/log/journal/7ae0aea8e73c42cbacc8d88703f9824b is 15.850ms for 976 entries. Dec 13 01:28:13.219611 systemd-journald[1121]: System Journal (/var/log/journal/7ae0aea8e73c42cbacc8d88703f9824b) is 8.0M, max 195.6M, 187.6M free. Dec 13 01:28:13.265612 systemd-journald[1121]: Received client request to flush runtime journal. Dec 13 01:28:13.266510 kernel: loop0: detected capacity change from 0 to 140768 Dec 13 01:28:13.220856 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 01:28:13.226272 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:28:13.230162 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 01:28:13.231747 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:28:13.231919 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Dec 13 01:28:13.233303 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 01:28:13.234884 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 01:28:13.237675 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 01:28:13.246172 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:28:13.249058 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 01:28:13.261942 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 01:28:13.272243 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 01:28:13.275663 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 01:28:13.277767 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 01:28:13.282005 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 01:28:13.291508 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:28:13.298285 udevadm[1185]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 01:28:13.339041 systemd-tmpfiles[1171]: ACLs are not supported, ignoring. Dec 13 01:28:13.339067 systemd-tmpfiles[1171]: ACLs are not supported, ignoring. Dec 13 01:28:13.340024 kernel: loop1: detected capacity change from 0 to 142488 Dec 13 01:28:13.346538 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:28:13.361271 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 01:28:13.380027 kernel: loop2: detected capacity change from 0 to 211296 Dec 13 01:28:13.397614 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 01:28:13.443163 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:28:13.516073 kernel: loop3: detected capacity change from 0 to 140768 Dec 13 01:28:13.659022 kernel: loop4: detected capacity change from 0 to 142488 Dec 13 01:28:13.659917 systemd-tmpfiles[1195]: ACLs are not supported, ignoring. Dec 13 01:28:13.660323 systemd-tmpfiles[1195]: ACLs are not supported, ignoring. Dec 13 01:28:13.667763 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:28:13.677041 kernel: loop5: detected capacity change from 0 to 211296 Dec 13 01:28:13.802011 (sd-merge)[1196]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Dec 13 01:28:13.802665 (sd-merge)[1196]: Merged extensions into '/usr'. Dec 13 01:28:13.807541 systemd[1]: Reloading requested from client PID 1170 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 01:28:13.807561 systemd[1]: Reloading... Dec 13 01:28:13.965040 zram_generator::config[1231]: No configuration found. Dec 13 01:28:14.135955 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:28:14.196655 systemd[1]: Reloading finished in 388 ms. Dec 13 01:28:14.241007 ldconfig[1165]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 01:28:14.252617 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
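The sd-merge lines show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, and kubernetes extension images onto /usr. Conceptually that is an overlay mount with the extension /usr trees stacked above the base; the sketch below only assembles such an overlayfs option string for illustration, with assumed staging paths rather than systemd-sysext's real internals:

```python
# Conceptual sketch of the sysext merge reported above: extension /usr trees
# stacked (highest priority first) over the base /usr via an overlay mount.
# The /run/extensions/... paths are assumptions made for the example.
extensions = ["containerd-flatcar", "docker-flatcar", "kubernetes"]

def overlay_lowerdir(names: list[str], base: str = "/usr") -> str:
    # In overlayfs, earlier lowerdir entries take precedence, so the
    # extension hierarchies are listed before the base /usr they extend.
    layers = [f"/run/extensions/{name}/usr" for name in names] + [base]
    return "lowerdir=" + ":".join(layers)

print(overlay_lowerdir(extensions))
# e.g. mount -t overlay overlay -o <option above> /usr   (read-only merge)
```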
Dec 13 01:28:14.254205 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 01:28:14.261485 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 01:28:14.278350 systemd[1]: Starting ensure-sysext.service... Dec 13 01:28:14.290973 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:28:14.305124 systemd[1]: Reloading requested from client PID 1262 ('systemctl') (unit ensure-sysext.service)... Dec 13 01:28:14.305156 systemd[1]: Reloading... Dec 13 01:28:14.333463 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 01:28:14.333835 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 01:28:14.334825 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 01:28:14.335142 systemd-tmpfiles[1263]: ACLs are not supported, ignoring. Dec 13 01:28:14.335217 systemd-tmpfiles[1263]: ACLs are not supported, ignoring. Dec 13 01:28:14.338648 systemd-tmpfiles[1263]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:28:14.338663 systemd-tmpfiles[1263]: Skipping /boot Dec 13 01:28:14.355815 systemd-tmpfiles[1263]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:28:14.355834 systemd-tmpfiles[1263]: Skipping /boot Dec 13 01:28:14.399024 zram_generator::config[1296]: No configuration found. Dec 13 01:28:14.523861 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:28:14.576631 systemd[1]: Reloading finished in 271 ms. Dec 13 01:28:14.596133 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 01:28:14.598042 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:28:14.614409 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:28:14.627772 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 01:28:14.631665 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 01:28:14.635701 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:28:14.641515 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 01:28:14.648233 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:28:14.648422 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:28:14.650124 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:28:14.659179 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:28:14.665393 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:28:14.673975 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:28:14.677168 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Dec 13 01:28:14.678570 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:28:14.680594 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:28:14.680870 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:28:14.683424 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:28:14.683740 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:28:14.716655 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 01:28:14.719467 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 01:28:14.722002 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:28:14.722258 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:28:14.723438 augenrules[1353]: No rules Dec 13 01:28:14.724908 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:28:14.736971 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:28:14.737214 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:28:14.743537 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:28:14.747542 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:28:14.750959 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:28:14.785100 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:28:14.799376 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:28:14.804416 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 01:28:14.805552 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:28:14.806804 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 01:28:14.808917 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 01:28:14.811044 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 01:28:14.813059 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:28:14.813372 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:28:14.815427 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:28:14.815753 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:28:14.817730 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:28:14.818090 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:28:14.829794 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 01:28:14.834899 systemd-udevd[1367]: Using default interface naming scheme 'v255'. Dec 13 01:28:14.835532 systemd[1]: Finished ensure-sysext.service. Dec 13 01:28:14.838976 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Dec 13 01:28:14.839249 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:28:14.845289 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:28:14.851275 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:28:14.856280 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:28:14.861660 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:28:14.863676 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:28:14.870234 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 13 01:28:14.871439 systemd-resolved[1339]: Positive Trust Anchors: Dec 13 01:28:14.871443 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:28:14.871460 systemd-resolved[1339]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:28:14.871480 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:28:14.871494 systemd-resolved[1339]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:28:14.871843 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:28:14.873635 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:28:14.873859 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:28:14.875391 systemd-resolved[1339]: Defaulting to hostname 'linux'. Dec 13 01:28:14.875639 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:28:14.875861 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:28:14.877363 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:28:14.877535 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:28:14.879153 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:28:14.880964 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:28:14.881357 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:28:14.890946 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:28:14.904364 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:28:14.905678 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:28:14.905828 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
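The positive trust anchor systemd-resolved prints above is the DNSSEC DS record for the root zone. Splitting it into its named fields makes the values easier to read; the interpretation comments reflect the standard DS field meanings:

```python
# The root DS record systemd-resolved lists as its positive trust anchor.
ds_record = (". IN DS 20326 8 2 "
             "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")

owner, rr_class, rr_type, key_tag, algorithm, digest_type, digest = ds_record.split()
print("owner      :", owner)             # "." (the DNS root)
print("key tag    :", int(key_tag))      # 20326 (the KSK-2017 key)
print("algorithm  :", int(algorithm))    # 8 = RSA/SHA-256
print("digest type:", int(digest_type))  # 2 = SHA-256
print("digest     :", digest)
```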
Dec 13 01:28:14.927009 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1391) Dec 13 01:28:15.044026 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1391) Dec 13 01:28:15.076769 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 13 01:28:15.079692 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 13 01:28:15.079756 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 01:28:15.097020 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1399) Dec 13 01:28:15.174031 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Dec 13 01:28:15.179759 systemd-networkd[1407]: lo: Link UP Dec 13 01:28:15.180310 kernel: ACPI: button: Power Button [PWRF] Dec 13 01:28:15.179774 systemd-networkd[1407]: lo: Gained carrier Dec 13 01:28:15.182656 systemd-networkd[1407]: Enumeration completed Dec 13 01:28:15.182806 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:28:15.183285 systemd-networkd[1407]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:28:15.183297 systemd-networkd[1407]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:28:15.184589 systemd[1]: Reached target network.target - Network. Dec 13 01:28:15.184626 systemd-networkd[1407]: eth0: Link UP Dec 13 01:28:15.184633 systemd-networkd[1407]: eth0: Gained carrier Dec 13 01:28:15.184651 systemd-networkd[1407]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:28:15.192233 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 01:28:15.200163 systemd-networkd[1407]: eth0: DHCPv4 address 10.0.0.53/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 01:28:15.201470 systemd-timesyncd[1388]: Network configuration changed, trying to establish connection. Dec 13 01:28:16.212797 systemd-resolved[1339]: Clock change detected. Flushing caches. Dec 13 01:28:16.213064 systemd-timesyncd[1388]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 13 01:28:16.213182 systemd-timesyncd[1388]: Initial clock synchronization to Fri 2024-12-13 01:28:16.212699 UTC. Dec 13 01:28:16.224548 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Dec 13 01:28:16.224655 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Dec 13 01:28:16.228406 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 13 01:28:16.228711 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Dec 13 01:28:16.228984 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 13 01:28:16.239768 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 01:28:16.246789 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 01:28:16.268139 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 01:28:16.285861 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
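systemd-timesyncd reports contacting 10.0.0.1 on port 123 and performing the initial clock synchronization, after which resolved flushes its caches because of the clock jump. As an illustration of the protocol involved, here is a minimal SNTP query sketch; it is not timesyncd's implementation, and the server address taken from the log is only reachable inside that environment:

```python
import socket
import struct
import time

NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 and 1970-01-01

def sntp_time(server: str = "10.0.0.1", timeout: float = 2.0) -> float:
    """Very small SNTP client: send a 48-byte request (LI=0, VN=3,
    Mode=3/client) and read the server's transmit timestamp."""
    packet = bytearray(48)
    packet[0] = 0x1B  # 0b00_011_011: leap=0, version=3, mode=client
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(packet, (server, 123))
        data, _ = sock.recvfrom(48)
    # Transmit timestamp: 32.32 fixed point at offset 40.
    seconds, fraction = struct.unpack("!II", data[40:48])
    return seconds - NTP_EPOCH_OFFSET + fraction / 2**32

if __name__ == "__main__":
    print(time.ctime(sntp_time()))
```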
Dec 13 01:28:16.376525 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 01:28:16.379146 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:28:16.380659 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:28:16.395025 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:28:16.471017 kernel: kvm_amd: TSC scaling supported Dec 13 01:28:16.471108 kernel: kvm_amd: Nested Virtualization enabled Dec 13 01:28:16.471133 kernel: kvm_amd: Nested Paging enabled Dec 13 01:28:16.472067 kernel: kvm_amd: LBR virtualization supported Dec 13 01:28:16.472106 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Dec 13 01:28:16.472704 kernel: kvm_amd: Virtual GIF supported Dec 13 01:28:16.497525 kernel: EDAC MC: Ver: 3.0.0 Dec 13 01:28:16.512531 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:28:16.531385 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 01:28:16.545922 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 01:28:16.554918 lvm[1437]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:28:16.587401 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 01:28:16.589959 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:28:16.591347 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:28:16.592734 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 01:28:16.594289 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 01:28:16.595999 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 01:28:16.597380 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 01:28:16.598931 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 01:28:16.600465 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 01:28:16.600646 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:28:16.601792 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:28:16.603919 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 01:28:16.607070 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 01:28:16.621360 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 01:28:16.624471 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 01:28:16.626826 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 01:28:16.632648 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 01:28:16.633970 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:28:16.635147 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:28:16.636319 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:28:16.636359 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
Dec 13 01:28:16.637800 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 01:28:16.640312 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 01:28:16.642657 lvm[1441]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:28:16.646439 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 01:28:16.650718 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 01:28:16.654664 jq[1445]: false Dec 13 01:28:16.653330 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 01:28:16.664367 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 01:28:16.671733 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 01:28:16.671752 dbus-daemon[1444]: [system] SELinux support is enabled Dec 13 01:28:16.676505 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 01:28:16.681969 extend-filesystems[1446]: Found loop3 Dec 13 01:28:16.682826 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 01:28:16.684734 extend-filesystems[1446]: Found loop4 Dec 13 01:28:16.684734 extend-filesystems[1446]: Found loop5 Dec 13 01:28:16.687616 extend-filesystems[1446]: Found sr0 Dec 13 01:28:16.687616 extend-filesystems[1446]: Found vda Dec 13 01:28:16.687616 extend-filesystems[1446]: Found vda1 Dec 13 01:28:16.687616 extend-filesystems[1446]: Found vda2 Dec 13 01:28:16.687616 extend-filesystems[1446]: Found vda3 Dec 13 01:28:16.687616 extend-filesystems[1446]: Found usr Dec 13 01:28:16.687616 extend-filesystems[1446]: Found vda4 Dec 13 01:28:16.687616 extend-filesystems[1446]: Found vda6 Dec 13 01:28:16.687616 extend-filesystems[1446]: Found vda7 Dec 13 01:28:16.687616 extend-filesystems[1446]: Found vda9 Dec 13 01:28:16.687616 extend-filesystems[1446]: Checking size of /dev/vda9 Dec 13 01:28:16.686707 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 01:28:16.687366 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 01:28:16.689728 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 01:28:16.692690 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 01:28:16.694785 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 01:28:16.715009 jq[1460]: true Dec 13 01:28:16.699083 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 01:28:16.723826 update_engine[1458]: I20241213 01:28:16.716818 1458 main.cc:92] Flatcar Update Engine starting Dec 13 01:28:16.723826 update_engine[1458]: I20241213 01:28:16.719564 1458 update_check_scheduler.cc:74] Next update check in 11m33s Dec 13 01:28:16.706000 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 01:28:16.706307 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 01:28:16.706755 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 01:28:16.707018 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Dec 13 01:28:16.711228 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 01:28:16.711563 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 01:28:16.738440 jq[1465]: true Dec 13 01:28:16.739886 (ntainerd)[1466]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 01:28:16.743073 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 01:28:16.743626 extend-filesystems[1446]: Resized partition /dev/vda9 Dec 13 01:28:16.751634 extend-filesystems[1478]: resize2fs 1.47.1 (20-May-2024) Dec 13 01:28:16.744711 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 01:28:16.746615 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 01:28:16.746636 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 01:28:16.750443 systemd[1]: Started update-engine.service - Update Engine. Dec 13 01:28:16.759921 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1393) Dec 13 01:28:16.761404 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 01:28:16.796951 systemd-logind[1454]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 01:28:16.796988 systemd-logind[1454]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 01:28:16.889815 systemd-logind[1454]: New seat seat0. Dec 13 01:28:16.893686 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 01:28:16.903599 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Dec 13 01:28:16.912512 locksmithd[1479]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 01:28:17.084109 sshd_keygen[1464]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 01:28:17.103521 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Dec 13 01:28:17.122005 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 01:28:17.131256 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 01:28:17.136551 systemd[1]: Started sshd@0-10.0.0.53:22-10.0.0.1:55926.service - OpenSSH per-connection server daemon (10.0.0.1:55926). Dec 13 01:28:17.146426 extend-filesystems[1478]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 01:28:17.146426 extend-filesystems[1478]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 01:28:17.146426 extend-filesystems[1478]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Dec 13 01:28:17.152604 extend-filesystems[1446]: Resized filesystem in /dev/vda9 Dec 13 01:28:17.147358 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 01:28:17.148584 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 01:28:17.157898 bash[1494]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:28:17.155311 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 01:28:17.155639 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Dec 13 01:28:17.159051 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 01:28:17.171370 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 13 01:28:17.201154 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 01:28:17.226909 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 01:28:17.242830 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 01:28:17.248855 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 01:28:17.250827 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 01:28:17.330655 sshd[1508]: Accepted publickey for core from 10.0.0.1 port 55926 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:28:17.336385 sshd[1508]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:28:17.361294 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 01:28:17.419712 systemd-networkd[1407]: eth0: Gained IPv6LL Dec 13 01:28:17.421982 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 01:28:17.427963 systemd-logind[1454]: New session 1 of user core. Dec 13 01:28:17.428665 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 01:28:17.433846 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 01:28:17.445191 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 13 01:28:17.452713 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:28:17.458921 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 01:28:17.473007 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 01:28:17.498606 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 01:28:17.502283 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 13 01:28:17.502749 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 13 01:28:17.509951 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 01:28:17.520601 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 01:28:17.526643 (systemd)[1539]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:28:17.665277 containerd[1466]: time="2024-12-13T01:28:17.663537033Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 01:28:17.711221 containerd[1466]: time="2024-12-13T01:28:17.710846635Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:28:17.713475 containerd[1466]: time="2024-12-13T01:28:17.713419179Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:28:17.713611 containerd[1466]: time="2024-12-13T01:28:17.713588657Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Dec 13 01:28:17.713691 containerd[1466]: time="2024-12-13T01:28:17.713672044Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 01:28:17.714408 containerd[1466]: time="2024-12-13T01:28:17.714098924Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 01:28:17.714408 containerd[1466]: time="2024-12-13T01:28:17.714131776Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 01:28:17.714408 containerd[1466]: time="2024-12-13T01:28:17.714271819Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:28:17.714408 containerd[1466]: time="2024-12-13T01:28:17.714294571Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:28:17.714853 containerd[1466]: time="2024-12-13T01:28:17.714823102Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:28:17.714935 containerd[1466]: time="2024-12-13T01:28:17.714914263Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 01:28:17.715026 containerd[1466]: time="2024-12-13T01:28:17.715002369Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:28:17.715101 containerd[1466]: time="2024-12-13T01:28:17.715080816Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 01:28:17.715358 containerd[1466]: time="2024-12-13T01:28:17.715333359Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:28:17.715853 containerd[1466]: time="2024-12-13T01:28:17.715826574Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:28:17.716508 containerd[1466]: time="2024-12-13T01:28:17.716094707Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:28:17.716508 containerd[1466]: time="2024-12-13T01:28:17.716122128Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 01:28:17.716508 containerd[1466]: time="2024-12-13T01:28:17.716293159Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 01:28:17.716508 containerd[1466]: time="2024-12-13T01:28:17.716390311Z" level=info msg="metadata content store policy set" policy=shared Dec 13 01:28:17.788713 systemd[1539]: Queued start job for default target default.target. Dec 13 01:28:17.805668 systemd[1539]: Created slice app.slice - User Application Slice. Dec 13 01:28:17.805705 systemd[1539]: Reached target paths.target - Paths. Dec 13 01:28:17.805723 systemd[1539]: Reached target timers.target - Timers. 
Dec 13 01:28:17.808096 systemd[1539]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 01:28:17.835691 systemd[1539]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 01:28:17.835861 systemd[1539]: Reached target sockets.target - Sockets. Dec 13 01:28:17.835884 systemd[1539]: Reached target basic.target - Basic System. Dec 13 01:28:17.835940 systemd[1539]: Reached target default.target - Main User Target. Dec 13 01:28:17.835990 systemd[1539]: Startup finished in 297ms. Dec 13 01:28:17.836512 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 01:28:17.850802 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 01:28:17.930840 systemd[1]: Started sshd@1-10.0.0.53:22-10.0.0.1:51378.service - OpenSSH per-connection server daemon (10.0.0.1:51378). Dec 13 01:28:17.977654 containerd[1466]: time="2024-12-13T01:28:17.977470781Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 01:28:17.977654 containerd[1466]: time="2024-12-13T01:28:17.977630811Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 01:28:17.977654 containerd[1466]: time="2024-12-13T01:28:17.977661598Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 01:28:17.977847 containerd[1466]: time="2024-12-13T01:28:17.977683630Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 01:28:17.977847 containerd[1466]: time="2024-12-13T01:28:17.977712053Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 01:28:17.978206 containerd[1466]: time="2024-12-13T01:28:17.978156897Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 01:28:17.978790 containerd[1466]: time="2024-12-13T01:28:17.978698543Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 01:28:17.979033 containerd[1466]: time="2024-12-13T01:28:17.979002683Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 01:28:17.979100 containerd[1466]: time="2024-12-13T01:28:17.979046055Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 01:28:17.979100 containerd[1466]: time="2024-12-13T01:28:17.979071172Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 01:28:17.979100 containerd[1466]: time="2024-12-13T01:28:17.979092301Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 01:28:17.979216 containerd[1466]: time="2024-12-13T01:28:17.979110896Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 01:28:17.979216 containerd[1466]: time="2024-12-13T01:28:17.979134030Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 01:28:17.979216 containerd[1466]: time="2024-12-13T01:28:17.979166781Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Dec 13 01:28:17.979216 containerd[1466]: time="2024-12-13T01:28:17.979196457Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 01:28:17.979385 containerd[1466]: time="2024-12-13T01:28:17.979219991Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 01:28:17.979385 containerd[1466]: time="2024-12-13T01:28:17.979238355Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 01:28:17.979385 containerd[1466]: time="2024-12-13T01:28:17.979257601Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 01:28:17.979385 containerd[1466]: time="2024-12-13T01:28:17.979293148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 01:28:17.979385 containerd[1466]: time="2024-12-13T01:28:17.979315290Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 01:28:17.979385 containerd[1466]: time="2024-12-13T01:28:17.979333804Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 01:28:17.979385 containerd[1466]: time="2024-12-13T01:28:17.979359162Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 01:28:17.979385 containerd[1466]: time="2024-12-13T01:28:17.979381594Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 01:28:17.979385 containerd[1466]: time="2024-12-13T01:28:17.979403665Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 01:28:17.979769 containerd[1466]: time="2024-12-13T01:28:17.979420868Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 01:28:17.979769 containerd[1466]: time="2024-12-13T01:28:17.979442388Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 01:28:17.979769 containerd[1466]: time="2024-12-13T01:28:17.979460081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 01:28:17.979769 containerd[1466]: time="2024-12-13T01:28:17.979517549Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 01:28:17.979769 containerd[1466]: time="2024-12-13T01:28:17.979536675Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 01:28:17.979769 containerd[1466]: time="2024-12-13T01:28:17.979557774Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 01:28:17.979769 containerd[1466]: time="2024-12-13T01:28:17.979576840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 01:28:17.979769 containerd[1466]: time="2024-12-13T01:28:17.979600174Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 01:28:17.979769 containerd[1466]: time="2024-12-13T01:28:17.979647342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Dec 13 01:28:17.979769 containerd[1466]: time="2024-12-13T01:28:17.979667520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 01:28:17.979769 containerd[1466]: time="2024-12-13T01:28:17.979683670Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 01:28:17.979769 containerd[1466]: time="2024-12-13T01:28:17.979774491Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 01:28:17.980213 containerd[1466]: time="2024-12-13T01:28:17.979804758Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 01:28:17.980213 containerd[1466]: time="2024-12-13T01:28:17.979820878Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 01:28:17.980213 containerd[1466]: time="2024-12-13T01:28:17.979837679Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 01:28:17.980213 containerd[1466]: time="2024-12-13T01:28:17.979851014Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 01:28:17.980213 containerd[1466]: time="2024-12-13T01:28:17.979871342Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 01:28:17.980213 containerd[1466]: time="2024-12-13T01:28:17.979894275Z" level=info msg="NRI interface is disabled by configuration." Dec 13 01:28:17.980213 containerd[1466]: time="2024-12-13T01:28:17.979915034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Dec 13 01:28:17.980465 containerd[1466]: time="2024-12-13T01:28:17.980373174Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 01:28:17.980805 containerd[1466]: time="2024-12-13T01:28:17.980470065Z" level=info msg="Connect containerd service" Dec 13 01:28:17.980805 containerd[1466]: time="2024-12-13T01:28:17.980560705Z" level=info msg="using legacy CRI server" Dec 13 01:28:17.980805 containerd[1466]: time="2024-12-13T01:28:17.980574772Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 01:28:17.980920 containerd[1466]: time="2024-12-13T01:28:17.980867851Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 01:28:17.982079 containerd[1466]: time="2024-12-13T01:28:17.982008981Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:28:17.985387 
containerd[1466]: time="2024-12-13T01:28:17.982463253Z" level=info msg="Start subscribing containerd event" Dec 13 01:28:17.985387 containerd[1466]: time="2024-12-13T01:28:17.982598797Z" level=info msg="Start recovering state" Dec 13 01:28:17.985387 containerd[1466]: time="2024-12-13T01:28:17.982716969Z" level=info msg="Start event monitor" Dec 13 01:28:17.985387 containerd[1466]: time="2024-12-13T01:28:17.982743659Z" level=info msg="Start snapshots syncer" Dec 13 01:28:17.985387 containerd[1466]: time="2024-12-13T01:28:17.982764789Z" level=info msg="Start cni network conf syncer for default" Dec 13 01:28:17.985387 containerd[1466]: time="2024-12-13T01:28:17.982778615Z" level=info msg="Start streaming server" Dec 13 01:28:17.985387 containerd[1466]: time="2024-12-13T01:28:17.983247734Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 01:28:17.985387 containerd[1466]: time="2024-12-13T01:28:17.983322585Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 01:28:17.985387 containerd[1466]: time="2024-12-13T01:28:17.984512095Z" level=info msg="containerd successfully booted in 0.324978s" Dec 13 01:28:17.983557 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 01:28:18.006672 sshd[1558]: Accepted publickey for core from 10.0.0.1 port 51378 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:28:18.009973 sshd[1558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:28:18.029303 systemd-logind[1454]: New session 2 of user core. Dec 13 01:28:18.131959 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 01:28:18.231882 sshd[1558]: pam_unix(sshd:session): session closed for user core Dec 13 01:28:18.269529 systemd[1]: sshd@1-10.0.0.53:22-10.0.0.1:51378.service: Deactivated successfully. Dec 13 01:28:18.315235 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 01:28:18.327002 systemd-logind[1454]: Session 2 logged out. Waiting for processes to exit. Dec 13 01:28:18.345007 systemd[1]: Started sshd@2-10.0.0.53:22-10.0.0.1:51380.service - OpenSSH per-connection server daemon (10.0.0.1:51380). Dec 13 01:28:18.361555 systemd-logind[1454]: Removed session 2. Dec 13 01:28:18.413225 sshd[1566]: Accepted publickey for core from 10.0.0.1 port 51380 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:28:18.468027 sshd[1566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:28:18.485938 systemd-logind[1454]: New session 3 of user core. Dec 13 01:28:18.497822 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 01:28:18.591866 sshd[1566]: pam_unix(sshd:session): session closed for user core Dec 13 01:28:18.602856 systemd[1]: sshd@2-10.0.0.53:22-10.0.0.1:51380.service: Deactivated successfully. Dec 13 01:28:18.607499 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 01:28:18.611788 systemd-logind[1454]: Session 3 logged out. Waiting for processes to exit. Dec 13 01:28:18.613867 systemd-logind[1454]: Removed session 3. Dec 13 01:28:20.328709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:28:20.336364 systemd[1]: Reached target multi-user.target - Multi-User System. 
Dec 13 01:28:20.337410 (kubelet)[1576]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:28:20.339920 systemd[1]: Startup finished in 718ms (kernel) + 5.410s (initrd) + 7.240s (userspace) = 13.369s. Dec 13 01:28:21.471438 kubelet[1576]: E1213 01:28:21.471323 1576 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:28:21.478445 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:28:21.478791 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:28:21.480574 systemd[1]: kubelet.service: Consumed 2.621s CPU time. Dec 13 01:28:28.604233 systemd[1]: Started sshd@3-10.0.0.53:22-10.0.0.1:35552.service - OpenSSH per-connection server daemon (10.0.0.1:35552). Dec 13 01:28:28.642235 sshd[1593]: Accepted publickey for core from 10.0.0.1 port 35552 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:28:28.644033 sshd[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:28:28.648239 systemd-logind[1454]: New session 4 of user core. Dec 13 01:28:28.657748 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 01:28:28.715297 sshd[1593]: pam_unix(sshd:session): session closed for user core Dec 13 01:28:28.727153 systemd[1]: sshd@3-10.0.0.53:22-10.0.0.1:35552.service: Deactivated successfully. Dec 13 01:28:28.728981 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 01:28:28.730688 systemd-logind[1454]: Session 4 logged out. Waiting for processes to exit. Dec 13 01:28:28.732163 systemd[1]: Started sshd@4-10.0.0.53:22-10.0.0.1:35562.service - OpenSSH per-connection server daemon (10.0.0.1:35562). Dec 13 01:28:28.733170 systemd-logind[1454]: Removed session 4. Dec 13 01:28:28.773372 sshd[1600]: Accepted publickey for core from 10.0.0.1 port 35562 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:28:28.775431 sshd[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:28:28.780875 systemd-logind[1454]: New session 5 of user core. Dec 13 01:28:28.800762 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 01:28:28.852557 sshd[1600]: pam_unix(sshd:session): session closed for user core Dec 13 01:28:28.865273 systemd[1]: sshd@4-10.0.0.53:22-10.0.0.1:35562.service: Deactivated successfully. Dec 13 01:28:28.866865 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 01:28:28.868451 systemd-logind[1454]: Session 5 logged out. Waiting for processes to exit. Dec 13 01:28:28.878801 systemd[1]: Started sshd@5-10.0.0.53:22-10.0.0.1:35572.service - OpenSSH per-connection server daemon (10.0.0.1:35572). Dec 13 01:28:28.879859 systemd-logind[1454]: Removed session 5. Dec 13 01:28:28.913459 sshd[1607]: Accepted publickey for core from 10.0.0.1 port 35572 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:28:28.915544 sshd[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:28:28.920367 systemd-logind[1454]: New session 6 of user core. Dec 13 01:28:28.929702 systemd[1]: Started session-6.scope - Session 6 of User core. 
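The kubelet exit above is the expected first-boot state on a node that has not been joined yet: /var/lib/kubelet/config.yaml is normally only written later by kubeadm or another provisioner. As a rough illustration of what that file carries, the sketch below loads a KubeletConfiguration using the published v1beta1 type; the path and the printed fields are taken from this log and the upstream API, everything else is an illustrative assumption rather than the node's actual provisioning flow.

package main

import (
	"fmt"
	"log"
	"os"

	kubeletv1beta1 "k8s.io/kubelet/config/v1beta1"
	"sigs.k8s.io/yaml"
)

func main() {
	// Path the kubelet reported as missing in the log above.
	const path = "/var/lib/kubelet/config.yaml"

	data, err := os.ReadFile(path)
	if err != nil {
		// On a not-yet-joined node this reproduces the "no such file or directory" failure.
		log.Fatalf("kubelet config not readable: %v", err)
	}

	var cfg kubeletv1beta1.KubeletConfiguration
	if err := yaml.Unmarshal(data, &cfg); err != nil {
		log.Fatalf("parsing %s: %v", path, err)
	}

	// Two fields relevant to the units seen in this log: the cgroup driver should
	// match containerd's SystemdCgroup=true runc option, and staticPodPath is the
	// /etc/kubernetes/manifests directory the kubelet watches for static pods.
	fmt.Printf("cgroupDriver=%s staticPodPath=%s\n", cfg.CgroupDriver, cfg.StaticPodPath)
}
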
Dec 13 01:28:28.988517 sshd[1607]: pam_unix(sshd:session): session closed for user core Dec 13 01:28:29.004530 systemd[1]: sshd@5-10.0.0.53:22-10.0.0.1:35572.service: Deactivated successfully. Dec 13 01:28:29.007265 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 01:28:29.009679 systemd-logind[1454]: Session 6 logged out. Waiting for processes to exit. Dec 13 01:28:29.019056 systemd[1]: Started sshd@6-10.0.0.53:22-10.0.0.1:35576.service - OpenSSH per-connection server daemon (10.0.0.1:35576). Dec 13 01:28:29.020562 systemd-logind[1454]: Removed session 6. Dec 13 01:28:29.057385 sshd[1614]: Accepted publickey for core from 10.0.0.1 port 35576 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:28:29.059725 sshd[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:28:29.065016 systemd-logind[1454]: New session 7 of user core. Dec 13 01:28:29.077808 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 01:28:29.140822 sudo[1617]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 01:28:29.141340 sudo[1617]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:28:29.159603 sudo[1617]: pam_unix(sudo:session): session closed for user root Dec 13 01:28:29.162301 sshd[1614]: pam_unix(sshd:session): session closed for user core Dec 13 01:28:29.176350 systemd[1]: sshd@6-10.0.0.53:22-10.0.0.1:35576.service: Deactivated successfully. Dec 13 01:28:29.179154 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 01:28:29.181541 systemd-logind[1454]: Session 7 logged out. Waiting for processes to exit. Dec 13 01:28:29.196982 systemd[1]: Started sshd@7-10.0.0.53:22-10.0.0.1:35592.service - OpenSSH per-connection server daemon (10.0.0.1:35592). Dec 13 01:28:29.198668 systemd-logind[1454]: Removed session 7. Dec 13 01:28:29.229863 sshd[1622]: Accepted publickey for core from 10.0.0.1 port 35592 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:28:29.231612 sshd[1622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:28:29.236423 systemd-logind[1454]: New session 8 of user core. Dec 13 01:28:29.246753 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 01:28:29.301888 sudo[1626]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 01:28:29.302293 sudo[1626]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:28:29.307224 sudo[1626]: pam_unix(sudo:session): session closed for user root Dec 13 01:28:29.313654 sudo[1625]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 01:28:29.314066 sudo[1625]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:28:29.340815 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Dec 13 01:28:29.342717 auditctl[1629]: No rules Dec 13 01:28:29.343989 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 01:28:29.344313 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Dec 13 01:28:29.346284 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:28:29.377635 augenrules[1647]: No rules Dec 13 01:28:29.378952 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Dec 13 01:28:29.380347 sudo[1625]: pam_unix(sudo:session): session closed for user root Dec 13 01:28:29.382472 sshd[1622]: pam_unix(sshd:session): session closed for user core Dec 13 01:28:29.395814 systemd[1]: sshd@7-10.0.0.53:22-10.0.0.1:35592.service: Deactivated successfully. Dec 13 01:28:29.397909 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 01:28:29.399449 systemd-logind[1454]: Session 8 logged out. Waiting for processes to exit. Dec 13 01:28:29.409858 systemd[1]: Started sshd@8-10.0.0.53:22-10.0.0.1:35594.service - OpenSSH per-connection server daemon (10.0.0.1:35594). Dec 13 01:28:29.410919 systemd-logind[1454]: Removed session 8. Dec 13 01:28:29.446034 sshd[1655]: Accepted publickey for core from 10.0.0.1 port 35594 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:28:29.447846 sshd[1655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:28:29.453124 systemd-logind[1454]: New session 9 of user core. Dec 13 01:28:29.462760 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 01:28:29.517780 sudo[1658]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 01:28:29.518211 sudo[1658]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:28:29.542959 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 13 01:28:29.569036 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 13 01:28:29.569405 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 13 01:28:30.734979 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:28:30.735151 systemd[1]: kubelet.service: Consumed 2.621s CPU time. Dec 13 01:28:30.744898 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:28:30.763640 systemd[1]: Reloading requested from client PID 1706 ('systemctl') (unit session-9.scope)... Dec 13 01:28:30.763665 systemd[1]: Reloading... Dec 13 01:28:30.864601 zram_generator::config[1746]: No configuration found. Dec 13 01:28:31.403304 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:28:31.518636 systemd[1]: Reloading finished in 754 ms. Dec 13 01:28:31.587431 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 01:28:31.587658 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 01:28:31.588165 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:28:31.590728 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:28:31.789642 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:28:31.796737 (kubelet)[1792]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:28:31.973525 kubelet[1792]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:28:31.973525 kubelet[1792]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Dec 13 01:28:31.973525 kubelet[1792]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:28:31.973942 kubelet[1792]: I1213 01:28:31.973534 1792 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:28:32.676729 kubelet[1792]: I1213 01:28:32.676659 1792 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 01:28:32.676729 kubelet[1792]: I1213 01:28:32.676711 1792 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:28:32.677120 kubelet[1792]: I1213 01:28:32.677084 1792 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 01:28:32.695169 kubelet[1792]: I1213 01:28:32.695120 1792 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:28:32.710777 kubelet[1792]: I1213 01:28:32.710712 1792 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 01:28:32.711037 kubelet[1792]: I1213 01:28:32.711012 1792 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:28:32.711367 kubelet[1792]: I1213 01:28:32.711243 1792 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:28:32.712093 kubelet[1792]: I1213 01:28:32.712065 1792 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:28:32.712120 kubelet[1792]: I1213 01:28:32.712098 1792 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:28:32.712286 kubelet[1792]: I1213 01:28:32.712265 1792 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:28:32.712445 kubelet[1792]: I1213 01:28:32.712426 1792 kubelet.go:396] "Attempting to sync node with API server" Dec 13 01:28:32.712465 kubelet[1792]: I1213 01:28:32.712454 1792 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 
01:28:32.712549 kubelet[1792]: I1213 01:28:32.712528 1792 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:28:32.712584 kubelet[1792]: I1213 01:28:32.712562 1792 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:28:32.712806 kubelet[1792]: E1213 01:28:32.712714 1792 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:28:32.712806 kubelet[1792]: E1213 01:28:32.712769 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:28:32.714764 kubelet[1792]: I1213 01:28:32.714717 1792 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:28:32.719010 kubelet[1792]: W1213 01:28:32.718692 1792 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Dec 13 01:28:32.719010 kubelet[1792]: E1213 01:28:32.718765 1792 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Dec 13 01:28:32.719010 kubelet[1792]: I1213 01:28:32.718808 1792 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:28:32.719010 kubelet[1792]: W1213 01:28:32.718920 1792 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 01:28:32.719010 kubelet[1792]: W1213 01:28:32.718906 1792 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "10.0.0.53" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Dec 13 01:28:32.719010 kubelet[1792]: E1213 01:28:32.718971 1792 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.53" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Dec 13 01:28:32.720414 kubelet[1792]: I1213 01:28:32.720164 1792 server.go:1256] "Started kubelet" Dec 13 01:28:32.720414 kubelet[1792]: I1213 01:28:32.720253 1792 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:28:32.720414 kubelet[1792]: I1213 01:28:32.720265 1792 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:28:32.721237 kubelet[1792]: I1213 01:28:32.720696 1792 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:28:32.724912 kubelet[1792]: I1213 01:28:32.724860 1792 server.go:461] "Adding debug handlers to kubelet server" Dec 13 01:28:32.728885 kubelet[1792]: I1213 01:28:32.728827 1792 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:28:32.729210 kubelet[1792]: E1213 01:28:32.729175 1792 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:28:32.730508 kubelet[1792]: I1213 01:28:32.730410 1792 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:28:32.730643 kubelet[1792]: I1213 01:28:32.730610 1792 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 01:28:32.730795 kubelet[1792]: I1213 01:28:32.730718 1792 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 01:28:32.731860 kubelet[1792]: E1213 01:28:32.731565 1792 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.53.1810984cee45eaee default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.53,UID:10.0.0.53,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.53,},FirstTimestamp:2024-12-13 01:28:32.720136942 +0000 UTC m=+0.836717365,LastTimestamp:2024-12-13 01:28:32.720136942 +0000 UTC m=+0.836717365,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.53,}" Dec 13 01:28:32.732374 kubelet[1792]: I1213 01:28:32.732352 1792 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:28:32.732537 kubelet[1792]: I1213 01:28:32.732512 1792 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:28:32.734185 kubelet[1792]: I1213 01:28:32.734148 1792 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:28:32.765694 kubelet[1792]: W1213 01:28:32.749590 1792 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Dec 13 01:28:32.765694 kubelet[1792]: E1213 01:28:32.749648 1792 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Dec 13 01:28:32.765694 kubelet[1792]: E1213 01:28:32.749865 1792 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.53.1810984ceecf6536 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.53,UID:10.0.0.53,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.0.0.53,},FirstTimestamp:2024-12-13 01:28:32.729146678 +0000 UTC m=+0.845727102,LastTimestamp:2024-12-13 01:28:32.729146678 +0000 UTC m=+0.845727102,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.53,}" Dec 13 01:28:32.766537 kubelet[1792]: E1213 01:28:32.766508 1792 controller.go:145] "Failed 
to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.53\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Dec 13 01:28:32.776806 kubelet[1792]: I1213 01:28:32.776768 1792 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:28:32.776806 kubelet[1792]: I1213 01:28:32.776799 1792 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:28:32.776953 kubelet[1792]: I1213 01:28:32.776822 1792 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:28:32.778115 kubelet[1792]: E1213 01:28:32.778066 1792 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.53.1810984cf1951543 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.53,UID:10.0.0.53,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.53 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.53,},FirstTimestamp:2024-12-13 01:28:32.775656771 +0000 UTC m=+0.892237195,LastTimestamp:2024-12-13 01:28:32.775656771 +0000 UTC m=+0.892237195,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.53,}" Dec 13 01:28:32.783404 kubelet[1792]: E1213 01:28:32.783354 1792 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.53.1810984cf19555f5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.53,UID:10.0.0.53,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 10.0.0.53 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:10.0.0.53,},FirstTimestamp:2024-12-13 01:28:32.775673333 +0000 UTC m=+0.892253756,LastTimestamp:2024-12-13 01:28:32.775673333 +0000 UTC m=+0.892253756,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.53,}" Dec 13 01:28:32.788781 kubelet[1792]: E1213 01:28:32.788725 1792 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.53.1810984cf1956451 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.53,UID:10.0.0.53,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 10.0.0.53 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:10.0.0.53,},FirstTimestamp:2024-12-13 01:28:32.775677009 +0000 UTC m=+0.892257432,LastTimestamp:2024-12-13 01:28:32.775677009 +0000 UTC m=+0.892257432,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.53,}" Dec 13 01:28:32.831857 kubelet[1792]: I1213 01:28:32.831813 1792 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.53" Dec 13 01:28:32.837325 
kubelet[1792]: E1213 01:28:32.837262 1792 kubelet_node_status.go:96] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.53" Dec 13 01:28:32.837565 kubelet[1792]: E1213 01:28:32.837458 1792 event.go:346] "Server rejected event (will not retry!)" err="events \"10.0.0.53.1810984cf1951543\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.53.1810984cf1951543 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.53,UID:10.0.0.53,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.53 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.53,},FirstTimestamp:2024-12-13 01:28:32.775656771 +0000 UTC m=+0.892237195,LastTimestamp:2024-12-13 01:28:32.831702337 +0000 UTC m=+0.948282760,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.53,}" Dec 13 01:28:32.842773 kubelet[1792]: E1213 01:28:32.842687 1792 event.go:346] "Server rejected event (will not retry!)" err="events \"10.0.0.53.1810984cf19555f5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.53.1810984cf19555f5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.53,UID:10.0.0.53,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 10.0.0.53 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:10.0.0.53,},FirstTimestamp:2024-12-13 01:28:32.775673333 +0000 UTC m=+0.892253756,LastTimestamp:2024-12-13 01:28:32.831716804 +0000 UTC m=+0.948297227,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.53,}" Dec 13 01:28:32.848345 kubelet[1792]: E1213 01:28:32.848276 1792 event.go:346] "Server rejected event (will not retry!)" err="events \"10.0.0.53.1810984cf1956451\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.53.1810984cf1956451 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.53,UID:10.0.0.53,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 10.0.0.53 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:10.0.0.53,},FirstTimestamp:2024-12-13 01:28:32.775677009 +0000 UTC m=+0.892257432,LastTimestamp:2024-12-13 01:28:32.831720822 +0000 UTC m=+0.948301255,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.53,}" Dec 13 01:28:33.034711 kubelet[1792]: E1213 01:28:33.034552 1792 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.53\" not found" node="10.0.0.53" Dec 13 01:28:33.038620 kubelet[1792]: I1213 01:28:33.038579 1792 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.53" Dec 13 01:28:33.583739 kubelet[1792]: I1213 01:28:33.583640 1792 kubelet_node_status.go:76] 
"Successfully registered node" node="10.0.0.53" Dec 13 01:28:33.585890 kubelet[1792]: I1213 01:28:33.585860 1792 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Dec 13 01:28:33.586730 containerd[1466]: time="2024-12-13T01:28:33.586648891Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 01:28:33.587159 kubelet[1792]: I1213 01:28:33.586960 1792 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Dec 13 01:28:33.607293 kubelet[1792]: I1213 01:28:33.607248 1792 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:28:33.609082 kubelet[1792]: I1213 01:28:33.609038 1792 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 01:28:33.609194 kubelet[1792]: I1213 01:28:33.609093 1792 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:28:33.609194 kubelet[1792]: I1213 01:28:33.609125 1792 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 01:28:33.609275 kubelet[1792]: E1213 01:28:33.609255 1792 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:28:33.645577 kubelet[1792]: I1213 01:28:33.645513 1792 policy_none.go:49] "None policy: Start" Dec 13 01:28:33.646399 kubelet[1792]: I1213 01:28:33.646379 1792 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:28:33.646461 kubelet[1792]: I1213 01:28:33.646410 1792 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:28:33.677740 kubelet[1792]: E1213 01:28:33.677687 1792 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.53\" not found" Dec 13 01:28:33.679770 kubelet[1792]: I1213 01:28:33.679735 1792 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 13 01:28:33.679907 kubelet[1792]: W1213 01:28:33.679880 1792 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.Service ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Dec 13 01:28:33.679907 kubelet[1792]: W1213 01:28:33.679880 1792 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.RuntimeClass ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Dec 13 01:28:33.680091 kubelet[1792]: E1213 01:28:33.680058 1792 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.47:6443/api/v1/namespaces/default/events\": read tcp 10.0.0.53:60448->10.0.0.47:6443: use of closed network connection" event="&Event{ObjectMeta:{10.0.0.53.1810984cf19555f5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.53,UID:10.0.0.53,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 10.0.0.53 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:10.0.0.53,},FirstTimestamp:2024-12-13 01:28:32.775673333 +0000 UTC m=+0.892253756,LastTimestamp:2024-12-13 01:28:33.038518291 +0000 UTC m=+1.155098714,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.53,}" Dec 13 01:28:33.710315 kubelet[1792]: E1213 01:28:33.710242 1792 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 01:28:33.713568 kubelet[1792]: E1213 01:28:33.713509 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:28:33.759307 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 01:28:33.777824 kubelet[1792]: E1213 01:28:33.777769 1792 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.53\" not found" Dec 13 01:28:33.780461 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 01:28:33.785216 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 13 01:28:33.794072 kubelet[1792]: I1213 01:28:33.793957 1792 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:28:33.794418 kubelet[1792]: I1213 01:28:33.794388 1792 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:28:33.796168 kubelet[1792]: E1213 01:28:33.795876 1792 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.53\" not found" Dec 13 01:28:33.878620 kubelet[1792]: E1213 01:28:33.878439 1792 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.53\" not found" Dec 13 01:28:33.979777 kubelet[1792]: E1213 01:28:33.979663 1792 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.53\" not found" Dec 13 01:28:34.080404 kubelet[1792]: E1213 01:28:34.080338 1792 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.53\" not found" Dec 13 01:28:34.181967 kubelet[1792]: E1213 01:28:34.181762 1792 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.53\" not found" Dec 13 01:28:34.323028 sudo[1658]: pam_unix(sudo:session): session closed for user root Dec 13 01:28:34.325594 sshd[1655]: pam_unix(sshd:session): session closed for user core Dec 13 01:28:34.330872 systemd[1]: sshd@8-10.0.0.53:22-10.0.0.1:35594.service: Deactivated successfully. Dec 13 01:28:34.332994 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 01:28:34.333198 systemd[1]: session-9.scope: Consumed 1.255s CPU time, 108.7M memory peak, 0B memory swap peak. Dec 13 01:28:34.333985 systemd-logind[1454]: Session 9 logged out. Waiting for processes to exit. Dec 13 01:28:34.335338 systemd-logind[1454]: Removed session 9. 
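The kubelet's first status reports above were rejected because the API server saw them as `system:anonymous`; once the bootstrap credentials were picked up ("Certificate rotation detected", "Successfully registered node"), the forbidden errors stop. As an illustration of that class of response, the minimal Go sketch below probes the same API server endpoint (10.0.0.47:6443, taken from the log) without credentials; the `/api/v1/nodes` path and the skipped TLS verification are assumptions made only for this sketch.

```go
// anonprobe.go - minimal sketch: show what an unauthenticated request to the
// kube-apiserver returns (typically 401 or 403, matching the "forbidden"
// errors logged above). InsecureSkipVerify is for illustration only.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{
		Transport: &http.Transport{
			// Illustration only: server identity is not verified here.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	// Same API server address the kubelet talks to in the log above.
	resp, err := client.Get("https://10.0.0.47:6443/api/v1/nodes")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(io.LimitReader(resp.Body, 512))
	fmt.Println("status:", resp.StatusCode) // expect 401/403 for anonymous access
	fmt.Printf("body: %s\n", body)
}
```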
Dec 13 01:28:34.714372 kubelet[1792]: I1213 01:28:34.714302 1792 apiserver.go:52] "Watching apiserver" Dec 13 01:28:34.714588 kubelet[1792]: E1213 01:28:34.714342 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:28:34.719832 kubelet[1792]: I1213 01:28:34.719774 1792 topology_manager.go:215] "Topology Admit Handler" podUID="a5934e4d-ce01-4a3b-a088-80117779d8e0" podNamespace="kube-system" podName="cilium-rdh78" Dec 13 01:28:34.719983 kubelet[1792]: I1213 01:28:34.719958 1792 topology_manager.go:215] "Topology Admit Handler" podUID="7547f53c-4d71-4d5c-8b60-de9bf7cb8f1b" podNamespace="kube-system" podName="kube-proxy-zm7xs" Dec 13 01:28:34.731140 kubelet[1792]: I1213 01:28:34.731085 1792 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 01:28:34.731107 systemd[1]: Created slice kubepods-besteffort-pod7547f53c_4d71_4d5c_8b60_de9bf7cb8f1b.slice - libcontainer container kubepods-besteffort-pod7547f53c_4d71_4d5c_8b60_de9bf7cb8f1b.slice. Dec 13 01:28:34.741024 kubelet[1792]: I1213 01:28:34.740974 1792 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7547f53c-4d71-4d5c-8b60-de9bf7cb8f1b-lib-modules\") pod \"kube-proxy-zm7xs\" (UID: \"7547f53c-4d71-4d5c-8b60-de9bf7cb8f1b\") " pod="kube-system/kube-proxy-zm7xs" Dec 13 01:28:34.741148 kubelet[1792]: I1213 01:28:34.741032 1792 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpvnq\" (UniqueName: \"kubernetes.io/projected/a5934e4d-ce01-4a3b-a088-80117779d8e0-kube-api-access-xpvnq\") pod \"cilium-rdh78\" (UID: \"a5934e4d-ce01-4a3b-a088-80117779d8e0\") " pod="kube-system/cilium-rdh78" Dec 13 01:28:34.741148 kubelet[1792]: I1213 01:28:34.741068 1792 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a5934e4d-ce01-4a3b-a088-80117779d8e0-bpf-maps\") pod \"cilium-rdh78\" (UID: \"a5934e4d-ce01-4a3b-a088-80117779d8e0\") " pod="kube-system/cilium-rdh78" Dec 13 01:28:34.741148 kubelet[1792]: I1213 01:28:34.741098 1792 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a5934e4d-ce01-4a3b-a088-80117779d8e0-lib-modules\") pod \"cilium-rdh78\" (UID: \"a5934e4d-ce01-4a3b-a088-80117779d8e0\") " pod="kube-system/cilium-rdh78" Dec 13 01:28:34.741148 kubelet[1792]: I1213 01:28:34.741123 1792 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a5934e4d-ce01-4a3b-a088-80117779d8e0-xtables-lock\") pod \"cilium-rdh78\" (UID: \"a5934e4d-ce01-4a3b-a088-80117779d8e0\") " pod="kube-system/cilium-rdh78" Dec 13 01:28:34.741278 kubelet[1792]: I1213 01:28:34.741156 1792 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a5934e4d-ce01-4a3b-a088-80117779d8e0-host-proc-sys-net\") pod \"cilium-rdh78\" (UID: \"a5934e4d-ce01-4a3b-a088-80117779d8e0\") " pod="kube-system/cilium-rdh78" Dec 13 01:28:34.741278 kubelet[1792]: I1213 01:28:34.741182 1792 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: 
\"kubernetes.io/configmap/7547f53c-4d71-4d5c-8b60-de9bf7cb8f1b-kube-proxy\") pod \"kube-proxy-zm7xs\" (UID: \"7547f53c-4d71-4d5c-8b60-de9bf7cb8f1b\") " pod="kube-system/kube-proxy-zm7xs" Dec 13 01:28:34.741278 kubelet[1792]: I1213 01:28:34.741209 1792 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a5934e4d-ce01-4a3b-a088-80117779d8e0-etc-cni-netd\") pod \"cilium-rdh78\" (UID: \"a5934e4d-ce01-4a3b-a088-80117779d8e0\") " pod="kube-system/cilium-rdh78" Dec 13 01:28:34.741278 kubelet[1792]: I1213 01:28:34.741234 1792 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a5934e4d-ce01-4a3b-a088-80117779d8e0-cilium-config-path\") pod \"cilium-rdh78\" (UID: \"a5934e4d-ce01-4a3b-a088-80117779d8e0\") " pod="kube-system/cilium-rdh78" Dec 13 01:28:34.741278 kubelet[1792]: I1213 01:28:34.741258 1792 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a5934e4d-ce01-4a3b-a088-80117779d8e0-hostproc\") pod \"cilium-rdh78\" (UID: \"a5934e4d-ce01-4a3b-a088-80117779d8e0\") " pod="kube-system/cilium-rdh78" Dec 13 01:28:34.741427 kubelet[1792]: I1213 01:28:34.741281 1792 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a5934e4d-ce01-4a3b-a088-80117779d8e0-cilium-cgroup\") pod \"cilium-rdh78\" (UID: \"a5934e4d-ce01-4a3b-a088-80117779d8e0\") " pod="kube-system/cilium-rdh78" Dec 13 01:28:34.741427 kubelet[1792]: I1213 01:28:34.741305 1792 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a5934e4d-ce01-4a3b-a088-80117779d8e0-cni-path\") pod \"cilium-rdh78\" (UID: \"a5934e4d-ce01-4a3b-a088-80117779d8e0\") " pod="kube-system/cilium-rdh78" Dec 13 01:28:34.741427 kubelet[1792]: I1213 01:28:34.741329 1792 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a5934e4d-ce01-4a3b-a088-80117779d8e0-clustermesh-secrets\") pod \"cilium-rdh78\" (UID: \"a5934e4d-ce01-4a3b-a088-80117779d8e0\") " pod="kube-system/cilium-rdh78" Dec 13 01:28:34.741427 kubelet[1792]: I1213 01:28:34.741353 1792 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a5934e4d-ce01-4a3b-a088-80117779d8e0-host-proc-sys-kernel\") pod \"cilium-rdh78\" (UID: \"a5934e4d-ce01-4a3b-a088-80117779d8e0\") " pod="kube-system/cilium-rdh78" Dec 13 01:28:34.741427 kubelet[1792]: I1213 01:28:34.741378 1792 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7547f53c-4d71-4d5c-8b60-de9bf7cb8f1b-xtables-lock\") pod \"kube-proxy-zm7xs\" (UID: \"7547f53c-4d71-4d5c-8b60-de9bf7cb8f1b\") " pod="kube-system/kube-proxy-zm7xs" Dec 13 01:28:34.741608 kubelet[1792]: I1213 01:28:34.741404 1792 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lpbt\" (UniqueName: \"kubernetes.io/projected/7547f53c-4d71-4d5c-8b60-de9bf7cb8f1b-kube-api-access-6lpbt\") pod \"kube-proxy-zm7xs\" (UID: \"7547f53c-4d71-4d5c-8b60-de9bf7cb8f1b\") " 
pod="kube-system/kube-proxy-zm7xs" Dec 13 01:28:34.741608 kubelet[1792]: I1213 01:28:34.741428 1792 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a5934e4d-ce01-4a3b-a088-80117779d8e0-cilium-run\") pod \"cilium-rdh78\" (UID: \"a5934e4d-ce01-4a3b-a088-80117779d8e0\") " pod="kube-system/cilium-rdh78" Dec 13 01:28:34.741608 kubelet[1792]: I1213 01:28:34.741454 1792 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a5934e4d-ce01-4a3b-a088-80117779d8e0-hubble-tls\") pod \"cilium-rdh78\" (UID: \"a5934e4d-ce01-4a3b-a088-80117779d8e0\") " pod="kube-system/cilium-rdh78" Dec 13 01:28:34.742023 systemd[1]: Created slice kubepods-burstable-poda5934e4d_ce01_4a3b_a088_80117779d8e0.slice - libcontainer container kubepods-burstable-poda5934e4d_ce01_4a3b_a088_80117779d8e0.slice. Dec 13 01:28:35.040459 kubelet[1792]: E1213 01:28:35.040312 1792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:35.041412 containerd[1466]: time="2024-12-13T01:28:35.041348071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zm7xs,Uid:7547f53c-4d71-4d5c-8b60-de9bf7cb8f1b,Namespace:kube-system,Attempt:0,}" Dec 13 01:28:35.055811 kubelet[1792]: E1213 01:28:35.055756 1792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:35.056444 containerd[1466]: time="2024-12-13T01:28:35.056402734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rdh78,Uid:a5934e4d-ce01-4a3b-a088-80117779d8e0,Namespace:kube-system,Attempt:0,}" Dec 13 01:28:35.715101 kubelet[1792]: E1213 01:28:35.715028 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:28:35.741175 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2339273273.mount: Deactivated successfully. 
Dec 13 01:28:35.798029 containerd[1466]: time="2024-12-13T01:28:35.797884197Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:28:35.799366 containerd[1466]: time="2024-12-13T01:28:35.799268703Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:28:35.800240 containerd[1466]: time="2024-12-13T01:28:35.800147171Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Dec 13 01:28:35.802447 containerd[1466]: time="2024-12-13T01:28:35.802354310Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:28:35.804004 containerd[1466]: time="2024-12-13T01:28:35.803936006Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:28:35.807468 containerd[1466]: time="2024-12-13T01:28:35.807403519Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:28:35.808712 containerd[1466]: time="2024-12-13T01:28:35.808662931Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 767.104735ms" Dec 13 01:28:35.811706 containerd[1466]: time="2024-12-13T01:28:35.811631808Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 755.109309ms" Dec 13 01:28:36.022688 containerd[1466]: time="2024-12-13T01:28:36.021683731Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:28:36.022688 containerd[1466]: time="2024-12-13T01:28:36.021767027Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:28:36.022688 containerd[1466]: time="2024-12-13T01:28:36.021806291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:36.022688 containerd[1466]: time="2024-12-13T01:28:36.021942666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:36.026847 containerd[1466]: time="2024-12-13T01:28:36.026705268Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:28:36.026847 containerd[1466]: time="2024-12-13T01:28:36.026792171Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:28:36.026847 containerd[1466]: time="2024-12-13T01:28:36.026832256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:36.028556 containerd[1466]: time="2024-12-13T01:28:36.026971908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:36.260756 systemd[1]: Started cri-containerd-591dbb40f05f199c00cb95a082e562c023bdd0d92bcd84d784e02a328c3b2ecc.scope - libcontainer container 591dbb40f05f199c00cb95a082e562c023bdd0d92bcd84d784e02a328c3b2ecc. Dec 13 01:28:36.267026 systemd[1]: Started cri-containerd-4a0c41719a3a5df5a6f61bc563f569bd6ca34505a675d822fac81f3fbb043785.scope - libcontainer container 4a0c41719a3a5df5a6f61bc563f569bd6ca34505a675d822fac81f3fbb043785. Dec 13 01:28:36.361819 containerd[1466]: time="2024-12-13T01:28:36.361660699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zm7xs,Uid:7547f53c-4d71-4d5c-8b60-de9bf7cb8f1b,Namespace:kube-system,Attempt:0,} returns sandbox id \"591dbb40f05f199c00cb95a082e562c023bdd0d92bcd84d784e02a328c3b2ecc\"" Dec 13 01:28:36.363354 kubelet[1792]: E1213 01:28:36.363330 1792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:36.365180 containerd[1466]: time="2024-12-13T01:28:36.365144402Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 01:28:36.373597 containerd[1466]: time="2024-12-13T01:28:36.373457252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rdh78,Uid:a5934e4d-ce01-4a3b-a088-80117779d8e0,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a0c41719a3a5df5a6f61bc563f569bd6ca34505a675d822fac81f3fbb043785\"" Dec 13 01:28:36.374414 kubelet[1792]: E1213 01:28:36.374381 1792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:36.715917 kubelet[1792]: E1213 01:28:36.715755 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:28:37.715998 kubelet[1792]: E1213 01:28:37.715933 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:28:37.797789 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3231088441.mount: Deactivated successfully. 
Dec 13 01:28:38.327311 containerd[1466]: time="2024-12-13T01:28:38.327237587Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:38.328031 containerd[1466]: time="2024-12-13T01:28:38.327984037Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619958" Dec 13 01:28:38.329445 containerd[1466]: time="2024-12-13T01:28:38.329328048Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:38.331681 containerd[1466]: time="2024-12-13T01:28:38.331639933Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:38.332346 containerd[1466]: time="2024-12-13T01:28:38.332305441Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 1.96711328s" Dec 13 01:28:38.332380 containerd[1466]: time="2024-12-13T01:28:38.332345146Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Dec 13 01:28:38.333208 containerd[1466]: time="2024-12-13T01:28:38.333173740Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 01:28:38.334558 containerd[1466]: time="2024-12-13T01:28:38.334522529Z" level=info msg="CreateContainer within sandbox \"591dbb40f05f199c00cb95a082e562c023bdd0d92bcd84d784e02a328c3b2ecc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 01:28:38.355081 containerd[1466]: time="2024-12-13T01:28:38.355024758Z" level=info msg="CreateContainer within sandbox \"591dbb40f05f199c00cb95a082e562c023bdd0d92bcd84d784e02a328c3b2ecc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fe79aca8372cc57628595a6e9f41be84c6a6a593445c8ff87e08f41b64e1d278\"" Dec 13 01:28:38.355956 containerd[1466]: time="2024-12-13T01:28:38.355904117Z" level=info msg="StartContainer for \"fe79aca8372cc57628595a6e9f41be84c6a6a593445c8ff87e08f41b64e1d278\"" Dec 13 01:28:38.397703 systemd[1]: Started cri-containerd-fe79aca8372cc57628595a6e9f41be84c6a6a593445c8ff87e08f41b64e1d278.scope - libcontainer container fe79aca8372cc57628595a6e9f41be84c6a6a593445c8ff87e08f41b64e1d278. 
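The kube-proxy pull record that follows reports both the image size (28618977 bytes) and the wall time (1.96711328s); the short sketch below, added purely for illustration, turns those two logged fields into an effective throughput figure.

```go
// pullrate.go - sketch: effective throughput for the kube-proxy image pull,
// using the size and duration values copied from the log record below.
package main

import (
	"fmt"
	"time"
)

func main() {
	const sizeBytes = 28618977 // size reported by containerd
	dur, err := time.ParseDuration("1.96711328s")
	if err != nil {
		panic(err)
	}
	mb := float64(sizeBytes) / (1024 * 1024)
	fmt.Printf("pulled %.1f MiB in %s (%.1f MiB/s)\n", mb, dur, mb/dur.Seconds())
}
```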
Dec 13 01:28:38.461815 containerd[1466]: time="2024-12-13T01:28:38.461759955Z" level=info msg="StartContainer for \"fe79aca8372cc57628595a6e9f41be84c6a6a593445c8ff87e08f41b64e1d278\" returns successfully" Dec 13 01:28:38.620999 kubelet[1792]: E1213 01:28:38.620883 1792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:38.633253 kubelet[1792]: I1213 01:28:38.633217 1792 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-zm7xs" podStartSLOduration=3.664929827 podStartE2EDuration="5.633155783s" podCreationTimestamp="2024-12-13 01:28:33 +0000 UTC" firstStartedPulling="2024-12-13 01:28:36.364611212 +0000 UTC m=+4.481191625" lastFinishedPulling="2024-12-13 01:28:38.332837158 +0000 UTC m=+6.449417581" observedRunningTime="2024-12-13 01:28:38.632873804 +0000 UTC m=+6.749454237" watchObservedRunningTime="2024-12-13 01:28:38.633155783 +0000 UTC m=+6.749736196" Dec 13 01:28:38.716444 kubelet[1792]: E1213 01:28:38.716366 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:28:39.623446 kubelet[1792]: E1213 01:28:39.623396 1792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:39.717229 kubelet[1792]: E1213 01:28:39.717162 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:28:40.717535 kubelet[1792]: E1213 01:28:40.717313 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:28:41.718278 kubelet[1792]: E1213 01:28:41.718179 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:28:42.719516 kubelet[1792]: E1213 01:28:42.718514 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:28:43.719652 kubelet[1792]: E1213 01:28:43.719603 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:28:44.661081 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount55624494.mount: Deactivated successfully. 
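The pod_startup_latency_tracker line above distinguishes podStartSLOduration (which excludes image pulling) from podStartE2EDuration. The sketch below recomputes both from the timestamps in that line; the relation shown (SLO duration = end-to-end duration minus time spent pulling) is an interpretation that matches the logged numbers to within rounding, not something the log states explicitly.

```go
// podlatency.go - sketch: recompute the kube-proxy startup figures from the
// timestamps in the tracker line above.
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2024-12-13 01:28:33 +0000 UTC")
	firstPull := mustParse("2024-12-13 01:28:36.364611212 +0000 UTC")
	lastPull := mustParse("2024-12-13 01:28:38.332837158 +0000 UTC")
	observed := mustParse("2024-12-13 01:28:38.633155783 +0000 UTC")

	e2e := observed.Sub(created)   // ~5.633s, as logged
	pull := lastPull.Sub(firstPull) // ~1.968s spent pulling
	fmt.Println("E2E:", e2e)
	fmt.Println("pulling:", pull)
	fmt.Println("SLO:", e2e-pull) // ~3.665s, as logged
}
```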
Dec 13 01:28:44.720761 kubelet[1792]: E1213 01:28:44.720682 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:28:45.720916 kubelet[1792]: E1213 01:28:45.720848 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:28:46.722040 kubelet[1792]: E1213 01:28:46.721963 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:28:47.722192 kubelet[1792]: E1213 01:28:47.722143 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:28:48.454299 containerd[1466]: time="2024-12-13T01:28:48.454214524Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:48.455690 containerd[1466]: time="2024-12-13T01:28:48.455616373Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735335" Dec 13 01:28:48.457349 containerd[1466]: time="2024-12-13T01:28:48.457298477Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:48.460294 containerd[1466]: time="2024-12-13T01:28:48.460209506Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.126897328s" Dec 13 01:28:48.460294 containerd[1466]: time="2024-12-13T01:28:48.460282694Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 01:28:48.462805 containerd[1466]: time="2024-12-13T01:28:48.462731306Z" level=info msg="CreateContainer within sandbox \"4a0c41719a3a5df5a6f61bc563f569bd6ca34505a675d822fac81f3fbb043785\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 01:28:48.487947 containerd[1466]: time="2024-12-13T01:28:48.487859310Z" level=info msg="CreateContainer within sandbox \"4a0c41719a3a5df5a6f61bc563f569bd6ca34505a675d822fac81f3fbb043785\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1cefee057a92c5333bfea7231db0ba8f4308e2d457aac8acd98c90440680d5da\"" Dec 13 01:28:48.488937 containerd[1466]: time="2024-12-13T01:28:48.488894451Z" level=info msg="StartContainer for \"1cefee057a92c5333bfea7231db0ba8f4308e2d457aac8acd98c90440680d5da\"" Dec 13 01:28:48.526850 systemd[1]: Started cri-containerd-1cefee057a92c5333bfea7231db0ba8f4308e2d457aac8acd98c90440680d5da.scope - libcontainer container 1cefee057a92c5333bfea7231db0ba8f4308e2d457aac8acd98c90440680d5da. 
Dec 13 01:28:48.558373 containerd[1466]: time="2024-12-13T01:28:48.558294511Z" level=info msg="StartContainer for \"1cefee057a92c5333bfea7231db0ba8f4308e2d457aac8acd98c90440680d5da\" returns successfully" Dec 13 01:28:48.573132 systemd[1]: cri-containerd-1cefee057a92c5333bfea7231db0ba8f4308e2d457aac8acd98c90440680d5da.scope: Deactivated successfully. Dec 13 01:28:48.723125 kubelet[1792]: E1213 01:28:48.722874 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:28:48.736101 kubelet[1792]: E1213 01:28:48.736040 1792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:49.270259 containerd[1466]: time="2024-12-13T01:28:49.270150307Z" level=info msg="shim disconnected" id=1cefee057a92c5333bfea7231db0ba8f4308e2d457aac8acd98c90440680d5da namespace=k8s.io Dec 13 01:28:49.270259 containerd[1466]: time="2024-12-13T01:28:49.270250769Z" level=warning msg="cleaning up after shim disconnected" id=1cefee057a92c5333bfea7231db0ba8f4308e2d457aac8acd98c90440680d5da namespace=k8s.io Dec 13 01:28:49.270259 containerd[1466]: time="2024-12-13T01:28:49.270266159Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:28:49.477449 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1cefee057a92c5333bfea7231db0ba8f4308e2d457aac8acd98c90440680d5da-rootfs.mount: Deactivated successfully. Dec 13 01:28:49.723433 kubelet[1792]: E1213 01:28:49.723312 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:28:49.742142 kubelet[1792]: E1213 01:28:49.742083 1792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:49.745172 containerd[1466]: time="2024-12-13T01:28:49.745086998Z" level=info msg="CreateContainer within sandbox \"4a0c41719a3a5df5a6f61bc563f569bd6ca34505a675d822fac81f3fbb043785\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 01:28:49.768953 containerd[1466]: time="2024-12-13T01:28:49.768894374Z" level=info msg="CreateContainer within sandbox \"4a0c41719a3a5df5a6f61bc563f569bd6ca34505a675d822fac81f3fbb043785\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a14ea8a06ba25ceddcf4f943f7403a8b8f959faaa823cb2e5f0c5ad4e0817c24\"" Dec 13 01:28:49.770145 containerd[1466]: time="2024-12-13T01:28:49.770112204Z" level=info msg="StartContainer for \"a14ea8a06ba25ceddcf4f943f7403a8b8f959faaa823cb2e5f0c5ad4e0817c24\"" Dec 13 01:28:49.805753 systemd[1]: Started cri-containerd-a14ea8a06ba25ceddcf4f943f7403a8b8f959faaa823cb2e5f0c5ad4e0817c24.scope - libcontainer container a14ea8a06ba25ceddcf4f943f7403a8b8f959faaa823cb2e5f0c5ad4e0817c24. Dec 13 01:28:49.835607 containerd[1466]: time="2024-12-13T01:28:49.835431808Z" level=info msg="StartContainer for \"a14ea8a06ba25ceddcf4f943f7403a8b8f959faaa823cb2e5f0c5ad4e0817c24\" returns successfully" Dec 13 01:28:49.847452 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:28:49.847724 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:28:49.847798 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:28:49.854895 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Dec 13 01:28:49.855227 systemd[1]: cri-containerd-a14ea8a06ba25ceddcf4f943f7403a8b8f959faaa823cb2e5f0c5ad4e0817c24.scope: Deactivated successfully. Dec 13 01:28:49.872681 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:28:49.887389 containerd[1466]: time="2024-12-13T01:28:49.887279023Z" level=info msg="shim disconnected" id=a14ea8a06ba25ceddcf4f943f7403a8b8f959faaa823cb2e5f0c5ad4e0817c24 namespace=k8s.io Dec 13 01:28:49.887389 containerd[1466]: time="2024-12-13T01:28:49.887364987Z" level=warning msg="cleaning up after shim disconnected" id=a14ea8a06ba25ceddcf4f943f7403a8b8f959faaa823cb2e5f0c5ad4e0817c24 namespace=k8s.io Dec 13 01:28:49.887389 containerd[1466]: time="2024-12-13T01:28:49.887377131Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:28:50.477952 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a14ea8a06ba25ceddcf4f943f7403a8b8f959faaa823cb2e5f0c5ad4e0817c24-rootfs.mount: Deactivated successfully. Dec 13 01:28:50.724242 kubelet[1792]: E1213 01:28:50.724190 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:28:50.746180 kubelet[1792]: E1213 01:28:50.746014 1792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:50.748769 containerd[1466]: time="2024-12-13T01:28:50.748705109Z" level=info msg="CreateContainer within sandbox \"4a0c41719a3a5df5a6f61bc563f569bd6ca34505a675d822fac81f3fbb043785\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 01:28:50.802636 containerd[1466]: time="2024-12-13T01:28:50.802547657Z" level=info msg="CreateContainer within sandbox \"4a0c41719a3a5df5a6f61bc563f569bd6ca34505a675d822fac81f3fbb043785\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4ebf102b411b9f60fa26501083a19255628429614b23b2cab816f1980944ea6f\"" Dec 13 01:28:50.803828 containerd[1466]: time="2024-12-13T01:28:50.803754784Z" level=info msg="StartContainer for \"4ebf102b411b9f60fa26501083a19255628429614b23b2cab816f1980944ea6f\"" Dec 13 01:28:50.845758 systemd[1]: Started cri-containerd-4ebf102b411b9f60fa26501083a19255628429614b23b2cab816f1980944ea6f.scope - libcontainer container 4ebf102b411b9f60fa26501083a19255628429614b23b2cab816f1980944ea6f. Dec 13 01:28:50.886692 containerd[1466]: time="2024-12-13T01:28:50.886622097Z" level=info msg="StartContainer for \"4ebf102b411b9f60fa26501083a19255628429614b23b2cab816f1980944ea6f\" returns successfully" Dec 13 01:28:50.886721 systemd[1]: cri-containerd-4ebf102b411b9f60fa26501083a19255628429614b23b2cab816f1980944ea6f.scope: Deactivated successfully. Dec 13 01:28:50.928691 containerd[1466]: time="2024-12-13T01:28:50.927283624Z" level=info msg="shim disconnected" id=4ebf102b411b9f60fa26501083a19255628429614b23b2cab816f1980944ea6f namespace=k8s.io Dec 13 01:28:50.928691 containerd[1466]: time="2024-12-13T01:28:50.927360300Z" level=warning msg="cleaning up after shim disconnected" id=4ebf102b411b9f60fa26501083a19255628429614b23b2cab816f1980944ea6f namespace=k8s.io Dec 13 01:28:50.928691 containerd[1466]: time="2024-12-13T01:28:50.927372474Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:28:51.479061 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4ebf102b411b9f60fa26501083a19255628429614b23b2cab816f1980944ea6f-rootfs.mount: Deactivated successfully. 
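The container that just finished above ("mount-bpf-fs", id 4ebf…) is the Cilium init step responsible for making sure a BPF filesystem is mounted. As an aside, a quick way to verify the result from Go is to scan /proc/mounts for a `bpf` entry, as sketched below; the conventional mount point /sys/fs/bpf is an assumption here, not something reported in this log.

```go
// bpfcheck.go - sketch: confirm a BPF filesystem is mounted, which is what the
// "mount-bpf-fs" init container above is responsible for.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/mounts")
	if err != nil {
		fmt.Println("cannot read /proc/mounts:", err)
		return
	}
	defer f.Close()

	found := false
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// /proc/mounts fields: device mountpoint fstype options dump pass
		fields := strings.Fields(sc.Text())
		if len(fields) >= 3 && fields[2] == "bpf" {
			found = true
			fmt.Println("bpf filesystem mounted at", fields[1])
		}
	}
	if !found {
		fmt.Println("no bpf filesystem mounted (commonly expected at /sys/fs/bpf)")
	}
}
```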
Dec 13 01:28:51.725139 kubelet[1792]: E1213 01:28:51.725003 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:28:51.760259 kubelet[1792]: E1213 01:28:51.760098 1792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:51.763784 containerd[1466]: time="2024-12-13T01:28:51.763712991Z" level=info msg="CreateContainer within sandbox \"4a0c41719a3a5df5a6f61bc563f569bd6ca34505a675d822fac81f3fbb043785\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 01:28:51.824163 containerd[1466]: time="2024-12-13T01:28:51.824083576Z" level=info msg="CreateContainer within sandbox \"4a0c41719a3a5df5a6f61bc563f569bd6ca34505a675d822fac81f3fbb043785\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"84d22679f8df122b85da52a821cb4615034ee6add872d798fde94ef078dca0ea\"" Dec 13 01:28:51.825069 containerd[1466]: time="2024-12-13T01:28:51.825008982Z" level=info msg="StartContainer for \"84d22679f8df122b85da52a821cb4615034ee6add872d798fde94ef078dca0ea\"" Dec 13 01:28:51.864771 systemd[1]: Started cri-containerd-84d22679f8df122b85da52a821cb4615034ee6add872d798fde94ef078dca0ea.scope - libcontainer container 84d22679f8df122b85da52a821cb4615034ee6add872d798fde94ef078dca0ea. Dec 13 01:28:51.901126 systemd[1]: cri-containerd-84d22679f8df122b85da52a821cb4615034ee6add872d798fde94ef078dca0ea.scope: Deactivated successfully. Dec 13 01:28:51.907230 containerd[1466]: time="2024-12-13T01:28:51.906962982Z" level=info msg="StartContainer for \"84d22679f8df122b85da52a821cb4615034ee6add872d798fde94ef078dca0ea\" returns successfully" Dec 13 01:28:51.940908 containerd[1466]: time="2024-12-13T01:28:51.940786823Z" level=info msg="shim disconnected" id=84d22679f8df122b85da52a821cb4615034ee6add872d798fde94ef078dca0ea namespace=k8s.io Dec 13 01:28:51.940908 containerd[1466]: time="2024-12-13T01:28:51.940871293Z" level=warning msg="cleaning up after shim disconnected" id=84d22679f8df122b85da52a821cb4615034ee6add872d798fde94ef078dca0ea namespace=k8s.io Dec 13 01:28:51.940908 containerd[1466]: time="2024-12-13T01:28:51.940884940Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:28:52.478992 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-84d22679f8df122b85da52a821cb4615034ee6add872d798fde94ef078dca0ea-rootfs.mount: Deactivated successfully. 
Dec 13 01:28:52.713601 kubelet[1792]: E1213 01:28:52.713532 1792 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:28:52.725865 kubelet[1792]: E1213 01:28:52.725766 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:28:52.765855 kubelet[1792]: E1213 01:28:52.765728 1792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:52.769118 containerd[1466]: time="2024-12-13T01:28:52.768901304Z" level=info msg="CreateContainer within sandbox \"4a0c41719a3a5df5a6f61bc563f569bd6ca34505a675d822fac81f3fbb043785\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 01:28:52.917563 containerd[1466]: time="2024-12-13T01:28:52.917457475Z" level=info msg="CreateContainer within sandbox \"4a0c41719a3a5df5a6f61bc563f569bd6ca34505a675d822fac81f3fbb043785\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a49b5bf651d92e96addf83a5696f324720beb0d0a36bc9359dc0716f4f7c1232\"" Dec 13 01:28:52.918498 containerd[1466]: time="2024-12-13T01:28:52.918432003Z" level=info msg="StartContainer for \"a49b5bf651d92e96addf83a5696f324720beb0d0a36bc9359dc0716f4f7c1232\"" Dec 13 01:28:52.963886 systemd[1]: Started cri-containerd-a49b5bf651d92e96addf83a5696f324720beb0d0a36bc9359dc0716f4f7c1232.scope - libcontainer container a49b5bf651d92e96addf83a5696f324720beb0d0a36bc9359dc0716f4f7c1232. Dec 13 01:28:53.009668 containerd[1466]: time="2024-12-13T01:28:53.009594802Z" level=info msg="StartContainer for \"a49b5bf651d92e96addf83a5696f324720beb0d0a36bc9359dc0716f4f7c1232\" returns successfully" Dec 13 01:28:53.159258 kubelet[1792]: I1213 01:28:53.159119 1792 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 01:28:53.615812 kernel: Initializing XFRM netlink socket Dec 13 01:28:53.727018 kubelet[1792]: E1213 01:28:53.726927 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:28:53.773395 kubelet[1792]: E1213 01:28:53.773312 1792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:53.804764 kubelet[1792]: I1213 01:28:53.804705 1792 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-rdh78" podStartSLOduration=8.719039368 podStartE2EDuration="20.80463825s" podCreationTimestamp="2024-12-13 01:28:33 +0000 UTC" firstStartedPulling="2024-12-13 01:28:36.375076548 +0000 UTC m=+4.491656971" lastFinishedPulling="2024-12-13 01:28:48.46067543 +0000 UTC m=+16.577255853" observedRunningTime="2024-12-13 01:28:53.80398663 +0000 UTC m=+21.920567084" watchObservedRunningTime="2024-12-13 01:28:53.80463825 +0000 UTC m=+21.921218673" Dec 13 01:28:54.727434 kubelet[1792]: E1213 01:28:54.727359 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:28:54.777533 kubelet[1792]: E1213 01:28:54.777500 1792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:55.380244 systemd-networkd[1407]: cilium_host: Link UP Dec 13 01:28:55.380560 
systemd-networkd[1407]: cilium_net: Link UP Dec 13 01:28:55.380565 systemd-networkd[1407]: cilium_net: Gained carrier Dec 13 01:28:55.380846 systemd-networkd[1407]: cilium_host: Gained carrier Dec 13 01:28:55.381506 systemd-networkd[1407]: cilium_host: Gained IPv6LL Dec 13 01:28:55.572941 systemd-networkd[1407]: cilium_vxlan: Link UP Dec 13 01:28:55.572962 systemd-networkd[1407]: cilium_vxlan: Gained carrier Dec 13 01:28:55.728116 kubelet[1792]: E1213 01:28:55.728017 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:28:55.780735 kubelet[1792]: E1213 01:28:55.780699 1792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:55.913538 kernel: NET: Registered PF_ALG protocol family Dec 13 01:28:56.023432 kubelet[1792]: I1213 01:28:56.023244 1792 topology_manager.go:215] "Topology Admit Handler" podUID="d443ed64-6ddb-4946-a512-5fb78be426a5" podNamespace="default" podName="nginx-deployment-6d5f899847-dscnc" Dec 13 01:28:56.049517 systemd[1]: Created slice kubepods-besteffort-podd443ed64_6ddb_4946_a512_5fb78be426a5.slice - libcontainer container kubepods-besteffort-podd443ed64_6ddb_4946_a512_5fb78be426a5.slice. Dec 13 01:28:56.054294 systemd-networkd[1407]: cilium_net: Gained IPv6LL Dec 13 01:28:56.159566 kubelet[1792]: I1213 01:28:56.159468 1792 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bttlx\" (UniqueName: \"kubernetes.io/projected/d443ed64-6ddb-4946-a512-5fb78be426a5-kube-api-access-bttlx\") pod \"nginx-deployment-6d5f899847-dscnc\" (UID: \"d443ed64-6ddb-4946-a512-5fb78be426a5\") " pod="default/nginx-deployment-6d5f899847-dscnc" Dec 13 01:28:56.357519 containerd[1466]: time="2024-12-13T01:28:56.357350483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-dscnc,Uid:d443ed64-6ddb-4946-a512-5fb78be426a5,Namespace:default,Attempt:0,}" Dec 13 01:28:56.730733 kubelet[1792]: E1213 01:28:56.730544 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:28:57.458611 systemd-networkd[1407]: lxc_health: Link UP Dec 13 01:28:57.493802 systemd-networkd[1407]: lxc_health: Gained carrier Dec 13 01:28:57.651773 systemd-networkd[1407]: cilium_vxlan: Gained IPv6LL Dec 13 01:28:57.731712 kubelet[1792]: E1213 01:28:57.731393 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:28:57.991282 systemd-networkd[1407]: lxc91d863fdcf6b: Link UP Dec 13 01:28:58.013787 kernel: eth0: renamed from tmp25d2c Dec 13 01:28:58.020459 systemd-networkd[1407]: lxc91d863fdcf6b: Gained carrier Dec 13 01:28:58.731992 kubelet[1792]: E1213 01:28:58.731915 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:28:58.803362 systemd-networkd[1407]: lxc_health: Gained IPv6LL Dec 13 01:28:59.060002 kubelet[1792]: E1213 01:28:59.059470 1792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:59.123853 systemd-networkd[1407]: lxc91d863fdcf6b: Gained IPv6LL Dec 13 01:28:59.732713 kubelet[1792]: E1213 01:28:59.732622 1792 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:29:00.733283 kubelet[1792]: E1213 01:29:00.733186 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:29:00.880179 kubelet[1792]: I1213 01:29:00.878798 1792 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:29:00.880179 kubelet[1792]: E1213 01:29:00.879812 1792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:29:01.733821 kubelet[1792]: E1213 01:29:01.733730 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:29:01.814099 kubelet[1792]: E1213 01:29:01.813576 1792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:29:01.976513 update_engine[1458]: I20241213 01:29:01.975245 1458 update_attempter.cc:509] Updating boot flags... Dec 13 01:29:02.037607 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2870) Dec 13 01:29:02.132599 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2872) Dec 13 01:29:02.166730 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2872) Dec 13 01:29:02.734662 kubelet[1792]: E1213 01:29:02.734595 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:29:03.433948 containerd[1466]: time="2024-12-13T01:29:03.433724884Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:29:03.433948 containerd[1466]: time="2024-12-13T01:29:03.433915064Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:29:03.434532 containerd[1466]: time="2024-12-13T01:29:03.433974947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:29:03.434532 containerd[1466]: time="2024-12-13T01:29:03.434130050Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:29:03.451128 systemd[1]: run-containerd-runc-k8s.io-25d2c0337308ab34878e535d2b8ff5dc7f1e255087771b2580c0b6100a242893-runc.fQxNA9.mount: Deactivated successfully. Dec 13 01:29:03.461643 systemd[1]: Started cri-containerd-25d2c0337308ab34878e535d2b8ff5dc7f1e255087771b2580c0b6100a242893.scope - libcontainer container 25d2c0337308ab34878e535d2b8ff5dc7f1e255087771b2580c0b6100a242893. 
Dec 13 01:29:03.474117 systemd-resolved[1339]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:29:03.502129 containerd[1466]: time="2024-12-13T01:29:03.502075980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-dscnc,Uid:d443ed64-6ddb-4946-a512-5fb78be426a5,Namespace:default,Attempt:0,} returns sandbox id \"25d2c0337308ab34878e535d2b8ff5dc7f1e255087771b2580c0b6100a242893\"" Dec 13 01:29:03.504242 containerd[1466]: time="2024-12-13T01:29:03.504123381Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 01:29:03.735501 kubelet[1792]: E1213 01:29:03.735419 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:29:04.735866 kubelet[1792]: E1213 01:29:04.735764 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:29:05.736141 kubelet[1792]: E1213 01:29:05.736047 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:29:06.209500 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1575740188.mount: Deactivated successfully. Dec 13 01:29:06.736835 kubelet[1792]: E1213 01:29:06.736748 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:29:07.545947 containerd[1466]: time="2024-12-13T01:29:07.545856822Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:07.546690 containerd[1466]: time="2024-12-13T01:29:07.546633257Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71036027" Dec 13 01:29:07.549212 containerd[1466]: time="2024-12-13T01:29:07.549168862Z" level=info msg="ImageCreate event name:\"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:07.552573 containerd[1466]: time="2024-12-13T01:29:07.552524453Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:07.553913 containerd[1466]: time="2024-12-13T01:29:07.553868229Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\", size \"71035905\" in 4.049704032s" Dec 13 01:29:07.554002 containerd[1466]: time="2024-12-13T01:29:07.553919487Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 01:29:07.556692 containerd[1466]: time="2024-12-13T01:29:07.556627696Z" level=info msg="CreateContainer within sandbox \"25d2c0337308ab34878e535d2b8ff5dc7f1e255087771b2580c0b6100a242893\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Dec 13 01:29:07.574402 containerd[1466]: time="2024-12-13T01:29:07.574331995Z" level=info msg="CreateContainer within sandbox \"25d2c0337308ab34878e535d2b8ff5dc7f1e255087771b2580c0b6100a242893\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns 
container id \"cf2a0176428dbf020259c73c751c74a22107a7bcc1234e3fdca4ddd68847f670\"" Dec 13 01:29:07.575311 containerd[1466]: time="2024-12-13T01:29:07.575256069Z" level=info msg="StartContainer for \"cf2a0176428dbf020259c73c751c74a22107a7bcc1234e3fdca4ddd68847f670\"" Dec 13 01:29:07.737977 kubelet[1792]: E1213 01:29:07.737870 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:29:08.345666 systemd[1]: Started cri-containerd-cf2a0176428dbf020259c73c751c74a22107a7bcc1234e3fdca4ddd68847f670.scope - libcontainer container cf2a0176428dbf020259c73c751c74a22107a7bcc1234e3fdca4ddd68847f670. Dec 13 01:29:08.507350 containerd[1466]: time="2024-12-13T01:29:08.507249783Z" level=info msg="StartContainer for \"cf2a0176428dbf020259c73c751c74a22107a7bcc1234e3fdca4ddd68847f670\" returns successfully" Dec 13 01:29:08.738217 kubelet[1792]: E1213 01:29:08.738141 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:29:08.861845 kubelet[1792]: I1213 01:29:08.861786 1792 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-dscnc" podStartSLOduration=9.811051077 podStartE2EDuration="13.861696686s" podCreationTimestamp="2024-12-13 01:28:55 +0000 UTC" firstStartedPulling="2024-12-13 01:29:03.503855925 +0000 UTC m=+31.620436348" lastFinishedPulling="2024-12-13 01:29:07.554501534 +0000 UTC m=+35.671081957" observedRunningTime="2024-12-13 01:29:08.861339602 +0000 UTC m=+36.977920036" watchObservedRunningTime="2024-12-13 01:29:08.861696686 +0000 UTC m=+36.978277139" Dec 13 01:29:09.738804 kubelet[1792]: E1213 01:29:09.738686 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:29:10.739830 kubelet[1792]: E1213 01:29:10.739717 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:29:11.740744 kubelet[1792]: E1213 01:29:11.740665 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:29:12.713071 kubelet[1792]: E1213 01:29:12.712956 1792 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:29:12.741389 kubelet[1792]: E1213 01:29:12.741319 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:29:13.742046 kubelet[1792]: E1213 01:29:13.741925 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:29:14.742942 kubelet[1792]: E1213 01:29:14.742879 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:29:15.010844 kubelet[1792]: I1213 01:29:15.010692 1792 topology_manager.go:215] "Topology Admit Handler" podUID="a1b34519-668a-49b4-aba3-74414be778a5" podNamespace="default" podName="nfs-server-provisioner-0" Dec 13 01:29:15.016856 systemd[1]: Created slice kubepods-besteffort-poda1b34519_668a_49b4_aba3_74414be778a5.slice - libcontainer container kubepods-besteffort-poda1b34519_668a_49b4_aba3_74414be778a5.slice. 
Dec 13 01:29:15.019228 kubelet[1792]: I1213 01:29:15.019190 1792 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/a1b34519-668a-49b4-aba3-74414be778a5-data\") pod \"nfs-server-provisioner-0\" (UID: \"a1b34519-668a-49b4-aba3-74414be778a5\") " pod="default/nfs-server-provisioner-0" Dec 13 01:29:15.019426 kubelet[1792]: I1213 01:29:15.019264 1792 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9dqt\" (UniqueName: \"kubernetes.io/projected/a1b34519-668a-49b4-aba3-74414be778a5-kube-api-access-z9dqt\") pod \"nfs-server-provisioner-0\" (UID: \"a1b34519-668a-49b4-aba3-74414be778a5\") " pod="default/nfs-server-provisioner-0" Dec 13 01:29:15.320419 containerd[1466]: time="2024-12-13T01:29:15.320293938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:a1b34519-668a-49b4-aba3-74414be778a5,Namespace:default,Attempt:0,}" Dec 13 01:29:15.587413 systemd-networkd[1407]: lxc23c6c674e152: Link UP Dec 13 01:29:15.612519 kernel: eth0: renamed from tmp02925 Dec 13 01:29:15.620695 systemd-networkd[1407]: lxc23c6c674e152: Gained carrier Dec 13 01:29:15.743430 kubelet[1792]: E1213 01:29:15.743365 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:29:15.918416 containerd[1466]: time="2024-12-13T01:29:15.918081118Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:29:15.918416 containerd[1466]: time="2024-12-13T01:29:15.918213908Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:29:15.918416 containerd[1466]: time="2024-12-13T01:29:15.918257379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:29:15.918629 containerd[1466]: time="2024-12-13T01:29:15.918386953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:29:15.944739 systemd[1]: Started cri-containerd-029258efea5a62e50f4b3e5025555f803e158511dc4c5cbf2a7e5243c095d3c8.scope - libcontainer container 029258efea5a62e50f4b3e5025555f803e158511dc4c5cbf2a7e5243c095d3c8. 
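Each sandbox brought up here gets a veth pair (the eth0 renamed from a tmp* device, paired with an lxc* host interface) and an address from the node's PodCIDR, which the kubelet set to 192.168.1.0/24 earlier in this log. The sketch below shows the containment check with net.ParseCIDR; the pod IP used is a hypothetical example, not a value taken from the log.

```go
// podcidr.go - sketch: check whether a pod IP falls inside the node's PodCIDR
// (192.168.1.0/24 per the "Updating Pod CIDR" line earlier in this log).
// The pod IP below is a hypothetical example.
package main

import (
	"fmt"
	"net"
)

func main() {
	_, podCIDR, err := net.ParseCIDR("192.168.1.0/24")
	if err != nil {
		panic(err)
	}
	podIP := net.ParseIP("192.168.1.23") // hypothetical pod address
	fmt.Printf("%s in %s: %v\n", podIP, podCIDR, podCIDR.Contains(podIP))
}
```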
Dec 13 01:29:15.969331 systemd-resolved[1339]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:29:16.013503 containerd[1466]: time="2024-12-13T01:29:16.013440840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:a1b34519-668a-49b4-aba3-74414be778a5,Namespace:default,Attempt:0,} returns sandbox id \"029258efea5a62e50f4b3e5025555f803e158511dc4c5cbf2a7e5243c095d3c8\"" Dec 13 01:29:16.016129 containerd[1466]: time="2024-12-13T01:29:16.016022499Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Dec 13 01:29:16.743801 kubelet[1792]: E1213 01:29:16.743717 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:29:17.042782 systemd-networkd[1407]: lxc23c6c674e152: Gained IPv6LL Dec 13 01:29:17.749548 kubelet[1792]: E1213 01:29:17.746376 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:29:18.750231 kubelet[1792]: E1213 01:29:18.750144 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:29:19.570762 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1782724703.mount: Deactivated successfully. Dec 13 01:29:19.751183 kubelet[1792]: E1213 01:29:19.751053 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:29:20.752930 kubelet[1792]: E1213 01:29:20.752866 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:29:21.753941 kubelet[1792]: E1213 01:29:21.753865 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:29:22.754635 kubelet[1792]: E1213 01:29:22.754575 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:29:23.755023 kubelet[1792]: E1213 01:29:23.754941 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:29:24.756165 kubelet[1792]: E1213 01:29:24.756070 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:29:25.757044 kubelet[1792]: E1213 01:29:25.756728 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:29:26.757062 kubelet[1792]: E1213 01:29:26.756980 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:29:26.926769 containerd[1466]: time="2024-12-13T01:29:26.926691563Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:26.935933 containerd[1466]: time="2024-12-13T01:29:26.935827758Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Dec 13 01:29:26.941860 containerd[1466]: time="2024-12-13T01:29:26.941762738Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:26.947832 
containerd[1466]: time="2024-12-13T01:29:26.947728357Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:26.949209 containerd[1466]: time="2024-12-13T01:29:26.949122997Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 10.933023464s" Dec 13 01:29:26.949209 containerd[1466]: time="2024-12-13T01:29:26.949187889Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Dec 13 01:29:26.952108 containerd[1466]: time="2024-12-13T01:29:26.952052972Z" level=info msg="CreateContainer within sandbox \"029258efea5a62e50f4b3e5025555f803e158511dc4c5cbf2a7e5243c095d3c8\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Dec 13 01:29:26.967455 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount527293940.mount: Deactivated successfully. Dec 13 01:29:26.972792 containerd[1466]: time="2024-12-13T01:29:26.972726573Z" level=info msg="CreateContainer within sandbox \"029258efea5a62e50f4b3e5025555f803e158511dc4c5cbf2a7e5243c095d3c8\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"3cde71cce50e9fbe2291413071fc5d7cae2ca6cf4377e4e7eb3b193e15a880b2\"" Dec 13 01:29:26.973546 containerd[1466]: time="2024-12-13T01:29:26.973507830Z" level=info msg="StartContainer for \"3cde71cce50e9fbe2291413071fc5d7cae2ca6cf4377e4e7eb3b193e15a880b2\"" Dec 13 01:29:27.062914 systemd[1]: Started cri-containerd-3cde71cce50e9fbe2291413071fc5d7cae2ca6cf4377e4e7eb3b193e15a880b2.scope - libcontainer container 3cde71cce50e9fbe2291413071fc5d7cae2ca6cf4377e4e7eb3b193e15a880b2. 
Dec 13 01:29:27.106706 containerd[1466]: time="2024-12-13T01:29:27.106627801Z" level=info msg="StartContainer for \"3cde71cce50e9fbe2291413071fc5d7cae2ca6cf4377e4e7eb3b193e15a880b2\" returns successfully" Dec 13 01:29:27.757859 kubelet[1792]: E1213 01:29:27.757803 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:29:27.929175 kubelet[1792]: I1213 01:29:27.929114 1792 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.994812892 podStartE2EDuration="13.929053494s" podCreationTimestamp="2024-12-13 01:29:14 +0000 UTC" firstStartedPulling="2024-12-13 01:29:16.015384438 +0000 UTC m=+44.131964861" lastFinishedPulling="2024-12-13 01:29:26.94962504 +0000 UTC m=+55.066205463" observedRunningTime="2024-12-13 01:29:27.929031022 +0000 UTC m=+56.045611455" watchObservedRunningTime="2024-12-13 01:29:27.929053494 +0000 UTC m=+56.045633917" Dec 13 01:29:28.758850 kubelet[1792]: E1213 01:29:28.758748 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:29:29.759152 kubelet[1792]: E1213 01:29:29.759092 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:29:30.759579 kubelet[1792]: E1213 01:29:30.759501 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:29:31.760315 kubelet[1792]: E1213 01:29:31.760238 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:29:32.712712 kubelet[1792]: E1213 01:29:32.712633 1792 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:29:32.760977 kubelet[1792]: E1213 01:29:32.760895 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:29:33.761566 kubelet[1792]: E1213 01:29:33.761506 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:29:34.762503 kubelet[1792]: E1213 01:29:34.762428 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:29:35.763211 kubelet[1792]: E1213 01:29:35.763151 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:29:36.452064 kubelet[1792]: I1213 01:29:36.452021 1792 topology_manager.go:215] "Topology Admit Handler" podUID="c15348a7-ad99-418e-8c7d-24adefc03aa8" podNamespace="default" podName="test-pod-1" Dec 13 01:29:36.457623 systemd[1]: Created slice kubepods-besteffort-podc15348a7_ad99_418e_8c7d_24adefc03aa8.slice - libcontainer container kubepods-besteffort-podc15348a7_ad99_418e_8c7d_24adefc03aa8.slice. 
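The pod_startup_latency_tracker entry above reports podStartE2EDuration=13.929053494s and podStartSLOduration=2.994812892s; the numbers line up exactly with the E2E figure being observedRunningTime minus podCreationTimestamp and the SLO figure being that same interval minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A short sketch reproducing both values from the timestamps in the entry:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied verbatim from the pod_startup_latency_tracker entry above.
	parse := func(s string) time.Time {
		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2024-12-13 01:29:14 +0000 UTC")             // podCreationTimestamp
	firstPull := parse("2024-12-13 01:29:16.015384438 +0000 UTC") // firstStartedPulling
	lastPull := parse("2024-12-13 01:29:26.94962504 +0000 UTC")   // lastFinishedPulling
	running := parse("2024-12-13 01:29:27.929053494 +0000 UTC")   // observedRunningTime

	e2e := running.Sub(created)          // 13.929053494s = podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // 2.994812892s  = podStartSLOduration
	fmt.Println(e2e, slo)
}
```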
Dec 13 01:29:36.591604 kubelet[1792]: I1213 01:29:36.591557 1792 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-284c16e0-8bb9-440e-b412-3e5d735caea6\" (UniqueName: \"kubernetes.io/nfs/c15348a7-ad99-418e-8c7d-24adefc03aa8-pvc-284c16e0-8bb9-440e-b412-3e5d735caea6\") pod \"test-pod-1\" (UID: \"c15348a7-ad99-418e-8c7d-24adefc03aa8\") " pod="default/test-pod-1" Dec 13 01:29:36.591604 kubelet[1792]: I1213 01:29:36.591610 1792 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdxrw\" (UniqueName: \"kubernetes.io/projected/c15348a7-ad99-418e-8c7d-24adefc03aa8-kube-api-access-sdxrw\") pod \"test-pod-1\" (UID: \"c15348a7-ad99-418e-8c7d-24adefc03aa8\") " pod="default/test-pod-1" Dec 13 01:29:36.722516 kernel: FS-Cache: Loaded Dec 13 01:29:36.763999 kubelet[1792]: E1213 01:29:36.763927 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:29:36.790927 kernel: RPC: Registered named UNIX socket transport module. Dec 13 01:29:36.790991 kernel: RPC: Registered udp transport module. Dec 13 01:29:36.791011 kernel: RPC: Registered tcp transport module. Dec 13 01:29:36.791561 kernel: RPC: Registered tcp-with-tls transport module. Dec 13 01:29:36.793077 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Dec 13 01:29:37.069634 kernel: NFS: Registering the id_resolver key type Dec 13 01:29:37.069730 kernel: Key type id_resolver registered Dec 13 01:29:37.069778 kernel: Key type id_legacy registered Dec 13 01:29:37.097931 nfsidmap[3208]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Dec 13 01:29:37.104175 nfsidmap[3211]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Dec 13 01:29:37.361436 containerd[1466]: time="2024-12-13T01:29:37.361329305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:c15348a7-ad99-418e-8c7d-24adefc03aa8,Namespace:default,Attempt:0,}" Dec 13 01:29:37.388119 systemd-networkd[1407]: lxcb26afce3c83a: Link UP Dec 13 01:29:37.398508 kernel: eth0: renamed from tmp823fd Dec 13 01:29:37.408635 systemd-networkd[1407]: lxcb26afce3c83a: Gained carrier Dec 13 01:29:37.624411 containerd[1466]: time="2024-12-13T01:29:37.624215162Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:29:37.625045 containerd[1466]: time="2024-12-13T01:29:37.624267070Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:29:37.625045 containerd[1466]: time="2024-12-13T01:29:37.625031826Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:29:37.625230 containerd[1466]: time="2024-12-13T01:29:37.625157682Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:29:37.643684 systemd[1]: Started cri-containerd-823fd26d0dad87bafabb454f6ccf0a54262de03564dedc7f5d65b87020a6192e.scope - libcontainer container 823fd26d0dad87bafabb454f6ccf0a54262de03564dedc7f5d65b87020a6192e. 
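The nfsidmap messages above show the NFSv4 id mapper refusing to translate root@nfs-server-provisioner.default.svc.cluster.local because the owner's domain suffix does not match the client's local idmapping domain, here 'localdomain' (normally taken from /etc/idmapd.conf or the DNS domain); when that happens the id typically falls back to the nobody user. A toy sketch of that domain check, not nfsidmap's actual code:

```go
package main

import (
	"fmt"
	"strings"
)

// mapOwner mimics, in a very simplified way, the check behind nfsidmap's
// nss_getpwnam message: the domain suffix of an NFSv4 owner string must
// match the local idmapping domain before the user part is looked up.
// Real nfsidmap consults /etc/idmapd.conf and NSS.
func mapOwner(owner, localDomain string) (string, error) {
	user, domain, ok := strings.Cut(owner, "@")
	if !ok {
		return "", fmt.Errorf("owner %q has no domain part", owner)
	}
	if !strings.EqualFold(domain, localDomain) {
		return "", fmt.Errorf("name %q does not map into domain %q", owner, localDomain)
	}
	return user, nil
}

func main() {
	_, err := mapOwner("root@nfs-server-provisioner.default.svc.cluster.local", "localdomain")
	fmt.Println(err) // mirrors the nfsidmap message in the log above
}
```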
Dec 13 01:29:37.655768 systemd-resolved[1339]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:29:37.679343 containerd[1466]: time="2024-12-13T01:29:37.679301762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:c15348a7-ad99-418e-8c7d-24adefc03aa8,Namespace:default,Attempt:0,} returns sandbox id \"823fd26d0dad87bafabb454f6ccf0a54262de03564dedc7f5d65b87020a6192e\"" Dec 13 01:29:37.681406 containerd[1466]: time="2024-12-13T01:29:37.681199375Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 01:29:37.764831 kubelet[1792]: E1213 01:29:37.764777 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:29:38.067943 containerd[1466]: time="2024-12-13T01:29:38.067876517Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:38.068650 containerd[1466]: time="2024-12-13T01:29:38.068606267Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Dec 13 01:29:38.071282 containerd[1466]: time="2024-12-13T01:29:38.071243817Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\", size \"71035905\" in 390.012272ms" Dec 13 01:29:38.071282 containerd[1466]: time="2024-12-13T01:29:38.071277410Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 01:29:38.073053 containerd[1466]: time="2024-12-13T01:29:38.073020182Z" level=info msg="CreateContainer within sandbox \"823fd26d0dad87bafabb454f6ccf0a54262de03564dedc7f5d65b87020a6192e\" for container &ContainerMetadata{Name:test,Attempt:0,}" Dec 13 01:29:38.085804 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount714009926.mount: Deactivated successfully. Dec 13 01:29:38.089065 containerd[1466]: time="2024-12-13T01:29:38.089031373Z" level=info msg="CreateContainer within sandbox \"823fd26d0dad87bafabb454f6ccf0a54262de03564dedc7f5d65b87020a6192e\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"045434af442441cccf33ba72adab33426abf1c54f9c722fd4d77fcef83587c32\"" Dec 13 01:29:38.089670 containerd[1466]: time="2024-12-13T01:29:38.089636329Z" level=info msg="StartContainer for \"045434af442441cccf33ba72adab33426abf1c54f9c722fd4d77fcef83587c32\"" Dec 13 01:29:38.123613 systemd[1]: Started cri-containerd-045434af442441cccf33ba72adab33426abf1c54f9c722fd4d77fcef83587c32.scope - libcontainer container 045434af442441cccf33ba72adab33426abf1c54f9c722fd4d77fcef83587c32. 
Dec 13 01:29:38.148582 containerd[1466]: time="2024-12-13T01:29:38.148527934Z" level=info msg="StartContainer for \"045434af442441cccf33ba72adab33426abf1c54f9c722fd4d77fcef83587c32\" returns successfully" Dec 13 01:29:38.765226 kubelet[1792]: E1213 01:29:38.765160 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:29:38.944022 kubelet[1792]: I1213 01:29:38.943930 1792 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=23.552987315 podStartE2EDuration="23.943877445s" podCreationTimestamp="2024-12-13 01:29:15 +0000 UTC" firstStartedPulling="2024-12-13 01:29:37.680678738 +0000 UTC m=+65.797259161" lastFinishedPulling="2024-12-13 01:29:38.071568868 +0000 UTC m=+66.188149291" observedRunningTime="2024-12-13 01:29:38.943548879 +0000 UTC m=+67.060129302" watchObservedRunningTime="2024-12-13 01:29:38.943877445 +0000 UTC m=+67.060457868" Dec 13 01:29:39.442687 systemd-networkd[1407]: lxcb26afce3c83a: Gained IPv6LL Dec 13 01:29:39.766133 kubelet[1792]: E1213 01:29:39.766069 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:29:40.766947 kubelet[1792]: E1213 01:29:40.766889 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:29:41.767413 kubelet[1792]: E1213 01:29:41.767343 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:29:42.767767 kubelet[1792]: E1213 01:29:42.767714 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:29:42.835802 containerd[1466]: time="2024-12-13T01:29:42.835753971Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:29:42.843092 containerd[1466]: time="2024-12-13T01:29:42.843058719Z" level=info msg="StopContainer for \"a49b5bf651d92e96addf83a5696f324720beb0d0a36bc9359dc0716f4f7c1232\" with timeout 2 (s)" Dec 13 01:29:42.843388 containerd[1466]: time="2024-12-13T01:29:42.843363080Z" level=info msg="Stop container \"a49b5bf651d92e96addf83a5696f324720beb0d0a36bc9359dc0716f4f7c1232\" with signal terminated" Dec 13 01:29:42.849880 systemd-networkd[1407]: lxc_health: Link DOWN Dec 13 01:29:42.849889 systemd-networkd[1407]: lxc_health: Lost carrier Dec 13 01:29:42.881853 systemd[1]: cri-containerd-a49b5bf651d92e96addf83a5696f324720beb0d0a36bc9359dc0716f4f7c1232.scope: Deactivated successfully. Dec 13 01:29:42.882129 systemd[1]: cri-containerd-a49b5bf651d92e96addf83a5696f324720beb0d0a36bc9359dc0716f4f7c1232.scope: Consumed 10.534s CPU time. Dec 13 01:29:42.903850 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a49b5bf651d92e96addf83a5696f324720beb0d0a36bc9359dc0716f4f7c1232-rootfs.mount: Deactivated successfully. 
Dec 13 01:29:42.913664 containerd[1466]: time="2024-12-13T01:29:42.913575519Z" level=info msg="shim disconnected" id=a49b5bf651d92e96addf83a5696f324720beb0d0a36bc9359dc0716f4f7c1232 namespace=k8s.io Dec 13 01:29:42.913864 containerd[1466]: time="2024-12-13T01:29:42.913661681Z" level=warning msg="cleaning up after shim disconnected" id=a49b5bf651d92e96addf83a5696f324720beb0d0a36bc9359dc0716f4f7c1232 namespace=k8s.io Dec 13 01:29:42.913864 containerd[1466]: time="2024-12-13T01:29:42.913679364Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:29:42.929135 containerd[1466]: time="2024-12-13T01:29:42.929089071Z" level=info msg="StopContainer for \"a49b5bf651d92e96addf83a5696f324720beb0d0a36bc9359dc0716f4f7c1232\" returns successfully" Dec 13 01:29:42.929832 containerd[1466]: time="2024-12-13T01:29:42.929800266Z" level=info msg="StopPodSandbox for \"4a0c41719a3a5df5a6f61bc563f569bd6ca34505a675d822fac81f3fbb043785\"" Dec 13 01:29:42.929832 containerd[1466]: time="2024-12-13T01:29:42.929835662Z" level=info msg="Container to stop \"1cefee057a92c5333bfea7231db0ba8f4308e2d457aac8acd98c90440680d5da\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:29:42.929960 containerd[1466]: time="2024-12-13T01:29:42.929846763Z" level=info msg="Container to stop \"a14ea8a06ba25ceddcf4f943f7403a8b8f959faaa823cb2e5f0c5ad4e0817c24\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:29:42.929960 containerd[1466]: time="2024-12-13T01:29:42.929855780Z" level=info msg="Container to stop \"84d22679f8df122b85da52a821cb4615034ee6add872d798fde94ef078dca0ea\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:29:42.929960 containerd[1466]: time="2024-12-13T01:29:42.929864336Z" level=info msg="Container to stop \"a49b5bf651d92e96addf83a5696f324720beb0d0a36bc9359dc0716f4f7c1232\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:29:42.929960 containerd[1466]: time="2024-12-13T01:29:42.929873033Z" level=info msg="Container to stop \"4ebf102b411b9f60fa26501083a19255628429614b23b2cab816f1980944ea6f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:29:42.931818 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4a0c41719a3a5df5a6f61bc563f569bd6ca34505a675d822fac81f3fbb043785-shm.mount: Deactivated successfully. Dec 13 01:29:42.935526 systemd[1]: cri-containerd-4a0c41719a3a5df5a6f61bc563f569bd6ca34505a675d822fac81f3fbb043785.scope: Deactivated successfully. Dec 13 01:29:42.954370 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a0c41719a3a5df5a6f61bc563f569bd6ca34505a675d822fac81f3fbb043785-rootfs.mount: Deactivated successfully. 
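The repeated "Container to stop ... must be in running or unknown state, current state CONTAINER_EXITED" lines above are informational: while tearing down the sandbox 4a0c41719a3a..., containerd only signals containers that are still running (or unknown) and skips those already exited, which here are the cilium init containers plus the agent container stopped just before. A simplified, illustrative sketch of that gate (not containerd's implementation):

```go
package main

import "fmt"

// States as they appear in the CRI messages above.
const (
	ContainerRunning = "CONTAINER_RUNNING"
	ContainerExited  = "CONTAINER_EXITED"
	ContainerUnknown = "CONTAINER_UNKNOWN"
)

// stopIfRunning sketches the gate behind the "must be in running or unknown
// state" messages: only running or unknown containers get a stop signal;
// exited ones are simply noted and left for sandbox teardown.
func stopIfRunning(id, state string) {
	if state != ContainerRunning && state != ContainerUnknown {
		fmt.Printf("Container to stop %q must be in running or unknown state, current state %q\n", id, state)
		return
	}
	fmt.Printf("stopping %q\n", id)
}

func main() {
	stopIfRunning("1cefee057a92c5333bfea7231db0ba8f4308e2d457aac8acd98c90440680d5da", ContainerExited)
	stopIfRunning("84d22679f8df122b85da52a821cb4615034ee6add872d798fde94ef078dca0ea", ContainerExited)
}
```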
Dec 13 01:29:42.957956 containerd[1466]: time="2024-12-13T01:29:42.957899777Z" level=info msg="shim disconnected" id=4a0c41719a3a5df5a6f61bc563f569bd6ca34505a675d822fac81f3fbb043785 namespace=k8s.io Dec 13 01:29:42.957956 containerd[1466]: time="2024-12-13T01:29:42.957953407Z" level=warning msg="cleaning up after shim disconnected" id=4a0c41719a3a5df5a6f61bc563f569bd6ca34505a675d822fac81f3fbb043785 namespace=k8s.io Dec 13 01:29:42.958116 containerd[1466]: time="2024-12-13T01:29:42.957962264Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:29:42.971415 containerd[1466]: time="2024-12-13T01:29:42.971379854Z" level=info msg="TearDown network for sandbox \"4a0c41719a3a5df5a6f61bc563f569bd6ca34505a675d822fac81f3fbb043785\" successfully" Dec 13 01:29:42.971415 containerd[1466]: time="2024-12-13T01:29:42.971412345Z" level=info msg="StopPodSandbox for \"4a0c41719a3a5df5a6f61bc563f569bd6ca34505a675d822fac81f3fbb043785\" returns successfully" Dec 13 01:29:43.130050 kubelet[1792]: I1213 01:29:43.129906 1792 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a5934e4d-ce01-4a3b-a088-80117779d8e0-hubble-tls\") pod \"a5934e4d-ce01-4a3b-a088-80117779d8e0\" (UID: \"a5934e4d-ce01-4a3b-a088-80117779d8e0\") " Dec 13 01:29:43.130050 kubelet[1792]: I1213 01:29:43.129945 1792 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a5934e4d-ce01-4a3b-a088-80117779d8e0-bpf-maps\") pod \"a5934e4d-ce01-4a3b-a088-80117779d8e0\" (UID: \"a5934e4d-ce01-4a3b-a088-80117779d8e0\") " Dec 13 01:29:43.130050 kubelet[1792]: I1213 01:29:43.129962 1792 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a5934e4d-ce01-4a3b-a088-80117779d8e0-lib-modules\") pod \"a5934e4d-ce01-4a3b-a088-80117779d8e0\" (UID: \"a5934e4d-ce01-4a3b-a088-80117779d8e0\") " Dec 13 01:29:43.130050 kubelet[1792]: I1213 01:29:43.129984 1792 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a5934e4d-ce01-4a3b-a088-80117779d8e0-host-proc-sys-net\") pod \"a5934e4d-ce01-4a3b-a088-80117779d8e0\" (UID: \"a5934e4d-ce01-4a3b-a088-80117779d8e0\") " Dec 13 01:29:43.130050 kubelet[1792]: I1213 01:29:43.130008 1792 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a5934e4d-ce01-4a3b-a088-80117779d8e0-host-proc-sys-kernel\") pod \"a5934e4d-ce01-4a3b-a088-80117779d8e0\" (UID: \"a5934e4d-ce01-4a3b-a088-80117779d8e0\") " Dec 13 01:29:43.130050 kubelet[1792]: I1213 01:29:43.130028 1792 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a5934e4d-ce01-4a3b-a088-80117779d8e0-xtables-lock\") pod \"a5934e4d-ce01-4a3b-a088-80117779d8e0\" (UID: \"a5934e4d-ce01-4a3b-a088-80117779d8e0\") " Dec 13 01:29:43.130321 kubelet[1792]: I1213 01:29:43.130053 1792 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a5934e4d-ce01-4a3b-a088-80117779d8e0-cilium-config-path\") pod \"a5934e4d-ce01-4a3b-a088-80117779d8e0\" (UID: \"a5934e4d-ce01-4a3b-a088-80117779d8e0\") " Dec 13 01:29:43.130321 kubelet[1792]: I1213 01:29:43.130041 1792 operation_generator.go:887] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/host-path/a5934e4d-ce01-4a3b-a088-80117779d8e0-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a5934e4d-ce01-4a3b-a088-80117779d8e0" (UID: "a5934e4d-ce01-4a3b-a088-80117779d8e0"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:29:43.130321 kubelet[1792]: I1213 01:29:43.130070 1792 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a5934e4d-ce01-4a3b-a088-80117779d8e0-cilium-run\") pod \"a5934e4d-ce01-4a3b-a088-80117779d8e0\" (UID: \"a5934e4d-ce01-4a3b-a088-80117779d8e0\") " Dec 13 01:29:43.130321 kubelet[1792]: I1213 01:29:43.130090 1792 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a5934e4d-ce01-4a3b-a088-80117779d8e0-hostproc\") pod \"a5934e4d-ce01-4a3b-a088-80117779d8e0\" (UID: \"a5934e4d-ce01-4a3b-a088-80117779d8e0\") " Dec 13 01:29:43.130321 kubelet[1792]: I1213 01:29:43.130107 1792 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a5934e4d-ce01-4a3b-a088-80117779d8e0-cilium-cgroup\") pod \"a5934e4d-ce01-4a3b-a088-80117779d8e0\" (UID: \"a5934e4d-ce01-4a3b-a088-80117779d8e0\") " Dec 13 01:29:43.130449 kubelet[1792]: I1213 01:29:43.130106 1792 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5934e4d-ce01-4a3b-a088-80117779d8e0-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a5934e4d-ce01-4a3b-a088-80117779d8e0" (UID: "a5934e4d-ce01-4a3b-a088-80117779d8e0"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:29:43.130449 kubelet[1792]: I1213 01:29:43.130129 1792 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpvnq\" (UniqueName: \"kubernetes.io/projected/a5934e4d-ce01-4a3b-a088-80117779d8e0-kube-api-access-xpvnq\") pod \"a5934e4d-ce01-4a3b-a088-80117779d8e0\" (UID: \"a5934e4d-ce01-4a3b-a088-80117779d8e0\") " Dec 13 01:29:43.130449 kubelet[1792]: I1213 01:29:43.130141 1792 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5934e4d-ce01-4a3b-a088-80117779d8e0-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a5934e4d-ce01-4a3b-a088-80117779d8e0" (UID: "a5934e4d-ce01-4a3b-a088-80117779d8e0"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:29:43.130449 kubelet[1792]: I1213 01:29:43.130147 1792 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a5934e4d-ce01-4a3b-a088-80117779d8e0-etc-cni-netd\") pod \"a5934e4d-ce01-4a3b-a088-80117779d8e0\" (UID: \"a5934e4d-ce01-4a3b-a088-80117779d8e0\") " Dec 13 01:29:43.130449 kubelet[1792]: I1213 01:29:43.130180 1792 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5934e4d-ce01-4a3b-a088-80117779d8e0-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a5934e4d-ce01-4a3b-a088-80117779d8e0" (UID: "a5934e4d-ce01-4a3b-a088-80117779d8e0"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:29:43.130585 kubelet[1792]: I1213 01:29:43.130191 1792 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a5934e4d-ce01-4a3b-a088-80117779d8e0-cni-path\") pod \"a5934e4d-ce01-4a3b-a088-80117779d8e0\" (UID: \"a5934e4d-ce01-4a3b-a088-80117779d8e0\") " Dec 13 01:29:43.130585 kubelet[1792]: I1213 01:29:43.130213 1792 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5934e4d-ce01-4a3b-a088-80117779d8e0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a5934e4d-ce01-4a3b-a088-80117779d8e0" (UID: "a5934e4d-ce01-4a3b-a088-80117779d8e0"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:29:43.130585 kubelet[1792]: I1213 01:29:43.130222 1792 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a5934e4d-ce01-4a3b-a088-80117779d8e0-clustermesh-secrets\") pod \"a5934e4d-ce01-4a3b-a088-80117779d8e0\" (UID: \"a5934e4d-ce01-4a3b-a088-80117779d8e0\") " Dec 13 01:29:43.130585 kubelet[1792]: I1213 01:29:43.130261 1792 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a5934e4d-ce01-4a3b-a088-80117779d8e0-xtables-lock\") on node \"10.0.0.53\" DevicePath \"\"" Dec 13 01:29:43.130585 kubelet[1792]: I1213 01:29:43.130273 1792 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a5934e4d-ce01-4a3b-a088-80117779d8e0-etc-cni-netd\") on node \"10.0.0.53\" DevicePath \"\"" Dec 13 01:29:43.130585 kubelet[1792]: I1213 01:29:43.130283 1792 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a5934e4d-ce01-4a3b-a088-80117779d8e0-host-proc-sys-kernel\") on node \"10.0.0.53\" DevicePath \"\"" Dec 13 01:29:43.130585 kubelet[1792]: I1213 01:29:43.130294 1792 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a5934e4d-ce01-4a3b-a088-80117779d8e0-bpf-maps\") on node \"10.0.0.53\" DevicePath \"\"" Dec 13 01:29:43.130754 kubelet[1792]: I1213 01:29:43.130304 1792 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a5934e4d-ce01-4a3b-a088-80117779d8e0-host-proc-sys-net\") on node \"10.0.0.53\" DevicePath \"\"" Dec 13 01:29:43.130754 kubelet[1792]: I1213 01:29:43.130527 1792 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5934e4d-ce01-4a3b-a088-80117779d8e0-cni-path" (OuterVolumeSpecName: "cni-path") pod "a5934e4d-ce01-4a3b-a088-80117779d8e0" (UID: "a5934e4d-ce01-4a3b-a088-80117779d8e0"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:29:43.130754 kubelet[1792]: I1213 01:29:43.130554 1792 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5934e4d-ce01-4a3b-a088-80117779d8e0-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a5934e4d-ce01-4a3b-a088-80117779d8e0" (UID: "a5934e4d-ce01-4a3b-a088-80117779d8e0"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:29:43.130754 kubelet[1792]: I1213 01:29:43.130573 1792 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5934e4d-ce01-4a3b-a088-80117779d8e0-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a5934e4d-ce01-4a3b-a088-80117779d8e0" (UID: "a5934e4d-ce01-4a3b-a088-80117779d8e0"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:29:43.130754 kubelet[1792]: I1213 01:29:43.130592 1792 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5934e4d-ce01-4a3b-a088-80117779d8e0-hostproc" (OuterVolumeSpecName: "hostproc") pod "a5934e4d-ce01-4a3b-a088-80117779d8e0" (UID: "a5934e4d-ce01-4a3b-a088-80117779d8e0"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:29:43.132248 kubelet[1792]: I1213 01:29:43.130126 1792 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5934e4d-ce01-4a3b-a088-80117779d8e0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a5934e4d-ce01-4a3b-a088-80117779d8e0" (UID: "a5934e4d-ce01-4a3b-a088-80117779d8e0"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:29:43.134016 kubelet[1792]: I1213 01:29:43.133765 1792 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5934e4d-ce01-4a3b-a088-80117779d8e0-kube-api-access-xpvnq" (OuterVolumeSpecName: "kube-api-access-xpvnq") pod "a5934e4d-ce01-4a3b-a088-80117779d8e0" (UID: "a5934e4d-ce01-4a3b-a088-80117779d8e0"). InnerVolumeSpecName "kube-api-access-xpvnq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 01:29:43.134016 kubelet[1792]: I1213 01:29:43.133831 1792 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5934e4d-ce01-4a3b-a088-80117779d8e0-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a5934e4d-ce01-4a3b-a088-80117779d8e0" (UID: "a5934e4d-ce01-4a3b-a088-80117779d8e0"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 01:29:43.134934 kubelet[1792]: I1213 01:29:43.134898 1792 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5934e4d-ce01-4a3b-a088-80117779d8e0-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a5934e4d-ce01-4a3b-a088-80117779d8e0" (UID: "a5934e4d-ce01-4a3b-a088-80117779d8e0"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 01:29:43.135553 systemd[1]: var-lib-kubelet-pods-a5934e4d\x2dce01\x2d4a3b\x2da088\x2d80117779d8e0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxpvnq.mount: Deactivated successfully. Dec 13 01:29:43.135648 kubelet[1792]: I1213 01:29:43.135579 1792 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5934e4d-ce01-4a3b-a088-80117779d8e0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a5934e4d-ce01-4a3b-a088-80117779d8e0" (UID: "a5934e4d-ce01-4a3b-a088-80117779d8e0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 01:29:43.135662 systemd[1]: var-lib-kubelet-pods-a5934e4d\x2dce01\x2d4a3b\x2da088\x2d80117779d8e0-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Dec 13 01:29:43.135738 systemd[1]: var-lib-kubelet-pods-a5934e4d\x2dce01\x2d4a3b\x2da088\x2d80117779d8e0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 01:29:43.230735 kubelet[1792]: I1213 01:29:43.230683 1792 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-xpvnq\" (UniqueName: \"kubernetes.io/projected/a5934e4d-ce01-4a3b-a088-80117779d8e0-kube-api-access-xpvnq\") on node \"10.0.0.53\" DevicePath \"\"" Dec 13 01:29:43.230735 kubelet[1792]: I1213 01:29:43.230722 1792 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a5934e4d-ce01-4a3b-a088-80117779d8e0-cni-path\") on node \"10.0.0.53\" DevicePath \"\"" Dec 13 01:29:43.230735 kubelet[1792]: I1213 01:29:43.230734 1792 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a5934e4d-ce01-4a3b-a088-80117779d8e0-clustermesh-secrets\") on node \"10.0.0.53\" DevicePath \"\"" Dec 13 01:29:43.230735 kubelet[1792]: I1213 01:29:43.230743 1792 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a5934e4d-ce01-4a3b-a088-80117779d8e0-hubble-tls\") on node \"10.0.0.53\" DevicePath \"\"" Dec 13 01:29:43.230735 kubelet[1792]: I1213 01:29:43.230751 1792 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a5934e4d-ce01-4a3b-a088-80117779d8e0-lib-modules\") on node \"10.0.0.53\" DevicePath \"\"" Dec 13 01:29:43.230735 kubelet[1792]: I1213 01:29:43.230761 1792 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a5934e4d-ce01-4a3b-a088-80117779d8e0-cilium-config-path\") on node \"10.0.0.53\" DevicePath \"\"" Dec 13 01:29:43.230735 kubelet[1792]: I1213 01:29:43.230770 1792 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a5934e4d-ce01-4a3b-a088-80117779d8e0-cilium-run\") on node \"10.0.0.53\" DevicePath \"\"" Dec 13 01:29:43.231066 kubelet[1792]: I1213 01:29:43.230780 1792 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a5934e4d-ce01-4a3b-a088-80117779d8e0-hostproc\") on node \"10.0.0.53\" DevicePath \"\"" Dec 13 01:29:43.231066 kubelet[1792]: I1213 01:29:43.230789 1792 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a5934e4d-ce01-4a3b-a088-80117779d8e0-cilium-cgroup\") on node \"10.0.0.53\" DevicePath \"\"" Dec 13 01:29:43.610619 kubelet[1792]: E1213 01:29:43.610584 1792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:29:43.617745 systemd[1]: Removed slice kubepods-burstable-poda5934e4d_ce01_4a3b_a088_80117779d8e0.slice - libcontainer container kubepods-burstable-poda5934e4d_ce01_4a3b_a088_80117779d8e0.slice. Dec 13 01:29:43.617840 systemd[1]: kubepods-burstable-poda5934e4d_ce01_4a3b_a088_80117779d8e0.slice: Consumed 10.662s CPU time. 
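The mount units cleaned up during this teardown, such as var-lib-kubelet-pods-a5934e4d\x2dce01\x2d4a3b\x2da088\x2d80117779d8e0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount, are the per-volume paths under /var/lib/kubelet run through systemd's path escaping, which drops the leading "/", turns the remaining "/" separators into "-", and hex-escapes most other punctuation. A rough ASCII-only re-implementation, assuming the same rules as systemd-escape --path (illustrative, not systemd's code):

```go
package main

import (
	"fmt"
	"strings"
)

// escapePath approximates `systemd-escape --path`: strip surrounding "/",
// map the remaining "/" separators to "-", keep alphanumerics, "_" and a
// non-leading ".", and hex-escape everything else ("-" -> \x2d, "~" -> \x7e).
func escapePath(p string) string {
	p = strings.Trim(p, "/")
	var b strings.Builder
	for i, r := range p {
		switch {
		case r == '/':
			b.WriteByte('-')
		case r >= 'a' && r <= 'z', r >= 'A' && r <= 'Z', r >= '0' && r <= '9', r == '_':
			b.WriteRune(r)
		case r == '.' && i != 0:
			b.WriteRune(r)
		default:
			fmt.Fprintf(&b, `\x%02x`, r)
		}
	}
	return b.String()
}

func main() {
	p := "/var/lib/kubelet/pods/a5934e4d-ce01-4a3b-a088-80117779d8e0/volumes/kubernetes.io~projected/hubble-tls"
	fmt.Println(escapePath(p) + ".mount")
}
```

Running it on the hubble-tls projected volume path reproduces the unit name seen in the log.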
Dec 13 01:29:43.768635 kubelet[1792]: E1213 01:29:43.768567 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:29:43.814936 kubelet[1792]: E1213 01:29:43.814913 1792 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 01:29:43.951254 kubelet[1792]: I1213 01:29:43.951222 1792 scope.go:117] "RemoveContainer" containerID="a49b5bf651d92e96addf83a5696f324720beb0d0a36bc9359dc0716f4f7c1232" Dec 13 01:29:43.952620 containerd[1466]: time="2024-12-13T01:29:43.952585498Z" level=info msg="RemoveContainer for \"a49b5bf651d92e96addf83a5696f324720beb0d0a36bc9359dc0716f4f7c1232\"" Dec 13 01:29:43.956275 containerd[1466]: time="2024-12-13T01:29:43.956233234Z" level=info msg="RemoveContainer for \"a49b5bf651d92e96addf83a5696f324720beb0d0a36bc9359dc0716f4f7c1232\" returns successfully" Dec 13 01:29:43.956508 kubelet[1792]: I1213 01:29:43.956473 1792 scope.go:117] "RemoveContainer" containerID="84d22679f8df122b85da52a821cb4615034ee6add872d798fde94ef078dca0ea" Dec 13 01:29:43.957666 containerd[1466]: time="2024-12-13T01:29:43.957643199Z" level=info msg="RemoveContainer for \"84d22679f8df122b85da52a821cb4615034ee6add872d798fde94ef078dca0ea\"" Dec 13 01:29:43.960894 containerd[1466]: time="2024-12-13T01:29:43.960855648Z" level=info msg="RemoveContainer for \"84d22679f8df122b85da52a821cb4615034ee6add872d798fde94ef078dca0ea\" returns successfully" Dec 13 01:29:43.961046 kubelet[1792]: I1213 01:29:43.961019 1792 scope.go:117] "RemoveContainer" containerID="4ebf102b411b9f60fa26501083a19255628429614b23b2cab816f1980944ea6f" Dec 13 01:29:43.961926 containerd[1466]: time="2024-12-13T01:29:43.961888957Z" level=info msg="RemoveContainer for \"4ebf102b411b9f60fa26501083a19255628429614b23b2cab816f1980944ea6f\"" Dec 13 01:29:43.967027 containerd[1466]: time="2024-12-13T01:29:43.966990840Z" level=info msg="RemoveContainer for \"4ebf102b411b9f60fa26501083a19255628429614b23b2cab816f1980944ea6f\" returns successfully" Dec 13 01:29:43.967236 kubelet[1792]: I1213 01:29:43.967159 1792 scope.go:117] "RemoveContainer" containerID="a14ea8a06ba25ceddcf4f943f7403a8b8f959faaa823cb2e5f0c5ad4e0817c24" Dec 13 01:29:43.968109 containerd[1466]: time="2024-12-13T01:29:43.968069905Z" level=info msg="RemoveContainer for \"a14ea8a06ba25ceddcf4f943f7403a8b8f959faaa823cb2e5f0c5ad4e0817c24\"" Dec 13 01:29:43.971022 containerd[1466]: time="2024-12-13T01:29:43.970993271Z" level=info msg="RemoveContainer for \"a14ea8a06ba25ceddcf4f943f7403a8b8f959faaa823cb2e5f0c5ad4e0817c24\" returns successfully" Dec 13 01:29:43.971174 kubelet[1792]: I1213 01:29:43.971151 1792 scope.go:117] "RemoveContainer" containerID="1cefee057a92c5333bfea7231db0ba8f4308e2d457aac8acd98c90440680d5da" Dec 13 01:29:43.972057 containerd[1466]: time="2024-12-13T01:29:43.972029436Z" level=info msg="RemoveContainer for \"1cefee057a92c5333bfea7231db0ba8f4308e2d457aac8acd98c90440680d5da\"" Dec 13 01:29:43.974693 containerd[1466]: time="2024-12-13T01:29:43.974665362Z" level=info msg="RemoveContainer for \"1cefee057a92c5333bfea7231db0ba8f4308e2d457aac8acd98c90440680d5da\" returns successfully" Dec 13 01:29:44.768714 kubelet[1792]: E1213 01:29:44.768676 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:29:45.233773 kubelet[1792]: I1213 01:29:45.233732 1792 setters.go:568] "Node became not ready" 
node="10.0.0.53" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T01:29:45Z","lastTransitionTime":"2024-12-13T01:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 13 01:29:45.285401 kubelet[1792]: I1213 01:29:45.285349 1792 topology_manager.go:215] "Topology Admit Handler" podUID="b7e2a794-72b3-488d-9227-0f551e4bc0af" podNamespace="kube-system" podName="cilium-operator-5cc964979-f29r8" Dec 13 01:29:45.285401 kubelet[1792]: E1213 01:29:45.285405 1792 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a5934e4d-ce01-4a3b-a088-80117779d8e0" containerName="mount-cgroup" Dec 13 01:29:45.285401 kubelet[1792]: E1213 01:29:45.285416 1792 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a5934e4d-ce01-4a3b-a088-80117779d8e0" containerName="apply-sysctl-overwrites" Dec 13 01:29:45.285621 kubelet[1792]: E1213 01:29:45.285423 1792 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a5934e4d-ce01-4a3b-a088-80117779d8e0" containerName="mount-bpf-fs" Dec 13 01:29:45.285621 kubelet[1792]: E1213 01:29:45.285430 1792 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a5934e4d-ce01-4a3b-a088-80117779d8e0" containerName="clean-cilium-state" Dec 13 01:29:45.285621 kubelet[1792]: E1213 01:29:45.285437 1792 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a5934e4d-ce01-4a3b-a088-80117779d8e0" containerName="cilium-agent" Dec 13 01:29:45.285621 kubelet[1792]: I1213 01:29:45.285459 1792 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5934e4d-ce01-4a3b-a088-80117779d8e0" containerName="cilium-agent" Dec 13 01:29:45.285852 kubelet[1792]: I1213 01:29:45.285815 1792 topology_manager.go:215] "Topology Admit Handler" podUID="2b63208b-6bc6-436c-940a-1be336490033" podNamespace="kube-system" podName="cilium-jlgxn" Dec 13 01:29:45.291555 systemd[1]: Created slice kubepods-besteffort-podb7e2a794_72b3_488d_9227_0f551e4bc0af.slice - libcontainer container kubepods-besteffort-podb7e2a794_72b3_488d_9227_0f551e4bc0af.slice. Dec 13 01:29:45.295615 systemd[1]: Created slice kubepods-burstable-pod2b63208b_6bc6_436c_940a_1be336490033.slice - libcontainer container kubepods-burstable-pod2b63208b_6bc6_436c_940a_1be336490033.slice. 
Dec 13 01:29:45.441464 kubelet[1792]: I1213 01:29:45.441417 1792 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2b63208b-6bc6-436c-940a-1be336490033-lib-modules\") pod \"cilium-jlgxn\" (UID: \"2b63208b-6bc6-436c-940a-1be336490033\") " pod="kube-system/cilium-jlgxn" Dec 13 01:29:45.441618 kubelet[1792]: I1213 01:29:45.441541 1792 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2b63208b-6bc6-436c-940a-1be336490033-clustermesh-secrets\") pod \"cilium-jlgxn\" (UID: \"2b63208b-6bc6-436c-940a-1be336490033\") " pod="kube-system/cilium-jlgxn" Dec 13 01:29:45.441618 kubelet[1792]: I1213 01:29:45.441576 1792 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2b63208b-6bc6-436c-940a-1be336490033-cni-path\") pod \"cilium-jlgxn\" (UID: \"2b63208b-6bc6-436c-940a-1be336490033\") " pod="kube-system/cilium-jlgxn" Dec 13 01:29:45.441618 kubelet[1792]: I1213 01:29:45.441595 1792 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2b63208b-6bc6-436c-940a-1be336490033-cilium-ipsec-secrets\") pod \"cilium-jlgxn\" (UID: \"2b63208b-6bc6-436c-940a-1be336490033\") " pod="kube-system/cilium-jlgxn" Dec 13 01:29:45.441618 kubelet[1792]: I1213 01:29:45.441617 1792 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2b63208b-6bc6-436c-940a-1be336490033-hostproc\") pod \"cilium-jlgxn\" (UID: \"2b63208b-6bc6-436c-940a-1be336490033\") " pod="kube-system/cilium-jlgxn" Dec 13 01:29:45.441815 kubelet[1792]: I1213 01:29:45.441646 1792 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2b63208b-6bc6-436c-940a-1be336490033-etc-cni-netd\") pod \"cilium-jlgxn\" (UID: \"2b63208b-6bc6-436c-940a-1be336490033\") " pod="kube-system/cilium-jlgxn" Dec 13 01:29:45.441815 kubelet[1792]: I1213 01:29:45.441672 1792 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvt7t\" (UniqueName: \"kubernetes.io/projected/2b63208b-6bc6-436c-940a-1be336490033-kube-api-access-lvt7t\") pod \"cilium-jlgxn\" (UID: \"2b63208b-6bc6-436c-940a-1be336490033\") " pod="kube-system/cilium-jlgxn" Dec 13 01:29:45.441815 kubelet[1792]: I1213 01:29:45.441694 1792 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xfks\" (UniqueName: \"kubernetes.io/projected/b7e2a794-72b3-488d-9227-0f551e4bc0af-kube-api-access-4xfks\") pod \"cilium-operator-5cc964979-f29r8\" (UID: \"b7e2a794-72b3-488d-9227-0f551e4bc0af\") " pod="kube-system/cilium-operator-5cc964979-f29r8" Dec 13 01:29:45.441815 kubelet[1792]: I1213 01:29:45.441725 1792 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2b63208b-6bc6-436c-940a-1be336490033-cilium-run\") pod \"cilium-jlgxn\" (UID: \"2b63208b-6bc6-436c-940a-1be336490033\") " pod="kube-system/cilium-jlgxn" Dec 13 01:29:45.441815 kubelet[1792]: I1213 01:29:45.441762 1792 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2b63208b-6bc6-436c-940a-1be336490033-host-proc-sys-kernel\") pod \"cilium-jlgxn\" (UID: \"2b63208b-6bc6-436c-940a-1be336490033\") " pod="kube-system/cilium-jlgxn" Dec 13 01:29:45.441930 kubelet[1792]: I1213 01:29:45.441790 1792 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b7e2a794-72b3-488d-9227-0f551e4bc0af-cilium-config-path\") pod \"cilium-operator-5cc964979-f29r8\" (UID: \"b7e2a794-72b3-488d-9227-0f551e4bc0af\") " pod="kube-system/cilium-operator-5cc964979-f29r8" Dec 13 01:29:45.441930 kubelet[1792]: I1213 01:29:45.441809 1792 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2b63208b-6bc6-436c-940a-1be336490033-xtables-lock\") pod \"cilium-jlgxn\" (UID: \"2b63208b-6bc6-436c-940a-1be336490033\") " pod="kube-system/cilium-jlgxn" Dec 13 01:29:45.441930 kubelet[1792]: I1213 01:29:45.441840 1792 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2b63208b-6bc6-436c-940a-1be336490033-cilium-config-path\") pod \"cilium-jlgxn\" (UID: \"2b63208b-6bc6-436c-940a-1be336490033\") " pod="kube-system/cilium-jlgxn" Dec 13 01:29:45.441930 kubelet[1792]: I1213 01:29:45.441886 1792 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2b63208b-6bc6-436c-940a-1be336490033-hubble-tls\") pod \"cilium-jlgxn\" (UID: \"2b63208b-6bc6-436c-940a-1be336490033\") " pod="kube-system/cilium-jlgxn" Dec 13 01:29:45.442022 kubelet[1792]: I1213 01:29:45.441934 1792 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2b63208b-6bc6-436c-940a-1be336490033-bpf-maps\") pod \"cilium-jlgxn\" (UID: \"2b63208b-6bc6-436c-940a-1be336490033\") " pod="kube-system/cilium-jlgxn" Dec 13 01:29:45.442022 kubelet[1792]: I1213 01:29:45.441958 1792 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2b63208b-6bc6-436c-940a-1be336490033-cilium-cgroup\") pod \"cilium-jlgxn\" (UID: \"2b63208b-6bc6-436c-940a-1be336490033\") " pod="kube-system/cilium-jlgxn" Dec 13 01:29:45.442022 kubelet[1792]: I1213 01:29:45.441993 1792 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2b63208b-6bc6-436c-940a-1be336490033-host-proc-sys-net\") pod \"cilium-jlgxn\" (UID: \"2b63208b-6bc6-436c-940a-1be336490033\") " pod="kube-system/cilium-jlgxn" Dec 13 01:29:45.594162 kubelet[1792]: E1213 01:29:45.594070 1792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:29:45.594909 containerd[1466]: time="2024-12-13T01:29:45.594554322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-f29r8,Uid:b7e2a794-72b3-488d-9227-0f551e4bc0af,Namespace:kube-system,Attempt:0,}" Dec 13 01:29:45.607589 kubelet[1792]: E1213 01:29:45.607495 1792 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:29:45.608146 containerd[1466]: time="2024-12-13T01:29:45.608097453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jlgxn,Uid:2b63208b-6bc6-436c-940a-1be336490033,Namespace:kube-system,Attempt:0,}" Dec 13 01:29:45.612647 kubelet[1792]: I1213 01:29:45.612619 1792 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="a5934e4d-ce01-4a3b-a088-80117779d8e0" path="/var/lib/kubelet/pods/a5934e4d-ce01-4a3b-a088-80117779d8e0/volumes" Dec 13 01:29:45.615294 containerd[1466]: time="2024-12-13T01:29:45.615199068Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:29:45.615294 containerd[1466]: time="2024-12-13T01:29:45.615272165Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:29:45.615468 containerd[1466]: time="2024-12-13T01:29:45.615314645Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:29:45.615506 containerd[1466]: time="2024-12-13T01:29:45.615437475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:29:45.632656 containerd[1466]: time="2024-12-13T01:29:45.631773428Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:29:45.632656 containerd[1466]: time="2024-12-13T01:29:45.632588247Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:29:45.632656 containerd[1466]: time="2024-12-13T01:29:45.632602634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:29:45.632839 containerd[1466]: time="2024-12-13T01:29:45.632690348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:29:45.640665 systemd[1]: Started cri-containerd-b19f545889d7ce0b95d77dbb42ba6d0ca57732e8356db0c9ce099e9ebb694e83.scope - libcontainer container b19f545889d7ce0b95d77dbb42ba6d0ca57732e8356db0c9ce099e9ebb694e83. Dec 13 01:29:45.646734 systemd[1]: Started cri-containerd-e831bb9a68295304bef8a6b2cce2ce0c82c2a7f579117db686542769c97526c0.scope - libcontainer container e831bb9a68295304bef8a6b2cce2ce0c82c2a7f579117db686542769c97526c0. 
Dec 13 01:29:45.670435 containerd[1466]: time="2024-12-13T01:29:45.670394395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jlgxn,Uid:2b63208b-6bc6-436c-940a-1be336490033,Namespace:kube-system,Attempt:0,} returns sandbox id \"e831bb9a68295304bef8a6b2cce2ce0c82c2a7f579117db686542769c97526c0\"" Dec 13 01:29:45.671417 kubelet[1792]: E1213 01:29:45.671392 1792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:29:45.673278 containerd[1466]: time="2024-12-13T01:29:45.673167739Z" level=info msg="CreateContainer within sandbox \"e831bb9a68295304bef8a6b2cce2ce0c82c2a7f579117db686542769c97526c0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 01:29:45.684367 containerd[1466]: time="2024-12-13T01:29:45.684314163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-f29r8,Uid:b7e2a794-72b3-488d-9227-0f551e4bc0af,Namespace:kube-system,Attempt:0,} returns sandbox id \"b19f545889d7ce0b95d77dbb42ba6d0ca57732e8356db0c9ce099e9ebb694e83\"" Dec 13 01:29:45.684940 kubelet[1792]: E1213 01:29:45.684915 1792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:29:45.686111 containerd[1466]: time="2024-12-13T01:29:45.686076200Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 01:29:45.690540 containerd[1466]: time="2024-12-13T01:29:45.690501493Z" level=info msg="CreateContainer within sandbox \"e831bb9a68295304bef8a6b2cce2ce0c82c2a7f579117db686542769c97526c0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"fc592d07eff5a322d36a8c6e2f17709ad78901bd375940befbb1c7fee8a1faf7\"" Dec 13 01:29:45.690881 containerd[1466]: time="2024-12-13T01:29:45.690846440Z" level=info msg="StartContainer for \"fc592d07eff5a322d36a8c6e2f17709ad78901bd375940befbb1c7fee8a1faf7\"" Dec 13 01:29:45.716599 systemd[1]: Started cri-containerd-fc592d07eff5a322d36a8c6e2f17709ad78901bd375940befbb1c7fee8a1faf7.scope - libcontainer container fc592d07eff5a322d36a8c6e2f17709ad78901bd375940befbb1c7fee8a1faf7. Dec 13 01:29:45.741892 containerd[1466]: time="2024-12-13T01:29:45.741847747Z" level=info msg="StartContainer for \"fc592d07eff5a322d36a8c6e2f17709ad78901bd375940befbb1c7fee8a1faf7\" returns successfully" Dec 13 01:29:45.751344 systemd[1]: cri-containerd-fc592d07eff5a322d36a8c6e2f17709ad78901bd375940befbb1c7fee8a1faf7.scope: Deactivated successfully. 
Dec 13 01:29:45.769678 kubelet[1792]: E1213 01:29:45.769617 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:29:45.782962 containerd[1466]: time="2024-12-13T01:29:45.782895848Z" level=info msg="shim disconnected" id=fc592d07eff5a322d36a8c6e2f17709ad78901bd375940befbb1c7fee8a1faf7 namespace=k8s.io Dec 13 01:29:45.782962 containerd[1466]: time="2024-12-13T01:29:45.782955279Z" level=warning msg="cleaning up after shim disconnected" id=fc592d07eff5a322d36a8c6e2f17709ad78901bd375940befbb1c7fee8a1faf7 namespace=k8s.io Dec 13 01:29:45.782962 containerd[1466]: time="2024-12-13T01:29:45.782963926Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:29:45.956812 kubelet[1792]: E1213 01:29:45.956783 1792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:29:45.958981 containerd[1466]: time="2024-12-13T01:29:45.958911046Z" level=info msg="CreateContainer within sandbox \"e831bb9a68295304bef8a6b2cce2ce0c82c2a7f579117db686542769c97526c0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 01:29:45.970724 containerd[1466]: time="2024-12-13T01:29:45.970685930Z" level=info msg="CreateContainer within sandbox \"e831bb9a68295304bef8a6b2cce2ce0c82c2a7f579117db686542769c97526c0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1dd29a63aaabe3dd0f8c5ed36caec3ffcbda48ada79ace4f147850445cf03d8e\"" Dec 13 01:29:45.973810 containerd[1466]: time="2024-12-13T01:29:45.973696008Z" level=info msg="StartContainer for \"1dd29a63aaabe3dd0f8c5ed36caec3ffcbda48ada79ace4f147850445cf03d8e\"" Dec 13 01:29:45.999617 systemd[1]: Started cri-containerd-1dd29a63aaabe3dd0f8c5ed36caec3ffcbda48ada79ace4f147850445cf03d8e.scope - libcontainer container 1dd29a63aaabe3dd0f8c5ed36caec3ffcbda48ada79ace4f147850445cf03d8e. Dec 13 01:29:46.023759 containerd[1466]: time="2024-12-13T01:29:46.023720891Z" level=info msg="StartContainer for \"1dd29a63aaabe3dd0f8c5ed36caec3ffcbda48ada79ace4f147850445cf03d8e\" returns successfully" Dec 13 01:29:46.031061 systemd[1]: cri-containerd-1dd29a63aaabe3dd0f8c5ed36caec3ffcbda48ada79ace4f147850445cf03d8e.scope: Deactivated successfully. 
Dec 13 01:29:46.055178 containerd[1466]: time="2024-12-13T01:29:46.055098154Z" level=info msg="shim disconnected" id=1dd29a63aaabe3dd0f8c5ed36caec3ffcbda48ada79ace4f147850445cf03d8e namespace=k8s.io Dec 13 01:29:46.055178 containerd[1466]: time="2024-12-13T01:29:46.055155302Z" level=warning msg="cleaning up after shim disconnected" id=1dd29a63aaabe3dd0f8c5ed36caec3ffcbda48ada79ace4f147850445cf03d8e namespace=k8s.io Dec 13 01:29:46.055178 containerd[1466]: time="2024-12-13T01:29:46.055164349Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:29:46.770424 kubelet[1792]: E1213 01:29:46.770382 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:29:46.961195 kubelet[1792]: E1213 01:29:46.961168 1792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:29:46.962777 containerd[1466]: time="2024-12-13T01:29:46.962745282Z" level=info msg="CreateContainer within sandbox \"e831bb9a68295304bef8a6b2cce2ce0c82c2a7f579117db686542769c97526c0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 01:29:46.978431 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1841592536.mount: Deactivated successfully. Dec 13 01:29:46.979799 containerd[1466]: time="2024-12-13T01:29:46.979762711Z" level=info msg="CreateContainer within sandbox \"e831bb9a68295304bef8a6b2cce2ce0c82c2a7f579117db686542769c97526c0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d39889d296b408d1261fdd7578712123546212bae91f6de5df4fc8311c890fc0\"" Dec 13 01:29:46.980333 containerd[1466]: time="2024-12-13T01:29:46.980285002Z" level=info msg="StartContainer for \"d39889d296b408d1261fdd7578712123546212bae91f6de5df4fc8311c890fc0\"" Dec 13 01:29:47.013677 systemd[1]: Started cri-containerd-d39889d296b408d1261fdd7578712123546212bae91f6de5df4fc8311c890fc0.scope - libcontainer container d39889d296b408d1261fdd7578712123546212bae91f6de5df4fc8311c890fc0. Dec 13 01:29:47.048403 containerd[1466]: time="2024-12-13T01:29:47.046703811Z" level=info msg="StartContainer for \"d39889d296b408d1261fdd7578712123546212bae91f6de5df4fc8311c890fc0\" returns successfully" Dec 13 01:29:47.049275 systemd[1]: cri-containerd-d39889d296b408d1261fdd7578712123546212bae91f6de5df4fc8311c890fc0.scope: Deactivated successfully. Dec 13 01:29:47.078105 containerd[1466]: time="2024-12-13T01:29:47.078020598Z" level=info msg="shim disconnected" id=d39889d296b408d1261fdd7578712123546212bae91f6de5df4fc8311c890fc0 namespace=k8s.io Dec 13 01:29:47.078105 containerd[1466]: time="2024-12-13T01:29:47.078075912Z" level=warning msg="cleaning up after shim disconnected" id=d39889d296b408d1261fdd7578712123546212bae91f6de5df4fc8311c890fc0 namespace=k8s.io Dec 13 01:29:47.078105 containerd[1466]: time="2024-12-13T01:29:47.078085179Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:29:47.547740 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d39889d296b408d1261fdd7578712123546212bae91f6de5df4fc8311c890fc0-rootfs.mount: Deactivated successfully. 
Dec 13 01:29:47.770792 kubelet[1792]: E1213 01:29:47.770743 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:29:47.967945 kubelet[1792]: E1213 01:29:47.967913 1792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:29:47.971419 containerd[1466]: time="2024-12-13T01:29:47.970960217Z" level=info msg="CreateContainer within sandbox \"e831bb9a68295304bef8a6b2cce2ce0c82c2a7f579117db686542769c97526c0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 01:29:48.008412 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4265109433.mount: Deactivated successfully.
Dec 13 01:29:48.013894 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1349085657.mount: Deactivated successfully.
Dec 13 01:29:48.025770 containerd[1466]: time="2024-12-13T01:29:48.025703477Z" level=info msg="CreateContainer within sandbox \"e831bb9a68295304bef8a6b2cce2ce0c82c2a7f579117db686542769c97526c0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6af919784011333e72cd328a87da247956f6eda1246efc3209049b3e2852d617\""
Dec 13 01:29:48.026550 containerd[1466]: time="2024-12-13T01:29:48.026497647Z" level=info msg="StartContainer for \"6af919784011333e72cd328a87da247956f6eda1246efc3209049b3e2852d617\""
Dec 13 01:29:48.061622 systemd[1]: Started cri-containerd-6af919784011333e72cd328a87da247956f6eda1246efc3209049b3e2852d617.scope - libcontainer container 6af919784011333e72cd328a87da247956f6eda1246efc3209049b3e2852d617.
Dec 13 01:29:48.090957 systemd[1]: cri-containerd-6af919784011333e72cd328a87da247956f6eda1246efc3209049b3e2852d617.scope: Deactivated successfully.
Dec 13 01:29:48.093357 containerd[1466]: time="2024-12-13T01:29:48.093199772Z" level=info msg="StartContainer for \"6af919784011333e72cd328a87da247956f6eda1246efc3209049b3e2852d617\" returns successfully"
Dec 13 01:29:48.190229 containerd[1466]: time="2024-12-13T01:29:48.190152544Z" level=info msg="shim disconnected" id=6af919784011333e72cd328a87da247956f6eda1246efc3209049b3e2852d617 namespace=k8s.io
Dec 13 01:29:48.190229 containerd[1466]: time="2024-12-13T01:29:48.190220191Z" level=warning msg="cleaning up after shim disconnected" id=6af919784011333e72cd328a87da247956f6eda1246efc3209049b3e2852d617 namespace=k8s.io
Dec 13 01:29:48.190229 containerd[1466]: time="2024-12-13T01:29:48.190230461Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:29:48.204903 containerd[1466]: time="2024-12-13T01:29:48.204838989Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:29:48Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Dec 13 01:29:48.547992 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6af919784011333e72cd328a87da247956f6eda1246efc3209049b3e2852d617-rootfs.mount: Deactivated successfully.
Dec 13 01:29:48.771314 kubelet[1792]: E1213 01:29:48.771232 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:29:48.816345 kubelet[1792]: E1213 01:29:48.816215 1792 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 01:29:48.837061 containerd[1466]: time="2024-12-13T01:29:48.836997104Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:29:48.837785 containerd[1466]: time="2024-12-13T01:29:48.837711965Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907225"
Dec 13 01:29:48.838884 containerd[1466]: time="2024-12-13T01:29:48.838849259Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:29:48.840293 containerd[1466]: time="2024-12-13T01:29:48.840226283Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.154115408s"
Dec 13 01:29:48.840354 containerd[1466]: time="2024-12-13T01:29:48.840289672Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Dec 13 01:29:48.842390 containerd[1466]: time="2024-12-13T01:29:48.842347162Z" level=info msg="CreateContainer within sandbox \"b19f545889d7ce0b95d77dbb42ba6d0ca57732e8356db0c9ce099e9ebb694e83\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Dec 13 01:29:48.859604 containerd[1466]: time="2024-12-13T01:29:48.859559145Z" level=info msg="CreateContainer within sandbox \"b19f545889d7ce0b95d77dbb42ba6d0ca57732e8356db0c9ce099e9ebb694e83\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b6b8ef60e10c16d77d2753d7b53c2270f9839cd87c0c71adfcf93419053d616c\""
Dec 13 01:29:48.860204 containerd[1466]: time="2024-12-13T01:29:48.860170592Z" level=info msg="StartContainer for \"b6b8ef60e10c16d77d2753d7b53c2270f9839cd87c0c71adfcf93419053d616c\""
Dec 13 01:29:48.897678 systemd[1]: Started cri-containerd-b6b8ef60e10c16d77d2753d7b53c2270f9839cd87c0c71adfcf93419053d616c.scope - libcontainer container b6b8ef60e10c16d77d2753d7b53c2270f9839cd87c0c71adfcf93419053d616c.
Dec 13 01:29:48.927454 containerd[1466]: time="2024-12-13T01:29:48.927396700Z" level=info msg="StartContainer for \"b6b8ef60e10c16d77d2753d7b53c2270f9839cd87c0c71adfcf93419053d616c\" returns successfully"
Dec 13 01:29:48.974843 kubelet[1792]: E1213 01:29:48.974783 1792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:29:48.976025 kubelet[1792]: E1213 01:29:48.976006 1792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:29:48.977141 containerd[1466]: time="2024-12-13T01:29:48.977093622Z" level=info msg="CreateContainer within sandbox \"e831bb9a68295304bef8a6b2cce2ce0c82c2a7f579117db686542769c97526c0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 01:29:48.998668 containerd[1466]: time="2024-12-13T01:29:48.998534107Z" level=info msg="CreateContainer within sandbox \"e831bb9a68295304bef8a6b2cce2ce0c82c2a7f579117db686542769c97526c0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5b607e7653b35b77fa89b85021f582f070aeabcbb22937ce909db0f2dc20ce83\""
Dec 13 01:29:49.000323 kubelet[1792]: I1213 01:29:49.000299 1792 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-f29r8" podStartSLOduration=0.845452548 podStartE2EDuration="4.000260356s" podCreationTimestamp="2024-12-13 01:29:45 +0000 UTC" firstStartedPulling="2024-12-13 01:29:45.685767169 +0000 UTC m=+73.802347592" lastFinishedPulling="2024-12-13 01:29:48.840574987 +0000 UTC m=+76.957155400" observedRunningTime="2024-12-13 01:29:48.999720764 +0000 UTC m=+77.116301187" watchObservedRunningTime="2024-12-13 01:29:49.000260356 +0000 UTC m=+77.116840780"
Dec 13 01:29:49.000601 containerd[1466]: time="2024-12-13T01:29:49.000557514Z" level=info msg="StartContainer for \"5b607e7653b35b77fa89b85021f582f070aeabcbb22937ce909db0f2dc20ce83\""
Dec 13 01:29:49.036839 systemd[1]: Started cri-containerd-5b607e7653b35b77fa89b85021f582f070aeabcbb22937ce909db0f2dc20ce83.scope - libcontainer container 5b607e7653b35b77fa89b85021f582f070aeabcbb22937ce909db0f2dc20ce83.
Dec 13 01:29:49.070468 containerd[1466]: time="2024-12-13T01:29:49.069727626Z" level=info msg="StartContainer for \"5b607e7653b35b77fa89b85021f582f070aeabcbb22937ce909db0f2dc20ce83\" returns successfully"
Dec 13 01:29:49.501524 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Dec 13 01:29:49.772091 kubelet[1792]: E1213 01:29:49.771956 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:29:49.983604 kubelet[1792]: E1213 01:29:49.983572 1792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:29:49.983914 kubelet[1792]: E1213 01:29:49.983888 1792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:29:49.995871 kubelet[1792]: I1213 01:29:49.995832 1792 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-jlgxn" podStartSLOduration=4.99578874 podStartE2EDuration="4.99578874s" podCreationTimestamp="2024-12-13 01:29:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:29:49.995305714 +0000 UTC m=+78.111886137" watchObservedRunningTime="2024-12-13 01:29:49.99578874 +0000 UTC m=+78.112369163"
Dec 13 01:29:50.772202 kubelet[1792]: E1213 01:29:50.772130 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:29:51.608852 kubelet[1792]: E1213 01:29:51.608819 1792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:29:51.773107 kubelet[1792]: E1213 01:29:51.773063 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:29:52.654162 systemd-networkd[1407]: lxc_health: Link UP
Dec 13 01:29:52.668759 systemd-networkd[1407]: lxc_health: Gained carrier
Dec 13 01:29:52.713082 kubelet[1792]: E1213 01:29:52.713013 1792 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:29:52.774301 kubelet[1792]: E1213 01:29:52.774226 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:29:53.613296 kubelet[1792]: E1213 01:29:53.613255 1792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:29:53.774470 kubelet[1792]: E1213 01:29:53.774403 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:29:53.950962 systemd[1]: run-containerd-runc-k8s.io-5b607e7653b35b77fa89b85021f582f070aeabcbb22937ce909db0f2dc20ce83-runc.yl6c7Q.mount: Deactivated successfully.
Dec 13 01:29:53.992545 kubelet[1792]: E1213 01:29:53.992375 1792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:29:54.738716 systemd-networkd[1407]: lxc_health: Gained IPv6LL
Dec 13 01:29:54.774854 kubelet[1792]: E1213 01:29:54.774783 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:29:54.994609 kubelet[1792]: E1213 01:29:54.994500 1792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:29:55.775563 kubelet[1792]: E1213 01:29:55.775511 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:29:56.775973 kubelet[1792]: E1213 01:29:56.775901 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:29:57.776563 kubelet[1792]: E1213 01:29:57.776391 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:29:58.776708 kubelet[1792]: E1213 01:29:58.776608 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:29:59.777353 kubelet[1792]: E1213 01:29:59.777229 1792 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"