Dec 13 01:38:17.899653 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024
Dec 13 01:38:17.899679 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:38:17.899690 kernel: BIOS-provided physical RAM map:
Dec 13 01:38:17.899696 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Dec 13 01:38:17.899702 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Dec 13 01:38:17.899708 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Dec 13 01:38:17.899716 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Dec 13 01:38:17.899722 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Dec 13 01:38:17.899728 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Dec 13 01:38:17.899734 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Dec 13 01:38:17.899742 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Dec 13 01:38:17.899749 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Dec 13 01:38:17.899755 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Dec 13 01:38:17.899761 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Dec 13 01:38:17.899769 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Dec 13 01:38:17.899775 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Dec 13 01:38:17.899784 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Dec 13 01:38:17.899791 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Dec 13 01:38:17.899798 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Dec 13 01:38:17.899804 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 13 01:38:17.899811 kernel: NX (Execute Disable) protection: active
Dec 13 01:38:17.899817 kernel: APIC: Static calls initialized
Dec 13 01:38:17.899824 kernel: efi: EFI v2.7 by EDK II
Dec 13 01:38:17.899831 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118
Dec 13 01:38:17.899838 kernel: SMBIOS 2.8 present.
Dec 13 01:38:17.899844 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Dec 13 01:38:17.899851 kernel: Hypervisor detected: KVM
Dec 13 01:38:17.899860 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 01:38:17.899866 kernel: kvm-clock: using sched offset of 4663165765 cycles
Dec 13 01:38:17.899874 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 01:38:17.899881 kernel: tsc: Detected 2794.748 MHz processor
Dec 13 01:38:17.899888 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 01:38:17.899895 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 01:38:17.899902 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Dec 13 01:38:17.899909 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Dec 13 01:38:17.899916 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 01:38:17.899925 kernel: Using GB pages for direct mapping
Dec 13 01:38:17.899932 kernel: Secure boot disabled
Dec 13 01:38:17.899939 kernel: ACPI: Early table checksum verification disabled
Dec 13 01:38:17.899946 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Dec 13 01:38:17.899956 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Dec 13 01:38:17.899963 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:38:17.899970 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:38:17.899995 kernel: ACPI: FACS 0x000000009CBDD000 000040
Dec 13 01:38:17.900006 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:38:17.900015 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:38:17.900024 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:38:17.900034 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:38:17.900044 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Dec 13 01:38:17.900053 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Dec 13 01:38:17.900066 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
Dec 13 01:38:17.900076 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Dec 13 01:38:17.900107 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Dec 13 01:38:17.900114 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Dec 13 01:38:17.900121 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Dec 13 01:38:17.900129 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Dec 13 01:38:17.900135 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Dec 13 01:38:17.900143 kernel: No NUMA configuration found
Dec 13 01:38:17.900150 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Dec 13 01:38:17.900168 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Dec 13 01:38:17.900175 kernel: Zone ranges:
Dec 13 01:38:17.900183 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 01:38:17.900190 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Dec 13 01:38:17.900198 kernel: Normal empty
Dec 13 01:38:17.900205 kernel: Movable zone start for each node
Dec 13 01:38:17.900212 kernel: Early memory node ranges
Dec 13 01:38:17.900219 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Dec 13 01:38:17.900226 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Dec 13 01:38:17.900233 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Dec 13 01:38:17.900243 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Dec 13 01:38:17.900250 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Dec 13 01:38:17.900257 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Dec 13 01:38:17.900265 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Dec 13 01:38:17.900272 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 01:38:17.900279 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Dec 13 01:38:17.900286 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Dec 13 01:38:17.900293 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 01:38:17.900300 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Dec 13 01:38:17.900309 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Dec 13 01:38:17.900317 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Dec 13 01:38:17.900324 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 01:38:17.900331 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 01:38:17.900338 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 01:38:17.900345 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 01:38:17.900352 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 01:38:17.900359 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 01:38:17.900367 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 01:38:17.900376 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 01:38:17.900383 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 01:38:17.900390 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 01:38:17.900398 kernel: TSC deadline timer available
Dec 13 01:38:17.900405 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Dec 13 01:38:17.900412 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 13 01:38:17.900419 kernel: kvm-guest: KVM setup pv remote TLB flush
Dec 13 01:38:17.900426 kernel: kvm-guest: setup PV sched yield
Dec 13 01:38:17.900434 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Dec 13 01:38:17.900442 kernel: Booting paravirtualized kernel on KVM
Dec 13 01:38:17.900453 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 01:38:17.900460 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Dec 13 01:38:17.900468 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Dec 13 01:38:17.900475 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Dec 13 01:38:17.900482 kernel: pcpu-alloc: [0] 0 1 2 3
Dec 13 01:38:17.900489 kernel: kvm-guest: PV spinlocks enabled
Dec 13 01:38:17.900496 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 01:38:17.900505 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:38:17.900518 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 01:38:17.900526 kernel: random: crng init done
Dec 13 01:38:17.900534 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 01:38:17.900541 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 01:38:17.900548 kernel: Fallback order for Node 0: 0
Dec 13 01:38:17.900555 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Dec 13 01:38:17.900562 kernel: Policy zone: DMA32
Dec 13 01:38:17.900570 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 01:38:17.900577 kernel: Memory: 2395616K/2567000K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 171124K reserved, 0K cma-reserved)
Dec 13 01:38:17.900587 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 13 01:38:17.900596 kernel: ftrace: allocating 37902 entries in 149 pages
Dec 13 01:38:17.900605 kernel: ftrace: allocated 149 pages with 4 groups
Dec 13 01:38:17.900612 kernel: Dynamic Preempt: voluntary
Dec 13 01:38:17.900627 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 01:38:17.900638 kernel: rcu: RCU event tracing is enabled.
Dec 13 01:38:17.900646 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 13 01:38:17.900654 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 01:38:17.900662 kernel: Rude variant of Tasks RCU enabled.
Dec 13 01:38:17.900671 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 01:38:17.900680 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 01:38:17.900688 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 13 01:38:17.900698 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Dec 13 01:38:17.900705 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 01:38:17.900713 kernel: Console: colour dummy device 80x25
Dec 13 01:38:17.900720 kernel: printk: console [ttyS0] enabled
Dec 13 01:38:17.900728 kernel: ACPI: Core revision 20230628
Dec 13 01:38:17.900738 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 13 01:38:17.900745 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 01:38:17.900753 kernel: x2apic enabled
Dec 13 01:38:17.900760 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 13 01:38:17.900768 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Dec 13 01:38:17.900776 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Dec 13 01:38:17.900783 kernel: kvm-guest: setup PV IPIs
Dec 13 01:38:17.900791 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 13 01:38:17.900798 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 13 01:38:17.900808 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Dec 13 01:38:17.900816 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 13 01:38:17.900823 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 13 01:38:17.900830 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 13 01:38:17.900838 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 01:38:17.900846 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 01:38:17.900853 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 01:38:17.900861 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 01:38:17.900868 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 13 01:38:17.900878 kernel: RETBleed: Mitigation: untrained return thunk
Dec 13 01:38:17.900885 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 01:38:17.900893 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 13 01:38:17.900901 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 13 01:38:17.900908 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 13 01:38:17.900916 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 13 01:38:17.900924 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 01:38:17.900931 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 01:38:17.900941 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 01:38:17.900949 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 01:38:17.900956 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec 13 01:38:17.900964 kernel: Freeing SMP alternatives memory: 32K
Dec 13 01:38:17.900971 kernel: pid_max: default: 32768 minimum: 301
Dec 13 01:38:17.900979 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 01:38:17.900986 kernel: landlock: Up and running.
Dec 13 01:38:17.900994 kernel: SELinux: Initializing.
Dec 13 01:38:17.901001 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 01:38:17.901011 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 01:38:17.901019 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 13 01:38:17.901027 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 01:38:17.901034 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 01:38:17.901042 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 01:38:17.901049 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 13 01:38:17.901057 kernel: ... version:                0
Dec 13 01:38:17.901064 kernel: ... bit width:              48
Dec 13 01:38:17.901071 kernel: ... generic registers:      6
Dec 13 01:38:17.901081 kernel: ... value mask:             0000ffffffffffff
Dec 13 01:38:17.901111 kernel: ... max period:             00007fffffffffff
Dec 13 01:38:17.901121 kernel: ... fixed-purpose events:   0
Dec 13 01:38:17.901132 kernel: ... event mask:             000000000000003f
Dec 13 01:38:17.901142 kernel: signal: max sigframe size: 1776
Dec 13 01:38:17.901151 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 01:38:17.901172 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 01:38:17.901181 kernel: smp: Bringing up secondary CPUs ...
Dec 13 01:38:17.901191 kernel: smpboot: x86: Booting SMP configuration:
Dec 13 01:38:17.901205 kernel: .... node #0, CPUs: #1 #2 #3
Dec 13 01:38:17.901214 kernel: smp: Brought up 1 node, 4 CPUs
Dec 13 01:38:17.901224 kernel: smpboot: Max logical packages: 1
Dec 13 01:38:17.901235 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Dec 13 01:38:17.901242 kernel: devtmpfs: initialized
Dec 13 01:38:17.901250 kernel: x86/mm: Memory block size: 128MB
Dec 13 01:38:17.901258 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Dec 13 01:38:17.901266 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Dec 13 01:38:17.901273 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Dec 13 01:38:17.901283 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Dec 13 01:38:17.901291 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Dec 13 01:38:17.901299 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 01:38:17.901306 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 13 01:38:17.901314 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 01:38:17.901321 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 01:38:17.901329 kernel: audit: initializing netlink subsys (disabled)
Dec 13 01:38:17.901336 kernel: audit: type=2000 audit(1734053897.412:1): state=initialized audit_enabled=0 res=1
Dec 13 01:38:17.901344 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 01:38:17.901354 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 01:38:17.901361 kernel: cpuidle: using governor menu
Dec 13 01:38:17.901369 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 01:38:17.901376 kernel: dca service started, version 1.12.1
Dec 13 01:38:17.901384 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Dec 13 01:38:17.901391 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Dec 13 01:38:17.901399 kernel: PCI: Using configuration type 1 for base access
Dec 13 01:38:17.901407 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 01:38:17.901414 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 01:38:17.901425 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 01:38:17.901432 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 01:38:17.901440 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 01:38:17.901447 kernel: ACPI: Added _OSI(Module Device)
Dec 13 01:38:17.901454 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 01:38:17.901462 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 01:38:17.901469 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 01:38:17.901477 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 01:38:17.901484 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 13 01:38:17.901494 kernel: ACPI: Interpreter enabled
Dec 13 01:38:17.901502 kernel: ACPI: PM: (supports S0 S3 S5)
Dec 13 01:38:17.901509 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 01:38:17.901517 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 01:38:17.901524 kernel: PCI: Using E820 reservations for host bridge windows
Dec 13 01:38:17.901532 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 13 01:38:17.901539 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 01:38:17.901716 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 01:38:17.901847 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Dec 13 01:38:17.901967 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Dec 13 01:38:17.901981 kernel: PCI host bridge to bus 0000:00
Dec 13 01:38:17.902121 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 01:38:17.902244 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 01:38:17.902355 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 01:38:17.902470 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Dec 13 01:38:17.902586 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 13 01:38:17.902694 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Dec 13 01:38:17.902804 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 01:38:17.902941 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Dec 13 01:38:17.903069 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Dec 13 01:38:17.903220 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Dec 13 01:38:17.903407 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Dec 13 01:38:17.903544 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Dec 13 01:38:17.903665 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Dec 13 01:38:17.903785 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 01:38:17.903919 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Dec 13 01:38:17.904040 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Dec 13 01:38:17.904190 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Dec 13 01:38:17.904315 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Dec 13 01:38:17.904442 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Dec 13 01:38:17.904586 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Dec 13 01:38:17.904719 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Dec 13 01:38:17.904839 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Dec 13 01:38:17.904966 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Dec 13 01:38:17.905105 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Dec 13 01:38:17.905242 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Dec 13 01:38:17.905363 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Dec 13 01:38:17.905527 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Dec 13 01:38:17.905658 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Dec 13 01:38:17.905777 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 13 01:38:17.905905 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Dec 13 01:38:17.906030 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Dec 13 01:38:17.906170 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Dec 13 01:38:17.906298 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Dec 13 01:38:17.906423 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Dec 13 01:38:17.906434 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 01:38:17.906441 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 01:38:17.906449 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 01:38:17.906456 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 01:38:17.906468 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 13 01:38:17.906475 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 13 01:38:17.906483 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 13 01:38:17.906490 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 13 01:38:17.906498 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 13 01:38:17.906505 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 13 01:38:17.906513 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 13 01:38:17.906520 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 13 01:38:17.906528 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 13 01:38:17.906538 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 13 01:38:17.906545 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 13 01:38:17.906553 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 13 01:38:17.906560 kernel: iommu: Default domain type: Translated
Dec 13 01:38:17.906568 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 01:38:17.906575 kernel: efivars: Registered efivars operations
Dec 13 01:38:17.906582 kernel: PCI: Using ACPI for IRQ routing
Dec 13 01:38:17.906590 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 01:38:17.906598 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Dec 13 01:38:17.906607 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Dec 13 01:38:17.906615 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Dec 13 01:38:17.906622 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Dec 13 01:38:17.906780 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 13 01:38:17.906946 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 13 01:38:17.907110 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 01:38:17.907122 kernel: vgaarb: loaded
Dec 13 01:38:17.907130 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 13 01:38:17.907138 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 13 01:38:17.907149 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 01:38:17.907165 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 01:38:17.907173 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 01:38:17.907180 kernel: pnp: PnP ACPI init
Dec 13 01:38:17.907311 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 13 01:38:17.907322 kernel: pnp: PnP ACPI: found 6 devices
Dec 13 01:38:17.907330 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 01:38:17.907338 kernel: NET: Registered PF_INET protocol family
Dec 13 01:38:17.907349 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 01:38:17.907357 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 01:38:17.907365 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 01:38:17.907372 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 01:38:17.907380 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 13 01:38:17.907387 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 01:38:17.907395 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 01:38:17.907403 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 01:38:17.907410 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 01:38:17.907420 kernel: NET: Registered PF_XDP protocol family
Dec 13 01:38:17.907568 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Dec 13 01:38:17.907708 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Dec 13 01:38:17.907820 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 01:38:17.907930 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 01:38:17.908040 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 01:38:17.908171 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Dec 13 01:38:17.908292 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 13 01:38:17.908422 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Dec 13 01:38:17.908435 kernel: PCI: CLS 0 bytes, default 64
Dec 13 01:38:17.908444 kernel: Initialise system trusted keyrings
Dec 13 01:38:17.908454 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 01:38:17.908463 kernel: Key type asymmetric registered
Dec 13 01:38:17.908473 kernel: Asymmetric key parser 'x509' registered
Dec 13 01:38:17.908482 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Dec 13 01:38:17.908491 kernel: io scheduler mq-deadline registered
Dec 13 01:38:17.908504 kernel: io scheduler kyber registered
Dec 13 01:38:17.908512 kernel: io scheduler bfq registered
Dec 13 01:38:17.908520 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 01:38:17.908528 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 13 01:38:17.908535 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Dec 13 01:38:17.908543 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Dec 13 01:38:17.908550 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 01:38:17.908558 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 01:38:17.908566 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 01:38:17.908573 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 01:38:17.908583 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 01:38:17.908712 kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 13 01:38:17.908723 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 01:38:17.908836 kernel: rtc_cmos 00:04: registered as rtc0
Dec 13 01:38:17.908948 kernel: rtc_cmos 00:04: setting system clock to 2024-12-13T01:38:17 UTC (1734053897)
Dec 13 01:38:17.909062 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Dec 13 01:38:17.909072 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 13 01:38:17.909095 kernel: efifb: probing for efifb
Dec 13 01:38:17.909104 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Dec 13 01:38:17.909112 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Dec 13 01:38:17.909120 kernel: efifb: scrolling: redraw
Dec 13 01:38:17.909128 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Dec 13 01:38:17.909136 kernel: Console: switching to colour frame buffer device 100x37
Dec 13 01:38:17.909172 kernel: fb0: EFI VGA frame buffer device
Dec 13 01:38:17.909182 kernel: pstore: Using crash dump compression: deflate
Dec 13 01:38:17.909190 kernel: pstore: Registered efi_pstore as persistent store backend
Dec 13 01:38:17.909200 kernel: NET: Registered PF_INET6 protocol family
Dec 13 01:38:17.909208 kernel: Segment Routing with IPv6
Dec 13 01:38:17.909216 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 01:38:17.909224 kernel: NET: Registered PF_PACKET protocol family
Dec 13 01:38:17.909232 kernel: Key type dns_resolver registered
Dec 13 01:38:17.909240 kernel: IPI shorthand broadcast: enabled
Dec 13 01:38:17.909248 kernel: sched_clock: Marking stable (588003820, 119529067)->(753332196, -45799309)
Dec 13 01:38:17.909256 kernel: registered taskstats version 1
Dec 13 01:38:17.909264 kernel: Loading compiled-in X.509 certificates
Dec 13 01:38:17.909272 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0'
Dec 13 01:38:17.909282 kernel: Key type .fscrypt registered
Dec 13 01:38:17.909290 kernel: Key type fscrypt-provisioning registered
Dec 13 01:38:17.909298 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 01:38:17.909306 kernel: ima: Allocated hash algorithm: sha1
Dec 13 01:38:17.909314 kernel: ima: No architecture policies found
Dec 13 01:38:17.909322 kernel: clk: Disabling unused clocks
Dec 13 01:38:17.909330 kernel: Freeing unused kernel image (initmem) memory: 42844K
Dec 13 01:38:17.909337 kernel: Write protecting the kernel read-only data: 36864k
Dec 13 01:38:17.909348 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K
Dec 13 01:38:17.909356 kernel: Run /init as init process
Dec 13 01:38:17.909363 kernel: with arguments:
Dec 13 01:38:17.909371 kernel: /init
Dec 13 01:38:17.909379 kernel: with environment:
Dec 13 01:38:17.909386 kernel: HOME=/
Dec 13 01:38:17.909397 kernel: TERM=linux
Dec 13 01:38:17.909405 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 01:38:17.909415 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:38:17.909429 systemd[1]: Detected virtualization kvm.
Dec 13 01:38:17.909441 systemd[1]: Detected architecture x86-64.
Dec 13 01:38:17.909452 systemd[1]: Running in initrd.
Dec 13 01:38:17.909467 systemd[1]: No hostname configured, using default hostname.
Dec 13 01:38:17.909479 systemd[1]: Hostname set to .
Dec 13 01:38:17.909488 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 01:38:17.909496 systemd[1]: Queued start job for default target initrd.target.
Dec 13 01:38:17.909504 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:38:17.909513 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:38:17.909522 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 01:38:17.909530 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:38:17.909538 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 01:38:17.909550 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 01:38:17.909560 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 01:38:17.909569 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 01:38:17.909577 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:38:17.909586 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:38:17.909594 systemd[1]: Reached target paths.target - Path Units.
Dec 13 01:38:17.909602 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:38:17.909613 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:38:17.909621 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 01:38:17.909629 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:38:17.909638 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:38:17.909646 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 01:38:17.909655 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 01:38:17.909663 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:38:17.909672 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:38:17.909682 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:38:17.909691 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 01:38:17.909699 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 01:38:17.909707 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 01:38:17.909715 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 01:38:17.909724 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 01:38:17.909732 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 01:38:17.909740 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 01:38:17.909748 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:38:17.909759 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 01:38:17.909767 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:38:17.909776 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 01:38:17.909785 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 01:38:17.909819 systemd-journald[192]: Collecting audit messages is disabled.
Dec 13 01:38:17.909858 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:38:17.909879 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:38:17.909888 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:38:17.909900 systemd-journald[192]: Journal started
Dec 13 01:38:17.909918 systemd-journald[192]: Runtime Journal (/run/log/journal/e7fbf6330cc14ea0ae7740b31cbb5f5b) is 6.0M, max 48.3M, 42.2M free.
Dec 13 01:38:17.908221 systemd-modules-load[194]: Inserted module 'overlay'
Dec 13 01:38:17.912389 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:38:17.916105 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:38:17.919024 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 01:38:17.922279 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:38:17.935364 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:38:17.947106 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 01:38:17.950700 systemd-modules-load[194]: Inserted module 'br_netfilter'
Dec 13 01:38:17.951758 kernel: Bridge firewalling registered
Dec 13 01:38:17.955288 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 01:38:17.956535 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:38:17.957962 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:38:17.961894 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:38:17.969981 dracut-cmdline[221]: dracut-dracut-053
Dec 13 01:38:17.973003 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:38:17.978656 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:38:17.986318 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 01:38:18.027227 systemd-resolved[246]: Positive Trust Anchors:
Dec 13 01:38:18.027246 systemd-resolved[246]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:38:18.027291 systemd-resolved[246]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 01:38:18.030623 systemd-resolved[246]: Defaulting to hostname 'linux'.
Dec 13 01:38:18.031851 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 01:38:18.040002 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:38:18.088119 kernel: SCSI subsystem initialized
Dec 13 01:38:18.101111 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 01:38:18.115107 kernel: iscsi: registered transport (tcp)
Dec 13 01:38:18.141333 kernel: iscsi: registered transport (qla4xxx)
Dec 13 01:38:18.141412 kernel: QLogic iSCSI HBA Driver
Dec 13 01:38:18.235290 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:38:18.250438 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 01:38:18.276034 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 01:38:18.276187 kernel: device-mapper: uevent: version 1.0.3
Dec 13 01:38:18.276207 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 01:38:18.319153 kernel: raid6: avx2x4 gen() 29742 MB/s
Dec 13 01:38:18.336155 kernel: raid6: avx2x2 gen() 30657 MB/s
Dec 13 01:38:18.353255 kernel: raid6: avx2x1 gen() 25315 MB/s
Dec 13 01:38:18.353336 kernel: raid6: using algorithm avx2x2 gen() 30657 MB/s
Dec 13 01:38:18.371316 kernel: raid6: .... xor() 19286 MB/s, rmw enabled
Dec 13 01:38:18.371411 kernel: raid6: using avx2x2 recovery algorithm
Dec 13 01:38:18.394120 kernel: xor: automatically using best checksumming function avx
Dec 13 01:38:18.561122 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 01:38:18.576618 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:38:18.584286 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:38:18.596866 systemd-udevd[412]: Using default interface naming scheme 'v255'.
Dec 13 01:38:18.601522 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:38:18.613247 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 01:38:18.627416 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation
Dec 13 01:38:18.662825 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:38:18.686368 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:38:18.750368 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:38:18.757260 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 01:38:18.773194 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:38:18.774167 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:38:18.778454 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:38:18.778725 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:38:18.787103 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Dec 13 01:38:18.815125 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Dec 13 01:38:18.815321 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 01:38:18.815338 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 01:38:18.815352 kernel: GPT:9289727 != 19775487
Dec 13 01:38:18.815366 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 01:38:18.815379 kernel: GPT:9289727 != 19775487
Dec 13 01:38:18.815392 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 01:38:18.815406 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 01:38:18.793339 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 01:38:18.814994 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:38:18.821396 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 01:38:18.821430 kernel: AES CTR mode by8 optimization enabled
Dec 13 01:38:18.821747 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:38:18.821869 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:38:18.823693 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:38:18.827851 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:38:18.827991 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:38:18.840074 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (465)
Dec 13 01:38:18.836618 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:38:18.844118 kernel: libata version 3.00 loaded.
Dec 13 01:38:18.846433 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:38:18.848018 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (457)
Dec 13 01:38:18.852118 kernel: ahci 0000:00:1f.2: version 3.0
Dec 13 01:38:18.875104 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Dec 13 01:38:18.875125 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Dec 13 01:38:18.875290 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Dec 13 01:38:18.875436 kernel: scsi host0: ahci
Dec 13 01:38:18.875641 kernel: scsi host1: ahci
Dec 13 01:38:18.875784 kernel: scsi host2: ahci
Dec 13 01:38:18.875926 kernel: scsi host3: ahci
Dec 13 01:38:18.876076 kernel: scsi host4: ahci
Dec 13 01:38:18.877065 kernel: scsi host5: ahci
Dec 13 01:38:18.877333 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Dec 13 01:38:18.877353 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Dec 13 01:38:18.877365 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Dec 13 01:38:18.877375 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Dec 13 01:38:18.877386 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Dec 13 01:38:18.877396 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Dec 13 01:38:18.868553 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Dec 13 01:38:18.876069 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:38:18.894592 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Dec 13 01:38:18.901847 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Dec 13 01:38:18.904835 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Dec 13 01:38:18.911547 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 01:38:18.930301 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 01:38:18.931703 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:38:18.931773 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:38:18.933202 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:38:18.942646 disk-uuid[553]: Primary Header is updated.
Dec 13 01:38:18.942646 disk-uuid[553]: Secondary Entries is updated.
Dec 13 01:38:18.942646 disk-uuid[553]: Secondary Header is updated.
Dec 13 01:38:18.945579 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 01:38:18.935441 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:38:18.948112 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 01:38:18.954804 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:38:18.967397 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:38:18.993324 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:38:19.184835 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Dec 13 01:38:19.184929 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Dec 13 01:38:19.184948 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Dec 13 01:38:19.184963 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Dec 13 01:38:19.186135 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Dec 13 01:38:19.187140 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Dec 13 01:38:19.188147 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec 13 01:38:19.188174 kernel: ata3.00: applying bridge limits
Dec 13 01:38:19.189326 kernel: ata3.00: configured for UDMA/100
Dec 13 01:38:19.190143 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Dec 13 01:38:19.241296 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec 13 01:38:19.267130 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 13 01:38:19.267150 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Dec 13 01:38:19.953128 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 01:38:19.953646 disk-uuid[555]: The operation has completed successfully.
Dec 13 01:38:19.984651 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 01:38:19.984814 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 01:38:20.017340 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 01:38:20.020543 sh[598]: Success
Dec 13 01:38:20.033118 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Dec 13 01:38:20.066245 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 01:38:20.087448 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 01:38:20.090876 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 01:38:20.102789 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be
Dec 13 01:38:20.102824 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:38:20.102835 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 01:38:20.103836 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 01:38:20.104609 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 01:38:20.109114 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 01:38:20.109721 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 01:38:20.114204 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 01:38:20.116181 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 01:38:20.127181 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:38:20.127233 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:38:20.127245 kernel: BTRFS info (device vda6): using free space tree
Dec 13 01:38:20.131139 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 01:38:20.142966 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 01:38:20.146113 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:38:20.196649 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 01:38:20.203285 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 01:38:20.230299 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:38:20.244340 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:38:20.254450 ignition[750]: Ignition 2.19.0
Dec 13 01:38:20.254462 ignition[750]: Stage: fetch-offline
Dec 13 01:38:20.254496 ignition[750]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:38:20.254507 ignition[750]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:38:20.254606 ignition[750]: parsed url from cmdline: ""
Dec 13 01:38:20.254609 ignition[750]: no config URL provided
Dec 13 01:38:20.254615 ignition[750]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 01:38:20.254623 ignition[750]: no config at "/usr/lib/ignition/user.ign"
Dec 13 01:38:20.254650 ignition[750]: op(1): [started] loading QEMU firmware config module
Dec 13 01:38:20.254657 ignition[750]: op(1): executing: "modprobe" "qemu_fw_cfg"
Dec 13 01:38:20.267622 systemd-networkd[779]: lo: Link UP
Dec 13 01:38:20.267633 systemd-networkd[779]: lo: Gained carrier
Dec 13 01:38:20.269229 systemd-networkd[779]: Enumeration completed
Dec 13 01:38:20.269405 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 01:38:20.269598 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:38:20.269602 systemd-networkd[779]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:38:20.270409 systemd-networkd[779]: eth0: Link UP
Dec 13 01:38:20.270412 systemd-networkd[779]: eth0: Gained carrier
Dec 13 01:38:20.270419 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:38:20.271700 systemd[1]: Reached target network.target - Network.
Dec 13 01:38:20.281038 ignition[750]: op(1): [finished] loading QEMU firmware config module
Dec 13 01:38:20.283278 ignition[750]: parsing config with SHA512: bc6658487d584b4127e757b62c96bd872f6cb82bac8943e43874900bc0c9619588a7ec50f74a4cb300d28354bc9a3e42071560b1705f7e12c58d0306ae5149da
Dec 13 01:38:20.286010 unknown[750]: fetched base config from "system"
Dec 13 01:38:20.286021 unknown[750]: fetched user config from "qemu"
Dec 13 01:38:20.286313 ignition[750]: fetch-offline: fetch-offline passed
Dec 13 01:38:20.286386 ignition[750]: Ignition finished successfully
Dec 13 01:38:20.288138 systemd-networkd[779]: eth0: DHCPv4 address 10.0.0.125/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 01:38:20.293003 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:38:20.293301 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Dec 13 01:38:20.298386 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 01:38:20.315403 ignition[789]: Ignition 2.19.0
Dec 13 01:38:20.315413 ignition[789]: Stage: kargs
Dec 13 01:38:20.315609 ignition[789]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:38:20.315623 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:38:20.422075 ignition[789]: kargs: kargs passed
Dec 13 01:38:20.422170 ignition[789]: Ignition finished successfully
Dec 13 01:38:20.425522 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 01:38:20.438253 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 01:38:20.459722 ignition[796]: Ignition 2.19.0
Dec 13 01:38:20.459733 ignition[796]: Stage: disks
Dec 13 01:38:20.459890 ignition[796]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:38:20.459901 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:38:20.460543 ignition[796]: disks: disks passed
Dec 13 01:38:20.460581 ignition[796]: Ignition finished successfully
Dec 13 01:38:20.466512 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 01:38:20.466784 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 01:38:20.470779 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 01:38:20.470864 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:38:20.473344 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 01:38:20.473692 systemd[1]: Reached target basic.target - Basic System.
Dec 13 01:38:20.489312 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 01:38:20.502631 systemd-fsck[807]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Dec 13 01:38:20.509541 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 01:38:20.519220 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 01:38:20.607107 kernel: EXT4-fs (vda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none.
Dec 13 01:38:20.607125 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 01:38:20.607814 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 01:38:20.618164 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:38:20.620033 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 01:38:20.620345 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 13 01:38:20.626545 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (815)
Dec 13 01:38:20.620381 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 01:38:20.632269 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:38:20.632285 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:38:20.632295 kernel: BTRFS info (device vda6): using free space tree
Dec 13 01:38:20.632306 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 01:38:20.620400 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:38:20.634062 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:38:20.643412 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 01:38:20.644306 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 13 01:38:20.683935 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 01:38:20.689239 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory Dec 13 01:38:20.694273 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 01:38:20.699196 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 01:38:20.791958 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 01:38:20.803258 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 01:38:20.807110 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 01:38:20.815116 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:38:20.835344 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 01:38:20.838290 ignition[929]: INFO : Ignition 2.19.0 Dec 13 01:38:20.838290 ignition[929]: INFO : Stage: mount Dec 13 01:38:20.838290 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:38:20.838290 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:38:20.842719 ignition[929]: INFO : mount: mount passed Dec 13 01:38:20.842719 ignition[929]: INFO : Ignition finished successfully Dec 13 01:38:20.845983 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 01:38:20.852191 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 01:38:21.102138 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 01:38:21.115325 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:38:21.123123 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (942) Dec 13 01:38:21.125311 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:38:21.125340 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:38:21.125356 kernel: BTRFS info (device vda6): using free space tree Dec 13 01:38:21.129127 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 01:38:21.130109 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 01:38:21.158727 ignition[959]: INFO : Ignition 2.19.0 Dec 13 01:38:21.158727 ignition[959]: INFO : Stage: files Dec 13 01:38:21.161529 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:38:21.161529 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:38:21.161529 ignition[959]: DEBUG : files: compiled without relabeling support, skipping Dec 13 01:38:21.161529 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 01:38:21.161529 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 01:38:21.169981 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 01:38:21.169981 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 01:38:21.169981 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 01:38:21.169981 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Dec 13 01:38:21.169981 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 01:38:21.169981 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:38:21.169981 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:38:21.169981 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 01:38:21.169981 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 01:38:21.169981 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 01:38:21.169981 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Dec 13 01:38:21.163427 unknown[959]: wrote ssh authorized keys file for user: core Dec 13 01:38:21.525145 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Dec 13 01:38:21.864267 systemd-networkd[779]: eth0: Gained IPv6LL Dec 13 01:38:21.882471 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 01:38:21.882471 ignition[959]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" Dec 13 01:38:21.887113 ignition[959]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 01:38:21.887113 ignition[959]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 01:38:21.887113 ignition[959]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" Dec 13 01:38:21.887113 ignition[959]: INFO : files: op(9): [started] setting preset to disabled for 
"coreos-metadata.service" Dec 13 01:38:21.912168 ignition[959]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 01:38:21.918555 ignition[959]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 01:38:21.920835 ignition[959]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" Dec 13 01:38:21.920835 ignition[959]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:38:21.920835 ignition[959]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:38:21.920835 ignition[959]: INFO : files: files passed Dec 13 01:38:21.920835 ignition[959]: INFO : Ignition finished successfully Dec 13 01:38:21.931798 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 01:38:21.946395 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 01:38:21.949545 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 01:38:21.951509 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 01:38:21.951666 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 01:38:21.968824 initrd-setup-root-after-ignition[988]: grep: /sysroot/oem/oem-release: No such file or directory Dec 13 01:38:21.973860 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:38:21.973860 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:38:21.977729 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:38:21.980964 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:38:21.982782 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 01:38:21.992399 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 01:38:22.019954 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 01:38:22.020158 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 01:38:22.021541 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 01:38:22.025543 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 01:38:22.025851 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 01:38:22.031251 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 01:38:22.049600 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:38:22.057326 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 01:38:22.068688 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:38:22.068917 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:38:22.071196 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 01:38:22.074232 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 01:38:22.074367 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
Dec 13 01:38:22.078313 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 01:38:22.078449 systemd[1]: Stopped target basic.target - Basic System. Dec 13 01:38:22.078800 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 01:38:22.079149 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:38:22.079640 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 01:38:22.079987 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 01:38:22.080497 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:38:22.080856 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 01:38:22.081528 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 01:38:22.081869 systemd[1]: Stopped target swap.target - Swaps. Dec 13 01:38:22.082386 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 01:38:22.082501 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:38:22.100852 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:38:22.101048 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:38:22.101567 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 01:38:22.105543 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:38:22.105874 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 01:38:22.106120 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 01:38:22.110958 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 01:38:22.111192 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:38:22.115238 systemd[1]: Stopped target paths.target - Path Units. Dec 13 01:38:22.116324 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 01:38:22.121206 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:38:22.124388 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 01:38:22.124540 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 01:38:22.127429 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 01:38:22.127534 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:38:22.130303 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 01:38:22.130454 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:38:22.131352 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 01:38:22.131537 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:38:22.133350 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 01:38:22.133515 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 01:38:22.146347 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 01:38:22.149301 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 01:38:22.151124 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 01:38:22.151321 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:38:22.152512 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
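[Note] The Ignition files stage that finished just above wrote /home/core/install.sh and /etc/flatcar/update.conf, created the /etc/extensions/kubernetes.raw symlink, fetched the sysext payload from the sysext-bakery release, and disabled the preset for coreos-metadata.service. A config fragment producing roughly those operations might look like the following (hypothetical sketch; file contents are abbreviated, only the download URL is taken from the log):

    {
      "storage": {
        "files": [
          { "path": "/home/core/install.sh", "mode": 493,
            "contents": { "source": "data:,%23!%2Fbin%2Fsh%0A..." } },
          { "path": "/etc/flatcar/update.conf",
            "contents": { "source": "data:,GROUP%3Dstable%0A" } },
          { "path": "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw",
            "contents": { "source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw" } }
        ],
        "links": [
          { "path": "/etc/extensions/kubernetes.raw",
            "target": "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" }
        ]
      },
      "systemd": {
        "units": [ { "name": "coreos-metadata.service", "enabled": false } ]
      }
    }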
Dec 13 01:38:22.152661 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:38:22.160472 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 01:38:22.161708 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 01:38:22.166118 ignition[1014]: INFO : Ignition 2.19.0 Dec 13 01:38:22.166118 ignition[1014]: INFO : Stage: umount Dec 13 01:38:22.168177 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:38:22.168177 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:38:22.168177 ignition[1014]: INFO : umount: umount passed Dec 13 01:38:22.168177 ignition[1014]: INFO : Ignition finished successfully Dec 13 01:38:22.169882 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 01:38:22.170071 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 01:38:22.172200 systemd[1]: Stopped target network.target - Network. Dec 13 01:38:22.173792 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 01:38:22.173846 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 01:38:22.175885 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 01:38:22.175935 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 01:38:22.178108 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 01:38:22.178157 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 01:38:22.180303 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 01:38:22.180350 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 01:38:22.182518 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 01:38:22.185226 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 01:38:22.188657 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 01:38:22.190128 systemd-networkd[779]: eth0: DHCPv6 lease lost Dec 13 01:38:22.192688 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 01:38:22.192891 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 01:38:22.195712 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 01:38:22.195879 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 01:38:22.200075 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 01:38:22.200164 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:38:22.209270 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 01:38:22.209343 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 01:38:22.209404 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:38:22.209764 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:38:22.209810 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:38:22.210145 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 01:38:22.210194 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 01:38:22.210476 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 01:38:22.210517 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Dec 13 01:38:22.210929 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:38:22.218885 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 01:38:22.218997 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 01:38:22.225404 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 01:38:22.225654 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:38:22.227464 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 01:38:22.227529 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 01:38:22.229497 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 01:38:22.229564 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:38:22.231973 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 01:38:22.232060 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:38:22.235008 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 01:38:22.235111 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 01:38:22.236786 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:38:22.236847 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:38:22.250217 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 01:38:22.278439 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 01:38:22.278496 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:38:22.280803 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 13 01:38:22.280850 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:38:22.283197 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 01:38:22.283247 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:38:22.285632 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:38:22.285679 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:38:22.288541 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 01:38:22.288643 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 01:38:22.368779 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 01:38:22.368934 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 01:38:22.371215 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 01:38:22.373185 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 01:38:22.373242 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 01:38:22.380215 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 01:38:22.387933 systemd[1]: Switching root. Dec 13 01:38:22.422970 systemd-journald[192]: Journal stopped Dec 13 01:38:23.467910 systemd-journald[192]: Received SIGTERM from PID 1 (systemd). 
Dec 13 01:38:23.468035 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 01:38:23.468055 kernel: SELinux: policy capability open_perms=1 Dec 13 01:38:23.468071 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 01:38:23.468430 kernel: SELinux: policy capability always_check_network=0 Dec 13 01:38:23.468453 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 01:38:23.468472 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 01:38:23.468488 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 01:38:23.468504 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 01:38:23.468527 kernel: audit: type=1403 audit(1734053902.722:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 01:38:23.468552 systemd[1]: Successfully loaded SELinux policy in 41.119ms. Dec 13 01:38:23.468579 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.899ms. Dec 13 01:38:23.468599 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:38:23.470249 systemd[1]: Detected virtualization kvm. Dec 13 01:38:23.470299 systemd[1]: Detected architecture x86-64. Dec 13 01:38:23.470330 systemd[1]: Detected first boot. Dec 13 01:38:23.470348 systemd[1]: Initializing machine ID from VM UUID. Dec 13 01:38:23.470363 zram_generator::config[1058]: No configuration found. Dec 13 01:38:23.470380 systemd[1]: Populated /etc with preset unit settings. Dec 13 01:38:23.470396 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 01:38:23.470408 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 13 01:38:23.470421 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 01:38:23.470433 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 01:38:23.470449 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 01:38:23.470461 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 01:38:23.470474 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 01:38:23.470486 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 01:38:23.470500 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 01:38:23.470521 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 01:38:23.470542 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 01:38:23.470554 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:38:23.470567 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:38:23.470584 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 01:38:23.470596 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 01:38:23.470609 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
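[Note] The zram_generator message simply means no zram configuration was provided, so no swap-on-zram units are generated. If one were wanted, the generator looks for a file along these lines (a sketch assuming the upstream zram-generator format; path and values are illustrative, not read from this system):

    # /etc/systemd/zram-generator.conf
    [zram0]
    zram-size = min(ram / 2, 4096)
    compression-algorithm = zstd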
Dec 13 01:38:23.470621 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:38:23.470633 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 13 01:38:23.470646 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:38:23.470657 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 13 01:38:23.470669 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 13 01:38:23.470681 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 13 01:38:23.470697 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 01:38:23.470708 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:38:23.470720 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:38:23.470738 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:38:23.470750 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:38:23.470762 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 01:38:23.470774 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 01:38:23.470790 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:38:23.470802 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:38:23.470813 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:38:23.470825 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 01:38:23.470837 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 01:38:23.470849 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 01:38:23.470861 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 01:38:23.470873 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:38:23.470884 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 01:38:23.470906 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 01:38:23.470919 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 01:38:23.470933 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 01:38:23.470951 systemd[1]: Reached target machines.target - Containers. Dec 13 01:38:23.470967 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 01:38:23.470982 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:38:23.471005 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:38:23.471021 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 01:38:23.471037 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:38:23.471062 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:38:23.471074 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:38:23.471115 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Dec 13 01:38:23.471130 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:38:23.471143 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 01:38:23.471155 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 01:38:23.471169 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 13 01:38:23.471182 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 01:38:23.471206 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 01:38:23.471219 kernel: loop: module loaded Dec 13 01:38:23.471232 kernel: fuse: init (API version 7.39) Dec 13 01:38:23.471250 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:38:23.471262 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:38:23.471274 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 01:38:23.471285 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 01:38:23.471297 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:38:23.471347 systemd-journald[1121]: Collecting audit messages is disabled. Dec 13 01:38:23.471373 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 01:38:23.471385 kernel: ACPI: bus type drm_connector registered Dec 13 01:38:23.471397 systemd[1]: Stopped verity-setup.service. Dec 13 01:38:23.471410 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:38:23.471423 systemd-journald[1121]: Journal started Dec 13 01:38:23.471448 systemd-journald[1121]: Runtime Journal (/run/log/journal/e7fbf6330cc14ea0ae7740b31cbb5f5b) is 6.0M, max 48.3M, 42.2M free. Dec 13 01:38:23.237679 systemd[1]: Queued start job for default target multi-user.target. Dec 13 01:38:23.254803 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 13 01:38:23.255366 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 01:38:23.477769 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:38:23.478301 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 01:38:23.479730 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 01:38:23.481030 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 01:38:23.482271 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 01:38:23.483530 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 01:38:23.484854 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 01:38:23.486369 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:38:23.488104 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 01:38:23.488286 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 01:38:23.490121 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:38:23.490343 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:38:23.492234 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:38:23.492412 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Dec 13 01:38:23.494101 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:38:23.494279 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:38:23.495961 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 01:38:23.496193 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 01:38:23.497760 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 01:38:23.499289 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:38:23.499455 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:38:23.500935 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:38:23.502516 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 01:38:23.504233 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 01:38:23.519943 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 01:38:23.531362 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 01:38:23.533978 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 01:38:23.535244 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 01:38:23.535278 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:38:23.537622 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 01:38:23.540271 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 01:38:23.543101 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 01:38:23.544492 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:38:23.547322 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 01:38:23.550982 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 01:38:23.552348 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:38:23.555956 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 01:38:23.557610 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:38:23.563378 systemd-journald[1121]: Time spent on flushing to /var/log/journal/e7fbf6330cc14ea0ae7740b31cbb5f5b is 20.543ms for 978 entries. Dec 13 01:38:23.563378 systemd-journald[1121]: System Journal (/var/log/journal/e7fbf6330cc14ea0ae7740b31cbb5f5b) is 8.0M, max 195.6M, 187.6M free. Dec 13 01:38:23.611435 systemd-journald[1121]: Received client request to flush runtime journal. Dec 13 01:38:23.611494 kernel: loop0: detected capacity change from 0 to 142488 Dec 13 01:38:23.561427 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:38:23.568169 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 01:38:23.571296 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
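[Note] The journald lines above show the runtime journal in /run being flushed to the persistent system journal under /var/log/journal, together with the size caps journald reports. Those caps are tunable through journald.conf; an illustrative override (values are examples only):

    # /etc/systemd/journald.conf.d/size.conf
    [Journal]
    Storage=persistent
    SystemMaxUse=200M
    RuntimeMaxUse=48M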
Dec 13 01:38:23.576585 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:38:23.579474 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 01:38:23.581333 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 01:38:23.583026 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 01:38:23.585947 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 01:38:23.593317 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 01:38:23.603280 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 01:38:23.608232 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 01:38:23.615076 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 01:38:23.622887 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 01:38:23.626350 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:38:23.628829 systemd-tmpfiles[1173]: ACLs are not supported, ignoring. Dec 13 01:38:23.629232 systemd-tmpfiles[1173]: ACLs are not supported, ignoring. Dec 13 01:38:23.629245 udevadm[1184]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 01:38:23.633316 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 01:38:23.634061 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 01:38:23.637575 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:38:23.647118 kernel: loop1: detected capacity change from 0 to 140768 Dec 13 01:38:23.647125 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 01:38:23.674563 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 01:38:23.679180 kernel: loop2: detected capacity change from 0 to 210664 Dec 13 01:38:23.681343 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:38:23.705790 systemd-tmpfiles[1197]: ACLs are not supported, ignoring. Dec 13 01:38:23.705822 systemd-tmpfiles[1197]: ACLs are not supported, ignoring. Dec 13 01:38:23.714038 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:38:23.719130 kernel: loop3: detected capacity change from 0 to 142488 Dec 13 01:38:23.730114 kernel: loop4: detected capacity change from 0 to 140768 Dec 13 01:38:23.741130 kernel: loop5: detected capacity change from 0 to 210664 Dec 13 01:38:23.748844 (sd-merge)[1201]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Dec 13 01:38:23.749462 (sd-merge)[1201]: Merged extensions into '/usr'. Dec 13 01:38:23.753800 systemd[1]: Reloading requested from client PID 1172 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 01:38:23.753821 systemd[1]: Reloading... Dec 13 01:38:23.815172 zram_generator::config[1225]: No configuration found. Dec 13 01:38:23.921504 ldconfig[1167]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
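[Note] The (sd-merge) lines show systemd-sysext overlaying the containerd-flatcar, docker-flatcar and kubernetes extension images onto /usr, which is why a reload follows. Structurally, each such image is just a /usr tree plus an extension-release file; a rough sketch of the expected layout (image format and field values are assumptions about how Flatcar-targeted sysexts are commonly built, not read from the log):

    kubernetes-v1.30.1-x86-64.raw        # filesystem image (e.g. squashfs) containing:
      usr/bin/...                        # the payload binaries
      usr/lib/extension-release.d/extension-release.kubernetes
        # with contents along the lines of:
        #   ID=flatcar
        #   SYSEXT_LEVEL=1.0
        #   ARCHITECTURE=x86-64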
Dec 13 01:38:23.976447 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:38:24.032343 systemd[1]: Reloading finished in 277 ms. Dec 13 01:38:24.070322 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 01:38:24.072173 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 01:38:24.089292 systemd[1]: Starting ensure-sysext.service... Dec 13 01:38:24.091379 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:38:24.100401 systemd[1]: Reloading requested from client PID 1264 ('systemctl') (unit ensure-sysext.service)... Dec 13 01:38:24.100417 systemd[1]: Reloading... Dec 13 01:38:24.119238 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 01:38:24.119621 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 01:38:24.120654 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 01:38:24.120974 systemd-tmpfiles[1265]: ACLs are not supported, ignoring. Dec 13 01:38:24.121065 systemd-tmpfiles[1265]: ACLs are not supported, ignoring. Dec 13 01:38:24.154113 zram_generator::config[1292]: No configuration found. Dec 13 01:38:24.154836 systemd-tmpfiles[1265]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:38:24.154854 systemd-tmpfiles[1265]: Skipping /boot Dec 13 01:38:24.167966 systemd-tmpfiles[1265]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:38:24.167992 systemd-tmpfiles[1265]: Skipping /boot Dec 13 01:38:24.273621 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:38:24.323627 systemd[1]: Reloading finished in 222 ms. Dec 13 01:38:24.342153 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 01:38:24.354838 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:38:24.379642 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:38:24.383044 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 01:38:24.385794 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 01:38:24.390414 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:38:24.394298 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:38:24.397176 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 01:38:24.400987 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:38:24.401502 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:38:24.403717 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:38:24.410453 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
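[Note] The docker.socket notice above is harmless: systemd rewrites the legacy /var/run path to /run on the fly. If one wanted to silence it, the listener could be overridden with a drop-in such as this (hypothetical path; the empty ListenStream= first clears the inherited value):

    # /etc/systemd/system/docker.socket.d/10-run-path.conf
    [Socket]
    ListenStream=
    ListenStream=/run/docker.sock

The systemd-tmpfiles "Duplicate line" messages in the same block are likewise informational; the later duplicate entries are simply ignored.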
Dec 13 01:38:24.419250 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:38:24.420665 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:38:24.427316 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 01:38:24.428606 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:38:24.429728 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:38:24.429950 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:38:24.431950 systemd-udevd[1342]: Using default interface naming scheme 'v255'. Dec 13 01:38:24.432232 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:38:24.432401 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:38:24.434431 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:38:24.434594 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:38:24.440803 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:38:24.441756 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:38:24.445443 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 01:38:24.449502 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:38:24.450125 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:38:24.452005 augenrules[1359]: No rules Dec 13 01:38:24.456369 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:38:24.461377 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:38:24.468179 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:38:24.469903 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:38:24.470066 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:38:24.471051 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:38:24.473182 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 01:38:24.481490 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:38:24.484379 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 01:38:24.487911 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 01:38:24.490785 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:38:24.491027 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:38:24.493993 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:38:24.494955 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Dec 13 01:38:24.497353 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:38:24.498180 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:38:24.528130 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1381) Dec 13 01:38:24.528203 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1381) Dec 13 01:38:24.523550 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 13 01:38:24.524037 systemd[1]: Finished ensure-sysext.service. Dec 13 01:38:24.530814 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:38:24.532149 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:38:24.539335 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:38:24.542298 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:38:24.547451 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:38:24.553288 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:38:24.555645 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:38:24.559303 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:38:24.565420 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 13 01:38:24.570243 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 01:38:24.574158 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:38:24.574197 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:38:24.575072 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:38:24.575343 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:38:24.577416 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:38:24.577644 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:38:24.585524 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:38:24.585765 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:38:24.586146 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Dec 13 01:38:24.588246 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:38:24.588251 systemd-resolved[1341]: Positive Trust Anchors: Dec 13 01:38:24.588260 systemd-resolved[1341]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:38:24.588291 systemd-resolved[1341]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:38:24.588482 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:38:24.598146 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1378) Dec 13 01:38:24.598221 kernel: ACPI: button: Power Button [PWRF] Dec 13 01:38:24.597011 systemd-resolved[1341]: Defaulting to hostname 'linux'. Dec 13 01:38:24.611455 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:38:24.616800 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 01:38:24.618528 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:38:24.622011 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 01:38:24.624814 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:38:24.624860 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:38:24.633522 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Dec 13 01:38:24.639381 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 13 01:38:24.639620 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Dec 13 01:38:24.641484 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 13 01:38:24.650167 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Dec 13 01:38:24.635145 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 01:38:24.659730 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 01:38:24.704378 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 01:38:24.698375 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 13 01:38:24.704782 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 01:38:24.712546 systemd-networkd[1409]: lo: Link UP Dec 13 01:38:24.712784 systemd-networkd[1409]: lo: Gained carrier Dec 13 01:38:24.714876 systemd-networkd[1409]: Enumeration completed Dec 13 01:38:24.716785 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:38:24.717496 systemd-networkd[1409]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:38:24.717502 systemd-networkd[1409]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:38:24.718797 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:38:24.721103 systemd[1]: Reached target network.target - Network. 
Dec 13 01:38:24.722025 systemd-networkd[1409]: eth0: Link UP Dec 13 01:38:24.722154 systemd-networkd[1409]: eth0: Gained carrier Dec 13 01:38:24.722226 systemd-networkd[1409]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:38:24.730278 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 01:38:24.735322 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:38:24.735632 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:38:24.748885 systemd-networkd[1409]: eth0: DHCPv4 address 10.0.0.125/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 01:38:24.749737 systemd-timesyncd[1410]: Network configuration changed, trying to establish connection. Dec 13 01:38:25.274499 systemd-timesyncd[1410]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 13 01:38:25.274537 systemd-timesyncd[1410]: Initial clock synchronization to Fri 2024-12-13 01:38:25.274410 UTC. Dec 13 01:38:25.274915 systemd-resolved[1341]: Clock change detected. Flushing caches. Dec 13 01:38:25.277608 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:38:25.371858 kernel: kvm_amd: TSC scaling supported Dec 13 01:38:25.371941 kernel: kvm_amd: Nested Virtualization enabled Dec 13 01:38:25.371960 kernel: kvm_amd: Nested Paging enabled Dec 13 01:38:25.372526 kernel: kvm_amd: LBR virtualization supported Dec 13 01:38:25.374640 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Dec 13 01:38:25.374679 kernel: kvm_amd: Virtual GIF supported Dec 13 01:38:25.399175 kernel: EDAC MC: Ver: 3.0.0 Dec 13 01:38:25.403176 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:38:25.433553 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 01:38:25.451509 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 01:38:25.460377 lvm[1439]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:38:25.493288 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 01:38:25.494907 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:38:25.496074 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:38:25.497476 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 01:38:25.498814 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 01:38:25.500365 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 01:38:25.501574 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 01:38:25.502987 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 01:38:25.504321 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 01:38:25.504355 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:38:25.505284 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:38:25.507191 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 01:38:25.509962 systemd[1]: Starting docker.socket - Docker Socket for the API... 
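[Note] As in the initrd, eth0 is matched by the catch-all /usr/lib/systemd/network/zz-default.network, hence the note about matching on a potentially unpredictable interface name before DHCP runs. Its effective behaviour corresponds to a unit of roughly this shape (approximate sketch; the shipped file may carry additional [DHCP] options):

    # /usr/lib/systemd/network/zz-default.network (approximate)
    [Match]
    Name=*

    [Network]
    DHCP=yes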
Dec 13 01:38:25.518773 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 01:38:25.521051 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 01:38:25.522582 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 01:38:25.523728 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:38:25.524722 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:38:25.525681 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:38:25.525708 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:38:25.526716 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 01:38:25.528768 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 01:38:25.533189 lvm[1443]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:38:25.533669 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 01:38:25.537451 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 01:38:25.540414 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 01:38:25.541806 jq[1446]: false Dec 13 01:38:25.542939 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 01:38:25.547991 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 01:38:25.549362 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 01:38:25.557329 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 01:38:25.558871 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 01:38:25.560458 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 01:38:25.561212 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 01:38:25.563272 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 01:38:25.563765 extend-filesystems[1447]: Found loop3 Dec 13 01:38:25.565944 extend-filesystems[1447]: Found loop4 Dec 13 01:38:25.565944 extend-filesystems[1447]: Found loop5 Dec 13 01:38:25.565944 extend-filesystems[1447]: Found sr0 Dec 13 01:38:25.565944 extend-filesystems[1447]: Found vda Dec 13 01:38:25.565944 extend-filesystems[1447]: Found vda1 Dec 13 01:38:25.565944 extend-filesystems[1447]: Found vda2 Dec 13 01:38:25.565944 extend-filesystems[1447]: Found vda3 Dec 13 01:38:25.565944 extend-filesystems[1447]: Found usr Dec 13 01:38:25.565944 extend-filesystems[1447]: Found vda4 Dec 13 01:38:25.565944 extend-filesystems[1447]: Found vda6 Dec 13 01:38:25.565944 extend-filesystems[1447]: Found vda7 Dec 13 01:38:25.565944 extend-filesystems[1447]: Found vda9 Dec 13 01:38:25.565944 extend-filesystems[1447]: Checking size of /dev/vda9 Dec 13 01:38:25.584276 extend-filesystems[1447]: Resized partition /dev/vda9 Dec 13 01:38:25.566593 dbus-daemon[1445]: [system] SELinux support is enabled Dec 13 01:38:25.570974 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Dec 13 01:38:25.580082 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 01:38:25.586597 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 01:38:25.586813 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 01:38:25.587184 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 01:38:25.587385 extend-filesystems[1464]: resize2fs 1.47.1 (20-May-2024) Dec 13 01:38:25.587388 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 01:38:25.592301 jq[1456]: true Dec 13 01:38:25.595598 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Dec 13 01:38:25.598154 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1371) Dec 13 01:38:25.599776 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 01:38:25.599827 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 01:38:25.604974 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 01:38:25.605021 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 01:38:25.613253 update_engine[1455]: I20241213 01:38:25.609941 1455 main.cc:92] Flatcar Update Engine starting Dec 13 01:38:25.617645 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 01:38:25.617881 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 01:38:25.619096 update_engine[1455]: I20241213 01:38:25.618882 1455 update_check_scheduler.cc:74] Next update check in 6m23s Dec 13 01:38:25.623520 jq[1472]: true Dec 13 01:38:25.633200 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Dec 13 01:38:25.639932 systemd[1]: Started update-engine.service - Update Engine. Dec 13 01:38:25.641760 (ntainerd)[1476]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 01:38:25.643836 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 01:38:25.665221 systemd-logind[1452]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 01:38:25.665252 systemd-logind[1452]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 01:38:25.666965 extend-filesystems[1464]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 01:38:25.666965 extend-filesystems[1464]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 01:38:25.666965 extend-filesystems[1464]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Dec 13 01:38:25.666893 systemd-logind[1452]: New seat seat0. Dec 13 01:38:25.671774 extend-filesystems[1447]: Resized filesystem in /dev/vda9 Dec 13 01:38:25.668301 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 01:38:25.668526 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 01:38:25.675305 systemd[1]: Started systemd-logind.service - User Login Management. 
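
The on-line resize above grows /dev/vda9 from 553472 to 1864699 blocks at the 4 KiB block size resize2fs reports, i.e. from roughly 2.1 GiB to roughly 7.1 GiB of root filesystem. The arithmetic, as a quick check:

BLOCK = 4096                 # ext4 block size, shown as "(4k)" in the log above
old_blocks = 553_472
new_blocks = 1_864_699

def gib(blocks):
    return blocks * BLOCK / 2**30

print(f"before: {gib(old_blocks):.2f} GiB")   # ~2.11 GiB
print(f"after:  {gib(new_blocks):.2f} GiB")   # ~7.11 GiB
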
Dec 13 01:38:25.688762 bash[1495]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:38:25.691377 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 01:38:25.693644 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 13 01:38:25.693671 locksmithd[1481]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 01:38:25.837303 containerd[1476]: time="2024-12-13T01:38:25.837222054Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 01:38:25.853357 sshd_keygen[1459]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 01:38:25.860734 containerd[1476]: time="2024-12-13T01:38:25.860692419Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:38:25.862282 containerd[1476]: time="2024-12-13T01:38:25.862247626Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:38:25.862282 containerd[1476]: time="2024-12-13T01:38:25.862271440Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 01:38:25.862395 containerd[1476]: time="2024-12-13T01:38:25.862285166Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 01:38:25.862478 containerd[1476]: time="2024-12-13T01:38:25.862456297Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 01:38:25.862478 containerd[1476]: time="2024-12-13T01:38:25.862475002Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 01:38:25.862555 containerd[1476]: time="2024-12-13T01:38:25.862539824Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:38:25.862587 containerd[1476]: time="2024-12-13T01:38:25.862555072Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:38:25.862783 containerd[1476]: time="2024-12-13T01:38:25.862756180Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:38:25.862809 containerd[1476]: time="2024-12-13T01:38:25.862779583Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 01:38:25.862809 containerd[1476]: time="2024-12-13T01:38:25.862796094Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:38:25.862809 containerd[1476]: time="2024-12-13T01:38:25.862805732Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 01:38:25.862914 containerd[1476]: time="2024-12-13T01:38:25.862899829Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Dec 13 01:38:25.863191 containerd[1476]: time="2024-12-13T01:38:25.863171468Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:38:25.863319 containerd[1476]: time="2024-12-13T01:38:25.863303376Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:38:25.863339 containerd[1476]: time="2024-12-13T01:38:25.863318043Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 01:38:25.863444 containerd[1476]: time="2024-12-13T01:38:25.863426316Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 01:38:25.863525 containerd[1476]: time="2024-12-13T01:38:25.863508991Z" level=info msg="metadata content store policy set" policy=shared Dec 13 01:38:25.868628 containerd[1476]: time="2024-12-13T01:38:25.868602744Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 01:38:25.868665 containerd[1476]: time="2024-12-13T01:38:25.868647768Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 01:38:25.868684 containerd[1476]: time="2024-12-13T01:38:25.868666864Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 01:38:25.868702 containerd[1476]: time="2024-12-13T01:38:25.868684788Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 01:38:25.868732 containerd[1476]: time="2024-12-13T01:38:25.868701299Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 01:38:25.868838 containerd[1476]: time="2024-12-13T01:38:25.868818368Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 01:38:25.869065 containerd[1476]: time="2024-12-13T01:38:25.869046717Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 01:38:25.869181 containerd[1476]: time="2024-12-13T01:38:25.869166561Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 01:38:25.869211 containerd[1476]: time="2024-12-13T01:38:25.869183723Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 01:38:25.869211 containerd[1476]: time="2024-12-13T01:38:25.869194894Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 01:38:25.869211 containerd[1476]: time="2024-12-13T01:38:25.869208039Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 01:38:25.869264 containerd[1476]: time="2024-12-13T01:38:25.869220112Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 01:38:25.869264 containerd[1476]: time="2024-12-13T01:38:25.869231443Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Dec 13 01:38:25.869264 containerd[1476]: time="2024-12-13T01:38:25.869243806Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 01:38:25.869264 containerd[1476]: time="2024-12-13T01:38:25.869257332Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 01:38:25.869333 containerd[1476]: time="2024-12-13T01:38:25.869269274Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 01:38:25.869333 containerd[1476]: time="2024-12-13T01:38:25.869280986Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 01:38:25.869333 containerd[1476]: time="2024-12-13T01:38:25.869291155Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 01:38:25.869333 containerd[1476]: time="2024-12-13T01:38:25.869308898Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 01:38:25.869333 containerd[1476]: time="2024-12-13T01:38:25.869321071Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 01:38:25.869333 containerd[1476]: time="2024-12-13T01:38:25.869333023Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 01:38:25.869437 containerd[1476]: time="2024-12-13T01:38:25.869344194Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 01:38:25.869437 containerd[1476]: time="2024-12-13T01:38:25.869355546Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 01:38:25.869437 containerd[1476]: time="2024-12-13T01:38:25.869367508Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 01:38:25.869437 containerd[1476]: time="2024-12-13T01:38:25.869377888Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 01:38:25.869437 containerd[1476]: time="2024-12-13T01:38:25.869393367Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 01:38:25.869437 containerd[1476]: time="2024-12-13T01:38:25.869404898Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 01:38:25.869437 containerd[1476]: time="2024-12-13T01:38:25.869421519Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 01:38:25.869437 containerd[1476]: time="2024-12-13T01:38:25.869433342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 01:38:25.869575 containerd[1476]: time="2024-12-13T01:38:25.869444603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 01:38:25.869575 containerd[1476]: time="2024-12-13T01:38:25.869455774Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 01:38:25.869575 containerd[1476]: time="2024-12-13T01:38:25.869469870Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Dec 13 01:38:25.869575 containerd[1476]: time="2024-12-13T01:38:25.869489377Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 01:38:25.869575 containerd[1476]: time="2024-12-13T01:38:25.869505547Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 01:38:25.869575 containerd[1476]: time="2024-12-13T01:38:25.869523821Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 01:38:25.869575 containerd[1476]: time="2024-12-13T01:38:25.869567613Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 01:38:25.869693 containerd[1476]: time="2024-12-13T01:38:25.869582061Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 01:38:25.869693 containerd[1476]: time="2024-12-13T01:38:25.869592069Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 01:38:25.869693 containerd[1476]: time="2024-12-13T01:38:25.869603090Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 01:38:25.869693 containerd[1476]: time="2024-12-13T01:38:25.869613520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 01:38:25.869693 containerd[1476]: time="2024-12-13T01:38:25.869624901Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 01:38:25.869693 containerd[1476]: time="2024-12-13T01:38:25.869634940Z" level=info msg="NRI interface is disabled by configuration." Dec 13 01:38:25.869693 containerd[1476]: time="2024-12-13T01:38:25.869644277Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Dec 13 01:38:25.869962 containerd[1476]: time="2024-12-13T01:38:25.869911599Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 01:38:25.870093 containerd[1476]: time="2024-12-13T01:38:25.869963085Z" level=info msg="Connect containerd service" Dec 13 01:38:25.870093 containerd[1476]: time="2024-12-13T01:38:25.869990537Z" level=info msg="using legacy CRI server" Dec 13 01:38:25.870093 containerd[1476]: time="2024-12-13T01:38:25.870006947Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 01:38:25.870093 containerd[1476]: time="2024-12-13T01:38:25.870081597Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 01:38:25.870638 containerd[1476]: time="2024-12-13T01:38:25.870613695Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:38:25.870812 
containerd[1476]: time="2024-12-13T01:38:25.870759559Z" level=info msg="Start subscribing containerd event" Dec 13 01:38:25.871019 containerd[1476]: time="2024-12-13T01:38:25.870989971Z" level=info msg="Start recovering state" Dec 13 01:38:25.871246 containerd[1476]: time="2024-12-13T01:38:25.871226214Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 01:38:25.871299 containerd[1476]: time="2024-12-13T01:38:25.871282389Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 01:38:25.873241 containerd[1476]: time="2024-12-13T01:38:25.873218961Z" level=info msg="Start event monitor" Dec 13 01:38:25.873619 containerd[1476]: time="2024-12-13T01:38:25.873294423Z" level=info msg="Start snapshots syncer" Dec 13 01:38:25.873619 containerd[1476]: time="2024-12-13T01:38:25.873308529Z" level=info msg="Start cni network conf syncer for default" Dec 13 01:38:25.873619 containerd[1476]: time="2024-12-13T01:38:25.873332554Z" level=info msg="Start streaming server" Dec 13 01:38:25.873477 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 01:38:25.873777 containerd[1476]: time="2024-12-13T01:38:25.873761789Z" level=info msg="containerd successfully booted in 0.038109s" Dec 13 01:38:25.880310 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 01:38:25.889371 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 01:38:25.897347 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 01:38:25.897561 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 01:38:25.900238 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 01:38:25.916059 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 01:38:25.927606 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 01:38:25.930232 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 01:38:25.931696 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 01:38:27.124346 systemd-networkd[1409]: eth0: Gained IPv6LL Dec 13 01:38:27.127935 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 01:38:27.129998 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 01:38:27.138556 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 13 01:38:27.142008 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:38:27.144990 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 01:38:27.170128 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 13 01:38:27.170406 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 13 01:38:27.172218 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 01:38:27.174887 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 01:38:27.792884 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:38:27.794614 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 01:38:27.798308 systemd[1]: Startup finished in 728ms (kernel) + 5.009s (initrd) + 4.591s (userspace) = 10.329s. 
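
The "Startup finished" summary above is just the sum of the three boot phases systemd tracks; the printed total differs from the naive sum only by the sub-millisecond precision systemd keeps internally. A one-line check of the figures from the log:

phases = {"kernel": 0.728, "initrd": 5.009, "userspace": 4.591}   # seconds, from the line above
print(f"total ~ {sum(phases.values()):.3f}s")                     # 10.328s vs. the logged 10.329s
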
Dec 13 01:38:27.800622 (kubelet)[1551]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:38:28.252883 kubelet[1551]: E1213 01:38:28.252742 1551 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:38:28.256755 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:38:28.256963 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:38:32.601778 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 01:38:32.603308 systemd[1]: Started sshd@0-10.0.0.125:22-10.0.0.1:55684.service - OpenSSH per-connection server daemon (10.0.0.1:55684). Dec 13 01:38:32.657753 sshd[1565]: Accepted publickey for core from 10.0.0.1 port 55684 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:38:32.660251 sshd[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:38:32.671664 systemd-logind[1452]: New session 1 of user core. Dec 13 01:38:32.673422 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 01:38:32.685556 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 01:38:32.700428 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 01:38:32.713543 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 01:38:32.717381 (systemd)[1569]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:38:32.851625 systemd[1569]: Queued start job for default target default.target. Dec 13 01:38:32.861498 systemd[1569]: Created slice app.slice - User Application Slice. Dec 13 01:38:32.861525 systemd[1569]: Reached target paths.target - Paths. Dec 13 01:38:32.861538 systemd[1569]: Reached target timers.target - Timers. Dec 13 01:38:32.863194 systemd[1569]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 01:38:32.875477 systemd[1569]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 01:38:32.875625 systemd[1569]: Reached target sockets.target - Sockets. Dec 13 01:38:32.875645 systemd[1569]: Reached target basic.target - Basic System. Dec 13 01:38:32.875689 systemd[1569]: Reached target default.target - Main User Target. Dec 13 01:38:32.875724 systemd[1569]: Startup finished in 149ms. Dec 13 01:38:32.876286 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 01:38:32.878172 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 01:38:32.950671 systemd[1]: Started sshd@1-10.0.0.125:22-10.0.0.1:55688.service - OpenSSH per-connection server daemon (10.0.0.1:55688). Dec 13 01:38:32.986194 sshd[1580]: Accepted publickey for core from 10.0.0.1 port 55688 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:38:32.987939 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:38:32.992476 systemd-logind[1452]: New session 2 of user core. Dec 13 01:38:33.002356 systemd[1]: Started session-2.scope - Session 2 of User core. 
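
The kubelet exits with status 1 here for a mundane reason: it is told to read /var/lib/kubelet/config.yaml, and on a node that has not yet been joined to a cluster that file simply does not exist (kubeadm normally writes it during join). A minimal sketch of the same precondition check, purely illustrative and not part of the kubelet:

import os
import sys

CONFIG = "/var/lib/kubelet/config.yaml"   # path named in the error above

if not os.path.exists(CONFIG):
    # Mirrors the failure mode logged above: no config yet, so bail out with status 1.
    print(f"kubelet config {CONFIG} missing; bootstrap the node (e.g. kubeadm join) first",
          file=sys.stderr)
    sys.exit(1)
print("kubelet config present")
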
Dec 13 01:38:33.058937 sshd[1580]: pam_unix(sshd:session): session closed for user core Dec 13 01:38:33.069817 systemd[1]: sshd@1-10.0.0.125:22-10.0.0.1:55688.service: Deactivated successfully. Dec 13 01:38:33.071507 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 01:38:33.072887 systemd-logind[1452]: Session 2 logged out. Waiting for processes to exit. Dec 13 01:38:33.074296 systemd[1]: Started sshd@2-10.0.0.125:22-10.0.0.1:55704.service - OpenSSH per-connection server daemon (10.0.0.1:55704). Dec 13 01:38:33.075120 systemd-logind[1452]: Removed session 2. Dec 13 01:38:33.114495 sshd[1587]: Accepted publickey for core from 10.0.0.1 port 55704 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:38:33.116287 sshd[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:38:33.120339 systemd-logind[1452]: New session 3 of user core. Dec 13 01:38:33.130272 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 01:38:33.180694 sshd[1587]: pam_unix(sshd:session): session closed for user core Dec 13 01:38:33.191127 systemd[1]: sshd@2-10.0.0.125:22-10.0.0.1:55704.service: Deactivated successfully. Dec 13 01:38:33.192767 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 01:38:33.194214 systemd-logind[1452]: Session 3 logged out. Waiting for processes to exit. Dec 13 01:38:33.211586 systemd[1]: Started sshd@3-10.0.0.125:22-10.0.0.1:55714.service - OpenSSH per-connection server daemon (10.0.0.1:55714). Dec 13 01:38:33.212687 systemd-logind[1452]: Removed session 3. Dec 13 01:38:33.245348 sshd[1594]: Accepted publickey for core from 10.0.0.1 port 55714 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:38:33.247181 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:38:33.251070 systemd-logind[1452]: New session 4 of user core. Dec 13 01:38:33.260274 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 01:38:33.313918 sshd[1594]: pam_unix(sshd:session): session closed for user core Dec 13 01:38:33.327049 systemd[1]: sshd@3-10.0.0.125:22-10.0.0.1:55714.service: Deactivated successfully. Dec 13 01:38:33.328850 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 01:38:33.330416 systemd-logind[1452]: Session 4 logged out. Waiting for processes to exit. Dec 13 01:38:33.331746 systemd[1]: Started sshd@4-10.0.0.125:22-10.0.0.1:55728.service - OpenSSH per-connection server daemon (10.0.0.1:55728). Dec 13 01:38:33.332674 systemd-logind[1452]: Removed session 4. Dec 13 01:38:33.370887 sshd[1601]: Accepted publickey for core from 10.0.0.1 port 55728 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:38:33.372617 sshd[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:38:33.376659 systemd-logind[1452]: New session 5 of user core. Dec 13 01:38:33.386279 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 01:38:33.443753 sudo[1604]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 01:38:33.444107 sudo[1604]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:38:33.462711 sudo[1604]: pam_unix(sudo:session): session closed for user root Dec 13 01:38:33.465035 sshd[1601]: pam_unix(sshd:session): session closed for user core Dec 13 01:38:33.481432 systemd[1]: sshd@4-10.0.0.125:22-10.0.0.1:55728.service: Deactivated successfully. 
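
The SSH churn above follows one pattern per connection: an sshd@N-... per-connection service, a session-N.scope, and a matching pair of pam_unix "session opened"/"session closed" lines. For illustration, a small Python sketch that pairs those pam_unix events from journal text (the sample lines are copied from this log; the parsing itself is an assumption, not anything sshd provides):

import re
from collections import Counter

sample = """\
sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
sshd[1580]: pam_unix(sshd:session): session closed for user core
sshd[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
sshd[1587]: pam_unix(sshd:session): session closed for user core
"""

events = Counter()
for line in sample.splitlines():
    m = re.search(r"pam_unix\(sshd:session\): session (opened|closed) for user (\w+)", line)
    if m:
        events[(m.group(2), m.group(1))] += 1

for (user, state), n in sorted(events.items()):
    print(f"{user}: {n} session(s) {state}")
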
Dec 13 01:38:33.483236 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 01:38:33.484872 systemd-logind[1452]: Session 5 logged out. Waiting for processes to exit. Dec 13 01:38:33.495504 systemd[1]: Started sshd@5-10.0.0.125:22-10.0.0.1:55734.service - OpenSSH per-connection server daemon (10.0.0.1:55734). Dec 13 01:38:33.496582 systemd-logind[1452]: Removed session 5. Dec 13 01:38:33.530358 sshd[1609]: Accepted publickey for core from 10.0.0.1 port 55734 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:38:33.532229 sshd[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:38:33.536878 systemd-logind[1452]: New session 6 of user core. Dec 13 01:38:33.544512 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 01:38:33.600224 sudo[1613]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 01:38:33.600564 sudo[1613]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:38:33.604674 sudo[1613]: pam_unix(sudo:session): session closed for user root Dec 13 01:38:33.611597 sudo[1612]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 01:38:33.612017 sudo[1612]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:38:33.631456 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Dec 13 01:38:33.633571 auditctl[1616]: No rules Dec 13 01:38:33.635326 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 01:38:33.635656 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Dec 13 01:38:33.638165 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:38:33.672559 augenrules[1634]: No rules Dec 13 01:38:33.674739 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:38:33.676508 sudo[1612]: pam_unix(sudo:session): session closed for user root Dec 13 01:38:33.678663 sshd[1609]: pam_unix(sshd:session): session closed for user core Dec 13 01:38:33.691757 systemd[1]: sshd@5-10.0.0.125:22-10.0.0.1:55734.service: Deactivated successfully. Dec 13 01:38:33.695220 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 01:38:33.697793 systemd-logind[1452]: Session 6 logged out. Waiting for processes to exit. Dec 13 01:38:33.705730 systemd[1]: Started sshd@6-10.0.0.125:22-10.0.0.1:55744.service - OpenSSH per-connection server daemon (10.0.0.1:55744). Dec 13 01:38:33.707218 systemd-logind[1452]: Removed session 6. Dec 13 01:38:33.745681 sshd[1642]: Accepted publickey for core from 10.0.0.1 port 55744 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:38:33.747270 sshd[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:38:33.751445 systemd-logind[1452]: New session 7 of user core. Dec 13 01:38:33.761264 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 01:38:33.814168 sudo[1645]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 01:38:33.814501 sudo[1645]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:38:33.842717 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 13 01:38:33.861541 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 13 01:38:33.861792 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
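
The sudo entries above use the standard sudoers log format, "user : PWD=... ; USER=... ; COMMAND=...", so the commands run as root are easy to list. A short illustrative parser for lines of that shape (samples copied from this log; the regex is an assumption, not something sudo ships):

import re

samples = [
    "sudo[1604]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1",
    "sudo[1612]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules",
    "sudo[1645]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh",
]

pattern = re.compile(r"^\S+\[\d+\]: (\S+) : PWD=(\S+) ; USER=(\S+) ; COMMAND=(.+)$")
for line in samples:
    m = pattern.match(line)
    if m:
        user, pwd, runas, cmd = m.groups()
        print(f"{user} ran {cmd!r} as {runas} from {pwd}")
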
Dec 13 01:38:34.406864 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:38:34.422410 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:38:34.442910 systemd[1]: Reloading requested from client PID 1693 ('systemctl') (unit session-7.scope)... Dec 13 01:38:34.442931 systemd[1]: Reloading... Dec 13 01:38:34.534164 zram_generator::config[1731]: No configuration found. Dec 13 01:38:35.666058 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:38:35.746822 systemd[1]: Reloading finished in 1303 ms. Dec 13 01:38:35.802043 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 01:38:35.802178 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 01:38:35.802551 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:38:35.804686 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:38:35.976056 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:38:35.981957 (kubelet)[1779]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:38:36.027682 kubelet[1779]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:38:36.027682 kubelet[1779]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:38:36.027682 kubelet[1779]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:38:36.028866 kubelet[1779]: I1213 01:38:36.028808 1779 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:38:36.706290 kubelet[1779]: I1213 01:38:36.706235 1779 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 01:38:36.706290 kubelet[1779]: I1213 01:38:36.706265 1779 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:38:36.706481 kubelet[1779]: I1213 01:38:36.706461 1779 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 01:38:36.717539 kubelet[1779]: I1213 01:38:36.717504 1779 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:38:36.735343 kubelet[1779]: I1213 01:38:36.735298 1779 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 01:38:36.737003 kubelet[1779]: I1213 01:38:36.736954 1779 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:38:36.737176 kubelet[1779]: I1213 01:38:36.736988 1779 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.125","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:38:36.737592 kubelet[1779]: I1213 01:38:36.737567 1779 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:38:36.737592 kubelet[1779]: I1213 01:38:36.737584 1779 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:38:36.738353 kubelet[1779]: I1213 01:38:36.738328 1779 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:38:36.738913 kubelet[1779]: I1213 01:38:36.738888 1779 kubelet.go:400] "Attempting to sync node with API server" Dec 13 01:38:36.738913 kubelet[1779]: I1213 01:38:36.738906 1779 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:38:36.738966 kubelet[1779]: I1213 01:38:36.738927 1779 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:38:36.738966 kubelet[1779]: I1213 01:38:36.738946 1779 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:38:36.739019 kubelet[1779]: E1213 01:38:36.738975 1779 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:38:36.739044 kubelet[1779]: E1213 01:38:36.739028 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:38:36.742217 kubelet[1779]: I1213 01:38:36.742181 1779 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:38:36.743339 kubelet[1779]: W1213 01:38:36.743302 1779 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster 
scope Dec 13 01:38:36.743339 kubelet[1779]: E1213 01:38:36.743340 1779 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Dec 13 01:38:36.743339 kubelet[1779]: I1213 01:38:36.743312 1779 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:38:36.743558 kubelet[1779]: W1213 01:38:36.743385 1779 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 01:38:36.743765 kubelet[1779]: W1213 01:38:36.743718 1779 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.125" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Dec 13 01:38:36.743807 kubelet[1779]: E1213 01:38:36.743772 1779 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.125" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Dec 13 01:38:36.744899 kubelet[1779]: I1213 01:38:36.744030 1779 server.go:1264] "Started kubelet" Dec 13 01:38:36.744899 kubelet[1779]: I1213 01:38:36.744837 1779 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:38:36.745264 kubelet[1779]: I1213 01:38:36.745249 1779 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:38:36.745338 kubelet[1779]: I1213 01:38:36.745317 1779 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:38:36.745584 kubelet[1779]: I1213 01:38:36.745541 1779 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:38:36.746954 kubelet[1779]: I1213 01:38:36.746770 1779 server.go:455] "Adding debug handlers to kubelet server" Dec 13 01:38:36.748508 kubelet[1779]: I1213 01:38:36.748488 1779 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:38:36.748583 kubelet[1779]: I1213 01:38:36.748565 1779 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 01:38:36.748631 kubelet[1779]: I1213 01:38:36.748611 1779 reconciler.go:26] "Reconciler: start to sync state" Dec 13 01:38:36.749667 kubelet[1779]: E1213 01:38:36.749647 1779 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:38:36.749789 kubelet[1779]: E1213 01:38:36.749546 1779 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.125.181098d990e6cf44 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.125,UID:10.0.0.125,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.125,},FirstTimestamp:2024-12-13 01:38:36.744011588 +0000 UTC m=+0.757698687,LastTimestamp:2024-12-13 01:38:36.744011588 +0000 UTC m=+0.757698687,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.125,}" Dec 13 01:38:36.750006 kubelet[1779]: I1213 01:38:36.749700 1779 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:38:36.750115 kubelet[1779]: I1213 01:38:36.750088 1779 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:38:36.751131 kubelet[1779]: I1213 01:38:36.751110 1779 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:38:36.763107 kubelet[1779]: E1213 01:38:36.762973 1779 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.125.181098d9913caa19 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.125,UID:10.0.0.125,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.0.0.125,},FirstTimestamp:2024-12-13 01:38:36.749638169 +0000 UTC m=+0.763325268,LastTimestamp:2024-12-13 01:38:36.749638169 +0000 UTC m=+0.763325268,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.125,}" Dec 13 01:38:36.765209 kubelet[1779]: E1213 01:38:36.764955 1779 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.125\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Dec 13 01:38:36.765209 kubelet[1779]: W1213 01:38:36.765100 1779 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Dec 13 01:38:36.765209 kubelet[1779]: E1213 01:38:36.765152 1779 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Dec 13 01:38:36.766602 kubelet[1779]: I1213 01:38:36.766573 1779 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:38:36.766602 
kubelet[1779]: I1213 01:38:36.766593 1779 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:38:36.766768 kubelet[1779]: I1213 01:38:36.766625 1779 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:38:36.767116 kubelet[1779]: E1213 01:38:36.767003 1779 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.125.181098d9922e47df default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.125,UID:10.0.0.125,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.125 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.125,},FirstTimestamp:2024-12-13 01:38:36.765472735 +0000 UTC m=+0.779159834,LastTimestamp:2024-12-13 01:38:36.765472735 +0000 UTC m=+0.779159834,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.125,}" Dec 13 01:38:36.770434 kubelet[1779]: E1213 01:38:36.770332 1779 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.125.181098d9922e5cb1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.125,UID:10.0.0.125,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 10.0.0.125 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:10.0.0.125,},FirstTimestamp:2024-12-13 01:38:36.765478065 +0000 UTC m=+0.779165164,LastTimestamp:2024-12-13 01:38:36.765478065 +0000 UTC m=+0.779165164,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.125,}" Dec 13 01:38:36.774955 kubelet[1779]: E1213 01:38:36.774844 1779 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.125.181098d9922e6cbd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.125,UID:10.0.0.125,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 10.0.0.125 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:10.0.0.125,},FirstTimestamp:2024-12-13 01:38:36.765482173 +0000 UTC m=+0.779169272,LastTimestamp:2024-12-13 01:38:36.765482173 +0000 UTC m=+0.779169272,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.125,}" Dec 13 01:38:36.849932 kubelet[1779]: I1213 01:38:36.849881 1779 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.125" Dec 13 01:38:36.854503 kubelet[1779]: E1213 01:38:36.854366 1779 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.125.181098d9922e47df\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.125.181098d9922e47df default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.125,UID:10.0.0.125,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.125 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.125,},FirstTimestamp:2024-12-13 01:38:36.765472735 +0000 UTC m=+0.779159834,LastTimestamp:2024-12-13 01:38:36.849837729 +0000 UTC m=+0.863524818,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.125,}" Dec 13 01:38:36.854608 kubelet[1779]: E1213 01:38:36.854569 1779 kubelet_node_status.go:96] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.125" Dec 13 01:38:36.859106 kubelet[1779]: E1213 01:38:36.858954 1779 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.125.181098d9922e5cb1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.125.181098d9922e5cb1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.125,UID:10.0.0.125,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 10.0.0.125 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:10.0.0.125,},FirstTimestamp:2024-12-13 01:38:36.765478065 +0000 UTC m=+0.779165164,LastTimestamp:2024-12-13 01:38:36.849850032 +0000 UTC m=+0.863537131,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.125,}" Dec 13 01:38:36.863733 kubelet[1779]: E1213 01:38:36.863567 1779 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.125.181098d9922e6cbd\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.125.181098d9922e6cbd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.125,UID:10.0.0.125,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 10.0.0.125 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:10.0.0.125,},FirstTimestamp:2024-12-13 01:38:36.765482173 +0000 UTC m=+0.779169272,LastTimestamp:2024-12-13 01:38:36.849853329 +0000 UTC m=+0.863540428,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.125,}" Dec 13 01:38:36.973518 kubelet[1779]: E1213 01:38:36.973386 1779 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.125\" not found" node="10.0.0.125" Dec 13 01:38:37.055983 kubelet[1779]: I1213 01:38:37.055927 1779 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.125" Dec 13 01:38:37.519170 kubelet[1779]: I1213 01:38:37.519112 1779 policy_none.go:49] "None policy: Start" Dec 13 01:38:37.622238 kubelet[1779]: I1213 01:38:37.622172 1779 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:38:37.622238 kubelet[1779]: I1213 01:38:37.622251 1779 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:38:37.623791 kubelet[1779]: I1213 01:38:37.623751 1779 
kubelet_node_status.go:76] "Successfully registered node" node="10.0.0.125" Dec 13 01:38:37.625346 kubelet[1779]: I1213 01:38:37.625317 1779 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Dec 13 01:38:37.626070 containerd[1476]: time="2024-12-13T01:38:37.626024146Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 01:38:37.626578 kubelet[1779]: I1213 01:38:37.626548 1779 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Dec 13 01:38:37.704908 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 01:38:37.707871 kubelet[1779]: I1213 01:38:37.707839 1779 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 13 01:38:37.708058 kubelet[1779]: E1213 01:38:37.708032 1779 kubelet_node_status.go:544] "Error updating node status, will retry" err="failed to patch status \"{}\" for node \"10.0.0.125\": Patch \"https://10.0.0.121:6443/api/v1/nodes/10.0.0.125/status?timeout=10s\": read tcp 10.0.0.125:56550->10.0.0.121:6443: use of closed network connection" Dec 13 01:38:37.708166 kubelet[1779]: E1213 01:38:37.708028 1779 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.121:6443/api/v1/namespaces/default/events\": read tcp 10.0.0.125:56550->10.0.0.121:6443: use of closed network connection" event="&Event{ObjectMeta:{10.0.0.125.181098d9922e5cb1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.125,UID:10.0.0.125,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 10.0.0.125 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:10.0.0.125,},FirstTimestamp:2024-12-13 01:38:36.765478065 +0000 UTC m=+0.779165164,LastTimestamp:2024-12-13 01:38:37.055887186 +0000 UTC m=+1.069574285,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.125,}" Dec 13 01:38:37.714459 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 01:38:37.718175 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 13 01:38:37.719603 kubelet[1779]: I1213 01:38:37.719560 1779 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:38:37.720891 kubelet[1779]: I1213 01:38:37.720853 1779 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 01:38:37.720936 kubelet[1779]: I1213 01:38:37.720896 1779 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:38:37.720936 kubelet[1779]: I1213 01:38:37.720914 1779 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 01:38:37.721005 kubelet[1779]: E1213 01:38:37.720960 1779 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:38:37.724482 kubelet[1779]: I1213 01:38:37.724456 1779 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:38:37.724834 kubelet[1779]: I1213 01:38:37.724701 1779 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 01:38:37.724892 kubelet[1779]: I1213 01:38:37.724847 1779 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:38:37.726567 kubelet[1779]: E1213 01:38:37.726506 1779 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.125\" not found" Dec 13 01:38:37.739469 kubelet[1779]: E1213 01:38:37.739421 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:38:37.742744 kubelet[1779]: E1213 01:38:37.742700 1779 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.125\" not found" Dec 13 01:38:37.829682 sudo[1645]: pam_unix(sudo:session): session closed for user root Dec 13 01:38:37.831667 sshd[1642]: pam_unix(sshd:session): session closed for user core Dec 13 01:38:37.835445 systemd[1]: sshd@6-10.0.0.125:22-10.0.0.1:55744.service: Deactivated successfully. Dec 13 01:38:37.837601 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 01:38:37.838356 systemd-logind[1452]: Session 7 logged out. Waiting for processes to exit. Dec 13 01:38:37.839399 systemd-logind[1452]: Removed session 7. 
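
Most of the kubelet errors in this stretch are one failure repeated: until the node has bootstrapped client credentials, its requests reach the API server as the system:anonymous user, and RBAC denies each resource in turn (events, nodes, services, csidrivers, leases) until registration succeeds above. For illustration, a small sketch that tallies such "forbidden" messages by resource (fragments quoted from this log; the regex is an assumption):

import re
from collections import Counter

fragments = [
    'User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope',
    'User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope',
    'User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"',
    'User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope',
    'User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"',
]

counts = Counter()
for text in fragments:
    m = re.search(r'cannot (\w+) resource "([^"]+)"', text)
    if m:
        counts[(m.group(2), m.group(1))] += 1

for (resource, verb), n in counts.most_common():
    print(f"{resource}: {verb} denied x{n}")
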
Dec 13 01:38:37.843317 kubelet[1779]: E1213 01:38:37.843281 1779 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.125\" not found" Dec 13 01:38:37.944209 kubelet[1779]: E1213 01:38:37.944130 1779 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.125\" not found" Dec 13 01:38:38.044875 kubelet[1779]: E1213 01:38:38.044811 1779 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.125\" not found" Dec 13 01:38:38.146059 kubelet[1779]: E1213 01:38:38.145913 1779 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.125\" not found" Dec 13 01:38:38.740054 kubelet[1779]: E1213 01:38:38.739990 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:38:38.740054 kubelet[1779]: I1213 01:38:38.740017 1779 apiserver.go:52] "Watching apiserver" Dec 13 01:38:38.743820 kubelet[1779]: I1213 01:38:38.743784 1779 topology_manager.go:215] "Topology Admit Handler" podUID="f2498e88-ae49-4e71-a8a7-bbf1a0f47f02" podNamespace="kube-system" podName="kube-proxy-bzbts" Dec 13 01:38:38.743900 kubelet[1779]: I1213 01:38:38.743879 1779 topology_manager.go:215] "Topology Admit Handler" podUID="64dcec63-1870-49dc-96fc-07ccc1fe4fbe" podNamespace="kube-system" podName="cilium-nr7q4" Dec 13 01:38:38.750374 kubelet[1779]: I1213 01:38:38.750345 1779 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 01:38:38.750891 systemd[1]: Created slice kubepods-besteffort-podf2498e88_ae49_4e71_a8a7_bbf1a0f47f02.slice - libcontainer container kubepods-besteffort-podf2498e88_ae49_4e71_a8a7_bbf1a0f47f02.slice. Dec 13 01:38:38.761309 kubelet[1779]: I1213 01:38:38.761277 1779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/64dcec63-1870-49dc-96fc-07ccc1fe4fbe-host-proc-sys-kernel\") pod \"cilium-nr7q4\" (UID: \"64dcec63-1870-49dc-96fc-07ccc1fe4fbe\") " pod="kube-system/cilium-nr7q4" Dec 13 01:38:38.761309 kubelet[1779]: I1213 01:38:38.761313 1779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f2498e88-ae49-4e71-a8a7-bbf1a0f47f02-xtables-lock\") pod \"kube-proxy-bzbts\" (UID: \"f2498e88-ae49-4e71-a8a7-bbf1a0f47f02\") " pod="kube-system/kube-proxy-bzbts" Dec 13 01:38:38.761480 kubelet[1779]: I1213 01:38:38.761334 1779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k77rj\" (UniqueName: \"kubernetes.io/projected/f2498e88-ae49-4e71-a8a7-bbf1a0f47f02-kube-api-access-k77rj\") pod \"kube-proxy-bzbts\" (UID: \"f2498e88-ae49-4e71-a8a7-bbf1a0f47f02\") " pod="kube-system/kube-proxy-bzbts" Dec 13 01:38:38.761480 kubelet[1779]: I1213 01:38:38.761358 1779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/64dcec63-1870-49dc-96fc-07ccc1fe4fbe-cni-path\") pod \"cilium-nr7q4\" (UID: \"64dcec63-1870-49dc-96fc-07ccc1fe4fbe\") " pod="kube-system/cilium-nr7q4" Dec 13 01:38:38.761480 kubelet[1779]: I1213 01:38:38.761378 1779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/64dcec63-1870-49dc-96fc-07ccc1fe4fbe-etc-cni-netd\") pod \"cilium-nr7q4\" (UID: \"64dcec63-1870-49dc-96fc-07ccc1fe4fbe\") " pod="kube-system/cilium-nr7q4" Dec 13 01:38:38.761480 kubelet[1779]: I1213 01:38:38.761399 1779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/64dcec63-1870-49dc-96fc-07ccc1fe4fbe-xtables-lock\") pod \"cilium-nr7q4\" (UID: \"64dcec63-1870-49dc-96fc-07ccc1fe4fbe\") " pod="kube-system/cilium-nr7q4" Dec 13 01:38:38.761480 kubelet[1779]: I1213 01:38:38.761418 1779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/64dcec63-1870-49dc-96fc-07ccc1fe4fbe-bpf-maps\") pod \"cilium-nr7q4\" (UID: \"64dcec63-1870-49dc-96fc-07ccc1fe4fbe\") " pod="kube-system/cilium-nr7q4" Dec 13 01:38:38.761480 kubelet[1779]: I1213 01:38:38.761437 1779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/64dcec63-1870-49dc-96fc-07ccc1fe4fbe-hubble-tls\") pod \"cilium-nr7q4\" (UID: \"64dcec63-1870-49dc-96fc-07ccc1fe4fbe\") " pod="kube-system/cilium-nr7q4" Dec 13 01:38:38.761667 kubelet[1779]: I1213 01:38:38.761455 1779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68g59\" (UniqueName: \"kubernetes.io/projected/64dcec63-1870-49dc-96fc-07ccc1fe4fbe-kube-api-access-68g59\") pod \"cilium-nr7q4\" (UID: \"64dcec63-1870-49dc-96fc-07ccc1fe4fbe\") " pod="kube-system/cilium-nr7q4" Dec 13 01:38:38.761667 kubelet[1779]: I1213 01:38:38.761477 1779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f2498e88-ae49-4e71-a8a7-bbf1a0f47f02-kube-proxy\") pod \"kube-proxy-bzbts\" (UID: \"f2498e88-ae49-4e71-a8a7-bbf1a0f47f02\") " pod="kube-system/kube-proxy-bzbts" Dec 13 01:38:38.761667 kubelet[1779]: I1213 01:38:38.761519 1779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/64dcec63-1870-49dc-96fc-07ccc1fe4fbe-hostproc\") pod \"cilium-nr7q4\" (UID: \"64dcec63-1870-49dc-96fc-07ccc1fe4fbe\") " pod="kube-system/cilium-nr7q4" Dec 13 01:38:38.761667 kubelet[1779]: I1213 01:38:38.761540 1779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/64dcec63-1870-49dc-96fc-07ccc1fe4fbe-clustermesh-secrets\") pod \"cilium-nr7q4\" (UID: \"64dcec63-1870-49dc-96fc-07ccc1fe4fbe\") " pod="kube-system/cilium-nr7q4" Dec 13 01:38:38.761667 kubelet[1779]: I1213 01:38:38.761560 1779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/64dcec63-1870-49dc-96fc-07ccc1fe4fbe-cilium-config-path\") pod \"cilium-nr7q4\" (UID: \"64dcec63-1870-49dc-96fc-07ccc1fe4fbe\") " pod="kube-system/cilium-nr7q4" Dec 13 01:38:38.761823 kubelet[1779]: I1213 01:38:38.761579 1779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/64dcec63-1870-49dc-96fc-07ccc1fe4fbe-host-proc-sys-net\") pod \"cilium-nr7q4\" (UID: \"64dcec63-1870-49dc-96fc-07ccc1fe4fbe\") " pod="kube-system/cilium-nr7q4" 
Dec 13 01:38:38.761823 kubelet[1779]: I1213 01:38:38.761599 1779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f2498e88-ae49-4e71-a8a7-bbf1a0f47f02-lib-modules\") pod \"kube-proxy-bzbts\" (UID: \"f2498e88-ae49-4e71-a8a7-bbf1a0f47f02\") " pod="kube-system/kube-proxy-bzbts" Dec 13 01:38:38.761823 kubelet[1779]: I1213 01:38:38.761637 1779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/64dcec63-1870-49dc-96fc-07ccc1fe4fbe-cilium-run\") pod \"cilium-nr7q4\" (UID: \"64dcec63-1870-49dc-96fc-07ccc1fe4fbe\") " pod="kube-system/cilium-nr7q4" Dec 13 01:38:38.761823 kubelet[1779]: I1213 01:38:38.761659 1779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/64dcec63-1870-49dc-96fc-07ccc1fe4fbe-cilium-cgroup\") pod \"cilium-nr7q4\" (UID: \"64dcec63-1870-49dc-96fc-07ccc1fe4fbe\") " pod="kube-system/cilium-nr7q4" Dec 13 01:38:38.761823 kubelet[1779]: I1213 01:38:38.761679 1779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/64dcec63-1870-49dc-96fc-07ccc1fe4fbe-lib-modules\") pod \"cilium-nr7q4\" (UID: \"64dcec63-1870-49dc-96fc-07ccc1fe4fbe\") " pod="kube-system/cilium-nr7q4" Dec 13 01:38:38.764324 systemd[1]: Created slice kubepods-burstable-pod64dcec63_1870_49dc_96fc_07ccc1fe4fbe.slice - libcontainer container kubepods-burstable-pod64dcec63_1870_49dc_96fc_07ccc1fe4fbe.slice. Dec 13 01:38:39.061475 kubelet[1779]: E1213 01:38:39.061346 1779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:38:39.062103 containerd[1476]: time="2024-12-13T01:38:39.062061949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bzbts,Uid:f2498e88-ae49-4e71-a8a7-bbf1a0f47f02,Namespace:kube-system,Attempt:0,}" Dec 13 01:38:39.074087 kubelet[1779]: E1213 01:38:39.074040 1779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:38:39.074573 containerd[1476]: time="2024-12-13T01:38:39.074541814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nr7q4,Uid:64dcec63-1870-49dc-96fc-07ccc1fe4fbe,Namespace:kube-system,Attempt:0,}" Dec 13 01:38:39.741009 kubelet[1779]: E1213 01:38:39.740969 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:38:39.763310 containerd[1476]: time="2024-12-13T01:38:39.763245529Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:38:39.764389 containerd[1476]: time="2024-12-13T01:38:39.764353026Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:38:39.765397 containerd[1476]: time="2024-12-13T01:38:39.765325730Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Dec 13 01:38:39.766212 
containerd[1476]: time="2024-12-13T01:38:39.766160285Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:38:39.767194 containerd[1476]: time="2024-12-13T01:38:39.767165590Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:38:39.771942 containerd[1476]: time="2024-12-13T01:38:39.771903115Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:38:39.773070 containerd[1476]: time="2024-12-13T01:38:39.773037683Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 710.897858ms" Dec 13 01:38:39.774009 containerd[1476]: time="2024-12-13T01:38:39.773972466Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 699.331026ms" Dec 13 01:38:39.868714 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3222519113.mount: Deactivated successfully. Dec 13 01:38:40.045422 containerd[1476]: time="2024-12-13T01:38:40.045232054Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:38:40.045422 containerd[1476]: time="2024-12-13T01:38:40.045281397Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:38:40.045422 containerd[1476]: time="2024-12-13T01:38:40.045315250Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:38:40.046160 containerd[1476]: time="2024-12-13T01:38:40.044846171Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:38:40.046160 containerd[1476]: time="2024-12-13T01:38:40.045920525Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:38:40.046160 containerd[1476]: time="2024-12-13T01:38:40.045937748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:38:40.046160 containerd[1476]: time="2024-12-13T01:38:40.046029440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:38:40.046692 containerd[1476]: time="2024-12-13T01:38:40.046638021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:38:40.231359 systemd[1]: Started cri-containerd-fdf4381d4e659f794388b101195478a8eb36c38e64f1688fbc3e3676d158f9c4.scope - libcontainer container fdf4381d4e659f794388b101195478a8eb36c38e64f1688fbc3e3676d158f9c4. Dec 13 01:38:40.236487 systemd[1]: Started cri-containerd-2716bfac78a07897ca6ca3f719563cffbb500a45bc197a3ded1355f18adb1e13.scope - libcontainer container 2716bfac78a07897ca6ca3f719563cffbb500a45bc197a3ded1355f18adb1e13. Dec 13 01:38:40.272882 containerd[1476]: time="2024-12-13T01:38:40.272834079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bzbts,Uid:f2498e88-ae49-4e71-a8a7-bbf1a0f47f02,Namespace:kube-system,Attempt:0,} returns sandbox id \"fdf4381d4e659f794388b101195478a8eb36c38e64f1688fbc3e3676d158f9c4\"" Dec 13 01:38:40.273650 containerd[1476]: time="2024-12-13T01:38:40.273621736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nr7q4,Uid:64dcec63-1870-49dc-96fc-07ccc1fe4fbe,Namespace:kube-system,Attempt:0,} returns sandbox id \"2716bfac78a07897ca6ca3f719563cffbb500a45bc197a3ded1355f18adb1e13\"" Dec 13 01:38:40.275156 kubelet[1779]: E1213 01:38:40.275104 1779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:38:40.275602 kubelet[1779]: E1213 01:38:40.275308 1779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:38:40.277969 containerd[1476]: time="2024-12-13T01:38:40.277919967Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Dec 13 01:38:40.741998 kubelet[1779]: E1213 01:38:40.741942 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:38:41.742399 kubelet[1779]: E1213 01:38:41.742361 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:38:41.750033 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2576984325.mount: Deactivated successfully. 
Dec 13 01:38:42.231786 containerd[1476]: time="2024-12-13T01:38:42.231657372Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:38:42.232868 containerd[1476]: time="2024-12-13T01:38:42.232829180Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=29057470" Dec 13 01:38:42.234101 containerd[1476]: time="2024-12-13T01:38:42.234067672Z" level=info msg="ImageCreate event name:\"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:38:42.236438 containerd[1476]: time="2024-12-13T01:38:42.236392913Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:38:42.237477 containerd[1476]: time="2024-12-13T01:38:42.237355128Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"29056489\" in 1.959387151s" Dec 13 01:38:42.237477 containerd[1476]: time="2024-12-13T01:38:42.237453973Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\"" Dec 13 01:38:42.238769 containerd[1476]: time="2024-12-13T01:38:42.238732721Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 01:38:42.240224 containerd[1476]: time="2024-12-13T01:38:42.240192128Z" level=info msg="CreateContainer within sandbox \"fdf4381d4e659f794388b101195478a8eb36c38e64f1688fbc3e3676d158f9c4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 01:38:42.303824 containerd[1476]: time="2024-12-13T01:38:42.303763026Z" level=info msg="CreateContainer within sandbox \"fdf4381d4e659f794388b101195478a8eb36c38e64f1688fbc3e3676d158f9c4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d1a28b83d237bdf817e195e802798b9854e07579b978b532ed129cb81d5ef3f2\"" Dec 13 01:38:42.304435 containerd[1476]: time="2024-12-13T01:38:42.304394711Z" level=info msg="StartContainer for \"d1a28b83d237bdf817e195e802798b9854e07579b978b532ed129cb81d5ef3f2\"" Dec 13 01:38:42.364325 systemd[1]: Started cri-containerd-d1a28b83d237bdf817e195e802798b9854e07579b978b532ed129cb81d5ef3f2.scope - libcontainer container d1a28b83d237bdf817e195e802798b9854e07579b978b532ed129cb81d5ef3f2. 
Dec 13 01:38:42.490741 containerd[1476]: time="2024-12-13T01:38:42.490631932Z" level=info msg="StartContainer for \"d1a28b83d237bdf817e195e802798b9854e07579b978b532ed129cb81d5ef3f2\" returns successfully" Dec 13 01:38:42.732569 kubelet[1779]: E1213 01:38:42.732525 1779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:38:42.743573 kubelet[1779]: E1213 01:38:42.743448 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:38:42.767747 kubelet[1779]: I1213 01:38:42.767686 1779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bzbts" podStartSLOduration=3.805908003 podStartE2EDuration="5.767672802s" podCreationTimestamp="2024-12-13 01:38:37 +0000 UTC" firstStartedPulling="2024-12-13 01:38:40.276690491 +0000 UTC m=+4.290377601" lastFinishedPulling="2024-12-13 01:38:42.238455301 +0000 UTC m=+6.252142400" observedRunningTime="2024-12-13 01:38:42.767431189 +0000 UTC m=+6.781118288" watchObservedRunningTime="2024-12-13 01:38:42.767672802 +0000 UTC m=+6.781359902" Dec 13 01:38:43.733668 kubelet[1779]: E1213 01:38:43.733621 1779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:38:43.743943 kubelet[1779]: E1213 01:38:43.743902 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:38:44.745064 kubelet[1779]: E1213 01:38:44.745013 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:38:45.745996 kubelet[1779]: E1213 01:38:45.745926 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:38:46.746955 kubelet[1779]: E1213 01:38:46.746909 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:38:47.747838 kubelet[1779]: E1213 01:38:47.747753 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:38:48.748738 kubelet[1779]: E1213 01:38:48.748665 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:38:49.749773 kubelet[1779]: E1213 01:38:49.749729 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:38:50.750552 kubelet[1779]: E1213 01:38:50.750503 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:38:50.804327 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount803995161.mount: Deactivated successfully. 
Dec 13 01:38:51.750881 kubelet[1779]: E1213 01:38:51.750843 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:38:52.752704 kubelet[1779]: E1213 01:38:52.752645 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:38:53.753486 kubelet[1779]: E1213 01:38:53.753420 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:38:53.856801 containerd[1476]: time="2024-12-13T01:38:53.856719984Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:38:53.857574 containerd[1476]: time="2024-12-13T01:38:53.857521978Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735319" Dec 13 01:38:53.859102 containerd[1476]: time="2024-12-13T01:38:53.859058289Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:38:53.861111 containerd[1476]: time="2024-12-13T01:38:53.861046328Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 11.622265006s" Dec 13 01:38:53.861111 containerd[1476]: time="2024-12-13T01:38:53.861103485Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 01:38:53.863383 containerd[1476]: time="2024-12-13T01:38:53.863336292Z" level=info msg="CreateContainer within sandbox \"2716bfac78a07897ca6ca3f719563cffbb500a45bc197a3ded1355f18adb1e13\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 01:38:53.887425 containerd[1476]: time="2024-12-13T01:38:53.887360246Z" level=info msg="CreateContainer within sandbox \"2716bfac78a07897ca6ca3f719563cffbb500a45bc197a3ded1355f18adb1e13\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8bc9f68ae44adb1870d5a275b9dacf4890e3bde49594a5509ec4382985ef932e\"" Dec 13 01:38:53.888439 containerd[1476]: time="2024-12-13T01:38:53.888380850Z" level=info msg="StartContainer for \"8bc9f68ae44adb1870d5a275b9dacf4890e3bde49594a5509ec4382985ef932e\"" Dec 13 01:38:53.927362 systemd[1]: Started cri-containerd-8bc9f68ae44adb1870d5a275b9dacf4890e3bde49594a5509ec4382985ef932e.scope - libcontainer container 8bc9f68ae44adb1870d5a275b9dacf4890e3bde49594a5509ec4382985ef932e. Dec 13 01:38:53.960753 containerd[1476]: time="2024-12-13T01:38:53.960669917Z" level=info msg="StartContainer for \"8bc9f68ae44adb1870d5a275b9dacf4890e3bde49594a5509ec4382985ef932e\" returns successfully" Dec 13 01:38:53.973207 systemd[1]: cri-containerd-8bc9f68ae44adb1870d5a275b9dacf4890e3bde49594a5509ec4382985ef932e.scope: Deactivated successfully. 
Dec 13 01:38:54.054955 kubelet[1779]: E1213 01:38:54.054807 1779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:38:54.341196 containerd[1476]: time="2024-12-13T01:38:54.341024097Z" level=info msg="shim disconnected" id=8bc9f68ae44adb1870d5a275b9dacf4890e3bde49594a5509ec4382985ef932e namespace=k8s.io Dec 13 01:38:54.341196 containerd[1476]: time="2024-12-13T01:38:54.341085372Z" level=warning msg="cleaning up after shim disconnected" id=8bc9f68ae44adb1870d5a275b9dacf4890e3bde49594a5509ec4382985ef932e namespace=k8s.io Dec 13 01:38:54.341196 containerd[1476]: time="2024-12-13T01:38:54.341095491Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:38:54.753993 kubelet[1779]: E1213 01:38:54.753937 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:38:54.872950 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8bc9f68ae44adb1870d5a275b9dacf4890e3bde49594a5509ec4382985ef932e-rootfs.mount: Deactivated successfully. Dec 13 01:38:55.057611 kubelet[1779]: E1213 01:38:55.057470 1779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:38:55.059727 containerd[1476]: time="2024-12-13T01:38:55.059669494Z" level=info msg="CreateContainer within sandbox \"2716bfac78a07897ca6ca3f719563cffbb500a45bc197a3ded1355f18adb1e13\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 01:38:55.079176 containerd[1476]: time="2024-12-13T01:38:55.079098370Z" level=info msg="CreateContainer within sandbox \"2716bfac78a07897ca6ca3f719563cffbb500a45bc197a3ded1355f18adb1e13\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b3e68e8cc21f52afa20c1e51b1f657058a2f4e60c6dd10c54b2a3cdc0e8adac1\"" Dec 13 01:38:55.079718 containerd[1476]: time="2024-12-13T01:38:55.079662278Z" level=info msg="StartContainer for \"b3e68e8cc21f52afa20c1e51b1f657058a2f4e60c6dd10c54b2a3cdc0e8adac1\"" Dec 13 01:38:55.118334 systemd[1]: Started cri-containerd-b3e68e8cc21f52afa20c1e51b1f657058a2f4e60c6dd10c54b2a3cdc0e8adac1.scope - libcontainer container b3e68e8cc21f52afa20c1e51b1f657058a2f4e60c6dd10c54b2a3cdc0e8adac1. Dec 13 01:38:55.147061 containerd[1476]: time="2024-12-13T01:38:55.146999980Z" level=info msg="StartContainer for \"b3e68e8cc21f52afa20c1e51b1f657058a2f4e60c6dd10c54b2a3cdc0e8adac1\" returns successfully" Dec 13 01:38:55.159994 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:38:55.160297 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:38:55.160380 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:38:55.169688 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:38:55.170100 systemd[1]: cri-containerd-b3e68e8cc21f52afa20c1e51b1f657058a2f4e60c6dd10c54b2a3cdc0e8adac1.scope: Deactivated successfully. 
Dec 13 01:38:55.191648 containerd[1476]: time="2024-12-13T01:38:55.191579993Z" level=info msg="shim disconnected" id=b3e68e8cc21f52afa20c1e51b1f657058a2f4e60c6dd10c54b2a3cdc0e8adac1 namespace=k8s.io Dec 13 01:38:55.191648 containerd[1476]: time="2024-12-13T01:38:55.191635698Z" level=warning msg="cleaning up after shim disconnected" id=b3e68e8cc21f52afa20c1e51b1f657058a2f4e60c6dd10c54b2a3cdc0e8adac1 namespace=k8s.io Dec 13 01:38:55.191648 containerd[1476]: time="2024-12-13T01:38:55.191644765Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:38:55.193786 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:38:55.754326 kubelet[1779]: E1213 01:38:55.754280 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:38:55.873758 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b3e68e8cc21f52afa20c1e51b1f657058a2f4e60c6dd10c54b2a3cdc0e8adac1-rootfs.mount: Deactivated successfully. Dec 13 01:38:56.060304 kubelet[1779]: E1213 01:38:56.060192 1779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:38:56.061931 containerd[1476]: time="2024-12-13T01:38:56.061883733Z" level=info msg="CreateContainer within sandbox \"2716bfac78a07897ca6ca3f719563cffbb500a45bc197a3ded1355f18adb1e13\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 01:38:56.078419 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3341179847.mount: Deactivated successfully. Dec 13 01:38:56.080511 containerd[1476]: time="2024-12-13T01:38:56.080454290Z" level=info msg="CreateContainer within sandbox \"2716bfac78a07897ca6ca3f719563cffbb500a45bc197a3ded1355f18adb1e13\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2baf5f9222186f85f086666e985440f3ca96f8ae5b7c059a157bd6e604d1ed49\"" Dec 13 01:38:56.081067 containerd[1476]: time="2024-12-13T01:38:56.081040068Z" level=info msg="StartContainer for \"2baf5f9222186f85f086666e985440f3ca96f8ae5b7c059a157bd6e604d1ed49\"" Dec 13 01:38:56.110335 systemd[1]: Started cri-containerd-2baf5f9222186f85f086666e985440f3ca96f8ae5b7c059a157bd6e604d1ed49.scope - libcontainer container 2baf5f9222186f85f086666e985440f3ca96f8ae5b7c059a157bd6e604d1ed49. Dec 13 01:38:56.144757 systemd[1]: cri-containerd-2baf5f9222186f85f086666e985440f3ca96f8ae5b7c059a157bd6e604d1ed49.scope: Deactivated successfully. 
Dec 13 01:38:56.145245 containerd[1476]: time="2024-12-13T01:38:56.144914736Z" level=info msg="StartContainer for \"2baf5f9222186f85f086666e985440f3ca96f8ae5b7c059a157bd6e604d1ed49\" returns successfully" Dec 13 01:38:56.170829 containerd[1476]: time="2024-12-13T01:38:56.170761378Z" level=info msg="shim disconnected" id=2baf5f9222186f85f086666e985440f3ca96f8ae5b7c059a157bd6e604d1ed49 namespace=k8s.io Dec 13 01:38:56.170829 containerd[1476]: time="2024-12-13T01:38:56.170811712Z" level=warning msg="cleaning up after shim disconnected" id=2baf5f9222186f85f086666e985440f3ca96f8ae5b7c059a157bd6e604d1ed49 namespace=k8s.io Dec 13 01:38:56.170829 containerd[1476]: time="2024-12-13T01:38:56.170821991Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:38:56.740167 kubelet[1779]: E1213 01:38:56.740059 1779 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:38:56.754755 kubelet[1779]: E1213 01:38:56.754641 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:38:56.873669 systemd[1]: run-containerd-runc-k8s.io-2baf5f9222186f85f086666e985440f3ca96f8ae5b7c059a157bd6e604d1ed49-runc.elhrUI.mount: Deactivated successfully. Dec 13 01:38:56.873819 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2baf5f9222186f85f086666e985440f3ca96f8ae5b7c059a157bd6e604d1ed49-rootfs.mount: Deactivated successfully. Dec 13 01:38:57.064900 kubelet[1779]: E1213 01:38:57.064757 1779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:38:57.066842 containerd[1476]: time="2024-12-13T01:38:57.066796534Z" level=info msg="CreateContainer within sandbox \"2716bfac78a07897ca6ca3f719563cffbb500a45bc197a3ded1355f18adb1e13\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 01:38:57.082936 containerd[1476]: time="2024-12-13T01:38:57.082869937Z" level=info msg="CreateContainer within sandbox \"2716bfac78a07897ca6ca3f719563cffbb500a45bc197a3ded1355f18adb1e13\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d7f734890afb2d5545fef88201a5d3ece1eecaa8c08b6f5826fe67d7b231d90e\"" Dec 13 01:38:57.083646 containerd[1476]: time="2024-12-13T01:38:57.083609814Z" level=info msg="StartContainer for \"d7f734890afb2d5545fef88201a5d3ece1eecaa8c08b6f5826fe67d7b231d90e\"" Dec 13 01:38:57.123483 systemd[1]: Started cri-containerd-d7f734890afb2d5545fef88201a5d3ece1eecaa8c08b6f5826fe67d7b231d90e.scope - libcontainer container d7f734890afb2d5545fef88201a5d3ece1eecaa8c08b6f5826fe67d7b231d90e. Dec 13 01:38:57.151827 systemd[1]: cri-containerd-d7f734890afb2d5545fef88201a5d3ece1eecaa8c08b6f5826fe67d7b231d90e.scope: Deactivated successfully. 
Dec 13 01:38:57.188867 containerd[1476]: time="2024-12-13T01:38:57.188800875Z" level=info msg="StartContainer for \"d7f734890afb2d5545fef88201a5d3ece1eecaa8c08b6f5826fe67d7b231d90e\" returns successfully" Dec 13 01:38:57.217096 containerd[1476]: time="2024-12-13T01:38:57.217015468Z" level=info msg="shim disconnected" id=d7f734890afb2d5545fef88201a5d3ece1eecaa8c08b6f5826fe67d7b231d90e namespace=k8s.io Dec 13 01:38:57.217096 containerd[1476]: time="2024-12-13T01:38:57.217086541Z" level=warning msg="cleaning up after shim disconnected" id=d7f734890afb2d5545fef88201a5d3ece1eecaa8c08b6f5826fe67d7b231d90e namespace=k8s.io Dec 13 01:38:57.217096 containerd[1476]: time="2024-12-13T01:38:57.217097842Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:38:57.755600 kubelet[1779]: E1213 01:38:57.755522 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:38:57.873276 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d7f734890afb2d5545fef88201a5d3ece1eecaa8c08b6f5826fe67d7b231d90e-rootfs.mount: Deactivated successfully. Dec 13 01:38:58.068867 kubelet[1779]: E1213 01:38:58.068730 1779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:38:58.070909 containerd[1476]: time="2024-12-13T01:38:58.070876654Z" level=info msg="CreateContainer within sandbox \"2716bfac78a07897ca6ca3f719563cffbb500a45bc197a3ded1355f18adb1e13\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 01:38:58.091824 containerd[1476]: time="2024-12-13T01:38:58.089782971Z" level=info msg="CreateContainer within sandbox \"2716bfac78a07897ca6ca3f719563cffbb500a45bc197a3ded1355f18adb1e13\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"898cf1cfdb97eadc91cd6ef4ee7739ef467958616b17da8398dd348549422dfb\"" Dec 13 01:38:58.092687 containerd[1476]: time="2024-12-13T01:38:58.092649595Z" level=info msg="StartContainer for \"898cf1cfdb97eadc91cd6ef4ee7739ef467958616b17da8398dd348549422dfb\"" Dec 13 01:38:58.122323 systemd[1]: Started cri-containerd-898cf1cfdb97eadc91cd6ef4ee7739ef467958616b17da8398dd348549422dfb.scope - libcontainer container 898cf1cfdb97eadc91cd6ef4ee7739ef467958616b17da8398dd348549422dfb. 
Dec 13 01:38:58.157945 containerd[1476]: time="2024-12-13T01:38:58.157886411Z" level=info msg="StartContainer for \"898cf1cfdb97eadc91cd6ef4ee7739ef467958616b17da8398dd348549422dfb\" returns successfully" Dec 13 01:38:58.297178 kubelet[1779]: I1213 01:38:58.297060 1779 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 01:38:58.745186 kernel: Initializing XFRM netlink socket Dec 13 01:38:58.756479 kubelet[1779]: E1213 01:38:58.756424 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:38:59.074305 kubelet[1779]: E1213 01:38:59.074128 1779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:38:59.413671 kubelet[1779]: I1213 01:38:59.413591 1779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-nr7q4" podStartSLOduration=8.828494114 podStartE2EDuration="22.413567877s" podCreationTimestamp="2024-12-13 01:38:37 +0000 UTC" firstStartedPulling="2024-12-13 01:38:40.276949587 +0000 UTC m=+4.290636696" lastFinishedPulling="2024-12-13 01:38:53.86202336 +0000 UTC m=+17.875710459" observedRunningTime="2024-12-13 01:38:59.087299991 +0000 UTC m=+23.100987100" watchObservedRunningTime="2024-12-13 01:38:59.413567877 +0000 UTC m=+23.427254976" Dec 13 01:38:59.413968 kubelet[1779]: I1213 01:38:59.413940 1779 topology_manager.go:215] "Topology Admit Handler" podUID="248256e0-468c-4302-b5f0-6f3968c7d729" podNamespace="default" podName="nginx-deployment-85f456d6dd-p2cn9" Dec 13 01:38:59.419977 systemd[1]: Created slice kubepods-besteffort-pod248256e0_468c_4302_b5f0_6f3968c7d729.slice - libcontainer container kubepods-besteffort-pod248256e0_468c_4302_b5f0_6f3968c7d729.slice. 
Dec 13 01:38:59.517514 kubelet[1779]: I1213 01:38:59.517436 1779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98kz4\" (UniqueName: \"kubernetes.io/projected/248256e0-468c-4302-b5f0-6f3968c7d729-kube-api-access-98kz4\") pod \"nginx-deployment-85f456d6dd-p2cn9\" (UID: \"248256e0-468c-4302-b5f0-6f3968c7d729\") " pod="default/nginx-deployment-85f456d6dd-p2cn9" Dec 13 01:38:59.723974 containerd[1476]: time="2024-12-13T01:38:59.723850713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-p2cn9,Uid:248256e0-468c-4302-b5f0-6f3968c7d729,Namespace:default,Attempt:0,}" Dec 13 01:38:59.757076 kubelet[1779]: E1213 01:38:59.757020 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:39:00.075256 kubelet[1779]: E1213 01:39:00.075092 1779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:39:00.454336 systemd-networkd[1409]: cilium_host: Link UP Dec 13 01:39:00.454709 systemd-networkd[1409]: cilium_net: Link UP Dec 13 01:39:00.454714 systemd-networkd[1409]: cilium_net: Gained carrier Dec 13 01:39:00.455081 systemd-networkd[1409]: cilium_host: Gained carrier Dec 13 01:39:00.588934 systemd-networkd[1409]: cilium_vxlan: Link UP Dec 13 01:39:00.588947 systemd-networkd[1409]: cilium_vxlan: Gained carrier Dec 13 01:39:00.757539 kubelet[1779]: E1213 01:39:00.757370 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:39:00.908243 kernel: NET: Registered PF_ALG protocol family Dec 13 01:39:01.076168 kubelet[1779]: E1213 01:39:01.075993 1779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:39:01.108345 systemd-networkd[1409]: cilium_host: Gained IPv6LL Dec 13 01:39:01.364592 systemd-networkd[1409]: cilium_net: Gained IPv6LL Dec 13 01:39:01.642898 systemd-networkd[1409]: lxc_health: Link UP Dec 13 01:39:01.651367 systemd-networkd[1409]: lxc_health: Gained carrier Dec 13 01:39:01.757736 kubelet[1779]: E1213 01:39:01.757587 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:39:01.763056 systemd-networkd[1409]: lxc022e45c0433a: Link UP Dec 13 01:39:01.778181 kernel: eth0: renamed from tmp96e7d Dec 13 01:39:01.785053 systemd-networkd[1409]: lxc022e45c0433a: Gained carrier Dec 13 01:39:02.068394 systemd-networkd[1409]: cilium_vxlan: Gained IPv6LL Dec 13 01:39:02.758066 kubelet[1779]: E1213 01:39:02.758010 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:39:03.076090 kubelet[1779]: E1213 01:39:03.075942 1779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:39:03.079909 kubelet[1779]: E1213 01:39:03.079870 1779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:39:03.092360 systemd-networkd[1409]: lxc_health: Gained IPv6LL Dec 13 01:39:03.412395 systemd-networkd[1409]: lxc022e45c0433a: 
Gained IPv6LL Dec 13 01:39:03.758916 kubelet[1779]: E1213 01:39:03.758766 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:39:04.081872 kubelet[1779]: E1213 01:39:04.081758 1779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:39:04.759520 kubelet[1779]: E1213 01:39:04.759454 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:39:05.435028 containerd[1476]: time="2024-12-13T01:39:05.434899881Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:39:05.435028 containerd[1476]: time="2024-12-13T01:39:05.434979614Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:39:05.435028 containerd[1476]: time="2024-12-13T01:39:05.434991216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:39:05.435532 containerd[1476]: time="2024-12-13T01:39:05.435087921Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:39:05.458918 systemd[1]: run-containerd-runc-k8s.io-96e7d70342b3b85b957db43c33aa9e4cd84a1a338fa469647b5f43b3b2de1cdb-runc.FpZzG3.mount: Deactivated successfully. Dec 13 01:39:05.470293 systemd[1]: Started cri-containerd-96e7d70342b3b85b957db43c33aa9e4cd84a1a338fa469647b5f43b3b2de1cdb.scope - libcontainer container 96e7d70342b3b85b957db43c33aa9e4cd84a1a338fa469647b5f43b3b2de1cdb. Dec 13 01:39:05.484438 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:39:05.510471 containerd[1476]: time="2024-12-13T01:39:05.510425607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-p2cn9,Uid:248256e0-468c-4302-b5f0-6f3968c7d729,Namespace:default,Attempt:0,} returns sandbox id \"96e7d70342b3b85b957db43c33aa9e4cd84a1a338fa469647b5f43b3b2de1cdb\"" Dec 13 01:39:05.512035 containerd[1476]: time="2024-12-13T01:39:05.511993152Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 01:39:05.760400 kubelet[1779]: E1213 01:39:05.760275 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:39:06.761037 kubelet[1779]: E1213 01:39:06.760975 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:39:07.761872 kubelet[1779]: E1213 01:39:07.761813 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:39:08.762496 kubelet[1779]: E1213 01:39:08.762437 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:39:08.803222 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount276152154.mount: Deactivated successfully. 
Dec 13 01:39:09.763377 kubelet[1779]: E1213 01:39:09.763337 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:39:09.919787 containerd[1476]: time="2024-12-13T01:39:09.919726104Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:39:09.920531 containerd[1476]: time="2024-12-13T01:39:09.920494386Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71036027" Dec 13 01:39:09.921636 containerd[1476]: time="2024-12-13T01:39:09.921595080Z" level=info msg="ImageCreate event name:\"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:39:09.923994 containerd[1476]: time="2024-12-13T01:39:09.923962324Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:39:09.924855 containerd[1476]: time="2024-12-13T01:39:09.924817702Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\", size \"71035905\" in 4.412794171s" Dec 13 01:39:09.924855 containerd[1476]: time="2024-12-13T01:39:09.924844342Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 01:39:09.926793 containerd[1476]: time="2024-12-13T01:39:09.926761329Z" level=info msg="CreateContainer within sandbox \"96e7d70342b3b85b957db43c33aa9e4cd84a1a338fa469647b5f43b3b2de1cdb\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Dec 13 01:39:09.943932 containerd[1476]: time="2024-12-13T01:39:09.943855950Z" level=info msg="CreateContainer within sandbox \"96e7d70342b3b85b957db43c33aa9e4cd84a1a338fa469647b5f43b3b2de1cdb\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"71c4aabf515e677f3a779b1828d05c16337ee1e0e3e696b076c826198574175d\"" Dec 13 01:39:09.944450 containerd[1476]: time="2024-12-13T01:39:09.944419111Z" level=info msg="StartContainer for \"71c4aabf515e677f3a779b1828d05c16337ee1e0e3e696b076c826198574175d\"" Dec 13 01:39:09.972251 systemd[1]: run-containerd-runc-k8s.io-71c4aabf515e677f3a779b1828d05c16337ee1e0e3e696b076c826198574175d-runc.Es9ch3.mount: Deactivated successfully. Dec 13 01:39:09.984314 systemd[1]: Started cri-containerd-71c4aabf515e677f3a779b1828d05c16337ee1e0e3e696b076c826198574175d.scope - libcontainer container 71c4aabf515e677f3a779b1828d05c16337ee1e0e3e696b076c826198574175d. 
Dec 13 01:39:10.112582 containerd[1476]: time="2024-12-13T01:39:10.112515237Z" level=info msg="StartContainer for \"71c4aabf515e677f3a779b1828d05c16337ee1e0e3e696b076c826198574175d\" returns successfully" Dec 13 01:39:10.174572 kubelet[1779]: I1213 01:39:10.174488 1779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-p2cn9" podStartSLOduration=6.76031413 podStartE2EDuration="11.17447197s" podCreationTimestamp="2024-12-13 01:38:59 +0000 UTC" firstStartedPulling="2024-12-13 01:39:05.511541779 +0000 UTC m=+29.525228878" lastFinishedPulling="2024-12-13 01:39:09.925699619 +0000 UTC m=+33.939386718" observedRunningTime="2024-12-13 01:39:10.174385566 +0000 UTC m=+34.188072655" watchObservedRunningTime="2024-12-13 01:39:10.17447197 +0000 UTC m=+34.188159069" Dec 13 01:39:10.469957 update_engine[1455]: I20241213 01:39:10.469734 1455 update_attempter.cc:509] Updating boot flags... Dec 13 01:39:10.503170 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2982) Dec 13 01:39:10.537183 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2981) Dec 13 01:39:10.572194 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2981) Dec 13 01:39:10.764117 kubelet[1779]: E1213 01:39:10.763968 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:39:11.765120 kubelet[1779]: E1213 01:39:11.765019 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:39:12.765745 kubelet[1779]: E1213 01:39:12.765685 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:39:13.766162 kubelet[1779]: E1213 01:39:13.766090 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:39:14.767287 kubelet[1779]: E1213 01:39:14.767232 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:39:15.768450 kubelet[1779]: E1213 01:39:15.768367 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:39:16.739688 kubelet[1779]: E1213 01:39:16.739591 1779 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:39:16.769005 kubelet[1779]: E1213 01:39:16.768904 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:39:17.554930 kubelet[1779]: I1213 01:39:17.554864 1779 topology_manager.go:215] "Topology Admit Handler" podUID="3dab260d-4afc-4c65-b3c9-1c50e8d17c3f" podNamespace="default" podName="nfs-server-provisioner-0" Dec 13 01:39:17.565599 systemd[1]: Created slice kubepods-besteffort-pod3dab260d_4afc_4c65_b3c9_1c50e8d17c3f.slice - libcontainer container kubepods-besteffort-pod3dab260d_4afc_4c65_b3c9_1c50e8d17c3f.slice. 
Dec 13 01:39:17.618443 kubelet[1779]: I1213 01:39:17.618363 1779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8wkv\" (UniqueName: \"kubernetes.io/projected/3dab260d-4afc-4c65-b3c9-1c50e8d17c3f-kube-api-access-r8wkv\") pod \"nfs-server-provisioner-0\" (UID: \"3dab260d-4afc-4c65-b3c9-1c50e8d17c3f\") " pod="default/nfs-server-provisioner-0" Dec 13 01:39:17.618443 kubelet[1779]: I1213 01:39:17.618425 1779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/3dab260d-4afc-4c65-b3c9-1c50e8d17c3f-data\") pod \"nfs-server-provisioner-0\" (UID: \"3dab260d-4afc-4c65-b3c9-1c50e8d17c3f\") " pod="default/nfs-server-provisioner-0" Dec 13 01:39:17.770193 kubelet[1779]: E1213 01:39:17.770081 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:39:17.871113 containerd[1476]: time="2024-12-13T01:39:17.870552617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:3dab260d-4afc-4c65-b3c9-1c50e8d17c3f,Namespace:default,Attempt:0,}" Dec 13 01:39:17.943257 systemd-networkd[1409]: lxc778f6f64a363: Link UP Dec 13 01:39:17.952249 kernel: eth0: renamed from tmpf3164 Dec 13 01:39:17.960820 systemd-networkd[1409]: lxc778f6f64a363: Gained carrier Dec 13 01:39:18.199524 containerd[1476]: time="2024-12-13T01:39:18.199302594Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:39:18.199524 containerd[1476]: time="2024-12-13T01:39:18.199359562Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:39:18.199524 containerd[1476]: time="2024-12-13T01:39:18.199371885Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:39:18.199524 containerd[1476]: time="2024-12-13T01:39:18.199474599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:39:18.219301 systemd[1]: Started cri-containerd-f3164062b102ef9f8e45927538240c9d2b389263a920d39574a2a2bce28041f5.scope - libcontainer container f3164062b102ef9f8e45927538240c9d2b389263a920d39574a2a2bce28041f5. 
Dec 13 01:39:18.230865 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:39:18.256261 containerd[1476]: time="2024-12-13T01:39:18.256190320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:3dab260d-4afc-4c65-b3c9-1c50e8d17c3f,Namespace:default,Attempt:0,} returns sandbox id \"f3164062b102ef9f8e45927538240c9d2b389263a920d39574a2a2bce28041f5\"" Dec 13 01:39:18.257841 containerd[1476]: time="2024-12-13T01:39:18.257811105Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Dec 13 01:39:18.771226 kubelet[1779]: E1213 01:39:18.771110 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:39:19.540521 systemd-networkd[1409]: lxc778f6f64a363: Gained IPv6LL Dec 13 01:39:19.771841 kubelet[1779]: E1213 01:39:19.771751 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:39:20.667757 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount803106460.mount: Deactivated successfully. Dec 13 01:39:20.772718 kubelet[1779]: E1213 01:39:20.772656 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:39:21.773706 kubelet[1779]: E1213 01:39:21.773637 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:39:22.774160 kubelet[1779]: E1213 01:39:22.774062 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:39:23.004448 containerd[1476]: time="2024-12-13T01:39:23.004356151Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:39:23.007021 containerd[1476]: time="2024-12-13T01:39:23.006698108Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Dec 13 01:39:23.008610 containerd[1476]: time="2024-12-13T01:39:23.008403506Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:39:23.012655 containerd[1476]: time="2024-12-13T01:39:23.012596455Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:39:23.013729 containerd[1476]: time="2024-12-13T01:39:23.013652768Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 4.755790838s" Dec 13 01:39:23.013729 containerd[1476]: time="2024-12-13T01:39:23.013716798Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Dec 13 01:39:23.016482 containerd[1476]: time="2024-12-13T01:39:23.016449863Z" 
level=info msg="CreateContainer within sandbox \"f3164062b102ef9f8e45927538240c9d2b389263a920d39574a2a2bce28041f5\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Dec 13 01:39:23.030372 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount532991137.mount: Deactivated successfully. Dec 13 01:39:23.041483 containerd[1476]: time="2024-12-13T01:39:23.041413564Z" level=info msg="CreateContainer within sandbox \"f3164062b102ef9f8e45927538240c9d2b389263a920d39574a2a2bce28041f5\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"3309a5c1e1b4719844f71df1902239e7ed36f3df28cff12a992fc587e1ce9cfd\"" Dec 13 01:39:23.042233 containerd[1476]: time="2024-12-13T01:39:23.042013645Z" level=info msg="StartContainer for \"3309a5c1e1b4719844f71df1902239e7ed36f3df28cff12a992fc587e1ce9cfd\"" Dec 13 01:39:23.119629 systemd[1]: Started cri-containerd-3309a5c1e1b4719844f71df1902239e7ed36f3df28cff12a992fc587e1ce9cfd.scope - libcontainer container 3309a5c1e1b4719844f71df1902239e7ed36f3df28cff12a992fc587e1ce9cfd. Dec 13 01:39:23.235608 containerd[1476]: time="2024-12-13T01:39:23.235529140Z" level=info msg="StartContainer for \"3309a5c1e1b4719844f71df1902239e7ed36f3df28cff12a992fc587e1ce9cfd\" returns successfully" Dec 13 01:39:23.775280 kubelet[1779]: E1213 01:39:23.775215 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:39:24.284454 kubelet[1779]: I1213 01:39:24.284386 1779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.527174498 podStartE2EDuration="7.284361711s" podCreationTimestamp="2024-12-13 01:39:17 +0000 UTC" firstStartedPulling="2024-12-13 01:39:18.257542557 +0000 UTC m=+42.271229656" lastFinishedPulling="2024-12-13 01:39:23.01472977 +0000 UTC m=+47.028416869" observedRunningTime="2024-12-13 01:39:24.284321285 +0000 UTC m=+48.298008384" watchObservedRunningTime="2024-12-13 01:39:24.284361711 +0000 UTC m=+48.298048820" Dec 13 01:39:24.775511 kubelet[1779]: E1213 01:39:24.775429 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:39:25.776427 kubelet[1779]: E1213 01:39:25.776363 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:39:26.777291 kubelet[1779]: E1213 01:39:26.777225 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:39:27.778335 kubelet[1779]: E1213 01:39:27.778274 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:39:28.778684 kubelet[1779]: E1213 01:39:28.778597 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:39:29.779104 kubelet[1779]: E1213 01:39:29.779020 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:39:30.780259 kubelet[1779]: E1213 01:39:30.780174 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:39:31.780860 kubelet[1779]: E1213 01:39:31.780796 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:39:32.781102 kubelet[1779]: E1213 
01:39:32.781027 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:39:33.346028 kubelet[1779]: I1213 01:39:33.345949 1779 topology_manager.go:215] "Topology Admit Handler" podUID="d5b394a8-2634-4b55-9780-98a9968c15dd" podNamespace="default" podName="test-pod-1" Dec 13 01:39:33.352750 systemd[1]: Created slice kubepods-besteffort-podd5b394a8_2634_4b55_9780_98a9968c15dd.slice - libcontainer container kubepods-besteffort-podd5b394a8_2634_4b55_9780_98a9968c15dd.slice. Dec 13 01:39:33.502324 kubelet[1779]: I1213 01:39:33.502233 1779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4fc298f1-ca19-4959-863e-ac1da94561d6\" (UniqueName: \"kubernetes.io/nfs/d5b394a8-2634-4b55-9780-98a9968c15dd-pvc-4fc298f1-ca19-4959-863e-ac1da94561d6\") pod \"test-pod-1\" (UID: \"d5b394a8-2634-4b55-9780-98a9968c15dd\") " pod="default/test-pod-1" Dec 13 01:39:33.502324 kubelet[1779]: I1213 01:39:33.502306 1779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vfhq\" (UniqueName: \"kubernetes.io/projected/d5b394a8-2634-4b55-9780-98a9968c15dd-kube-api-access-7vfhq\") pod \"test-pod-1\" (UID: \"d5b394a8-2634-4b55-9780-98a9968c15dd\") " pod="default/test-pod-1" Dec 13 01:39:33.634187 kernel: FS-Cache: Loaded Dec 13 01:39:33.706661 kernel: RPC: Registered named UNIX socket transport module. Dec 13 01:39:33.706795 kernel: RPC: Registered udp transport module. Dec 13 01:39:33.706823 kernel: RPC: Registered tcp transport module. Dec 13 01:39:33.707259 kernel: RPC: Registered tcp-with-tls transport module. Dec 13 01:39:33.708764 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Dec 13 01:39:33.781278 kubelet[1779]: E1213 01:39:33.781211 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:39:34.023787 kernel: NFS: Registering the id_resolver key type Dec 13 01:39:34.023953 kernel: Key type id_resolver registered Dec 13 01:39:34.023981 kernel: Key type id_legacy registered Dec 13 01:39:34.056469 nfsidmap[3182]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Dec 13 01:39:34.061864 nfsidmap[3185]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Dec 13 01:39:34.256657 containerd[1476]: time="2024-12-13T01:39:34.256611327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:d5b394a8-2634-4b55-9780-98a9968c15dd,Namespace:default,Attempt:0,}" Dec 13 01:39:34.663244 systemd-networkd[1409]: lxc715e2d3cda59: Link UP Dec 13 01:39:34.688162 kernel: eth0: renamed from tmpd2e15 Dec 13 01:39:34.692302 systemd-networkd[1409]: lxc715e2d3cda59: Gained carrier Dec 13 01:39:34.781660 kubelet[1779]: E1213 01:39:34.781589 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:39:34.967606 containerd[1476]: time="2024-12-13T01:39:34.967311879Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:39:34.967606 containerd[1476]: time="2024-12-13T01:39:34.967422828Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:39:34.967606 containerd[1476]: time="2024-12-13T01:39:34.967443687Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:39:34.967871 containerd[1476]: time="2024-12-13T01:39:34.967551409Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:39:34.994364 systemd[1]: Started cri-containerd-d2e15fd420a835faa3c9419331bd61023f9298378d75f1e2fccf9a80ff4a31a8.scope - libcontainer container d2e15fd420a835faa3c9419331bd61023f9298378d75f1e2fccf9a80ff4a31a8. Dec 13 01:39:35.008575 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:39:35.035776 containerd[1476]: time="2024-12-13T01:39:35.035686794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:d5b394a8-2634-4b55-9780-98a9968c15dd,Namespace:default,Attempt:0,} returns sandbox id \"d2e15fd420a835faa3c9419331bd61023f9298378d75f1e2fccf9a80ff4a31a8\"" Dec 13 01:39:35.037747 containerd[1476]: time="2024-12-13T01:39:35.037724856Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 01:39:35.782402 kubelet[1779]: E1213 01:39:35.782337 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:39:35.834865 containerd[1476]: time="2024-12-13T01:39:35.834779039Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:39:35.837004 containerd[1476]: time="2024-12-13T01:39:35.836933000Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Dec 13 01:39:35.839996 containerd[1476]: time="2024-12-13T01:39:35.839945124Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\", size \"71035905\" in 802.187085ms" Dec 13 01:39:35.839996 containerd[1476]: time="2024-12-13T01:39:35.839988976Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 01:39:35.842173 containerd[1476]: time="2024-12-13T01:39:35.842112229Z" level=info msg="CreateContainer within sandbox \"d2e15fd420a835faa3c9419331bd61023f9298378d75f1e2fccf9a80ff4a31a8\" for container &ContainerMetadata{Name:test,Attempt:0,}" Dec 13 01:39:35.864761 containerd[1476]: time="2024-12-13T01:39:35.864707196Z" level=info msg="CreateContainer within sandbox \"d2e15fd420a835faa3c9419331bd61023f9298378d75f1e2fccf9a80ff4a31a8\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"e78a03430fb4cf9d509fc996ea4024fce0529c69460f2742a7410ad18a728f7a\"" Dec 13 01:39:35.865412 containerd[1476]: time="2024-12-13T01:39:35.865375674Z" level=info msg="StartContainer for \"e78a03430fb4cf9d509fc996ea4024fce0529c69460f2742a7410ad18a728f7a\"" Dec 13 01:39:35.908441 systemd[1]: Started cri-containerd-e78a03430fb4cf9d509fc996ea4024fce0529c69460f2742a7410ad18a728f7a.scope - libcontainer container e78a03430fb4cf9d509fc996ea4024fce0529c69460f2742a7410ad18a728f7a. 
Dec 13 01:39:35.938880 containerd[1476]: time="2024-12-13T01:39:35.938837492Z" level=info msg="StartContainer for \"e78a03430fb4cf9d509fc996ea4024fce0529c69460f2742a7410ad18a728f7a\" returns successfully" Dec 13 01:39:36.372378 systemd-networkd[1409]: lxc715e2d3cda59: Gained IPv6LL Dec 13 01:39:36.739556 kubelet[1779]: E1213 01:39:36.739435 1779 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:39:36.783540 kubelet[1779]: E1213 01:39:36.783486 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:39:37.783926 kubelet[1779]: E1213 01:39:37.783848 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:39:38.784698 kubelet[1779]: E1213 01:39:38.784609 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:39:39.785660 kubelet[1779]: E1213 01:39:39.785547 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:39:40.786106 kubelet[1779]: E1213 01:39:40.786037 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:39:41.030426 kubelet[1779]: I1213 01:39:41.030357 1779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=23.22694383 podStartE2EDuration="24.030337495s" podCreationTimestamp="2024-12-13 01:39:17 +0000 UTC" firstStartedPulling="2024-12-13 01:39:35.037329382 +0000 UTC m=+59.051016471" lastFinishedPulling="2024-12-13 01:39:35.840723026 +0000 UTC m=+59.854410136" observedRunningTime="2024-12-13 01:39:36.183900033 +0000 UTC m=+60.197587162" watchObservedRunningTime="2024-12-13 01:39:41.030337495 +0000 UTC m=+65.044024594" Dec 13 01:39:41.056775 containerd[1476]: time="2024-12-13T01:39:41.056613441Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:39:41.064654 containerd[1476]: time="2024-12-13T01:39:41.064603954Z" level=info msg="StopContainer for \"898cf1cfdb97eadc91cd6ef4ee7739ef467958616b17da8398dd348549422dfb\" with timeout 2 (s)" Dec 13 01:39:41.064977 containerd[1476]: time="2024-12-13T01:39:41.064937480Z" level=info msg="Stop container \"898cf1cfdb97eadc91cd6ef4ee7739ef467958616b17da8398dd348549422dfb\" with signal terminated" Dec 13 01:39:41.071351 systemd-networkd[1409]: lxc_health: Link DOWN Dec 13 01:39:41.071362 systemd-networkd[1409]: lxc_health: Lost carrier Dec 13 01:39:41.099821 systemd[1]: cri-containerd-898cf1cfdb97eadc91cd6ef4ee7739ef467958616b17da8398dd348549422dfb.scope: Deactivated successfully. Dec 13 01:39:41.100411 systemd[1]: cri-containerd-898cf1cfdb97eadc91cd6ef4ee7739ef467958616b17da8398dd348549422dfb.scope: Consumed 7.626s CPU time. Dec 13 01:39:41.119991 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-898cf1cfdb97eadc91cd6ef4ee7739ef467958616b17da8398dd348549422dfb-rootfs.mount: Deactivated successfully. 
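The two pod_startup_latency_tracker entries above break a pod's startup into an end-to-end figure and an SLO figure that excludes image pulling. The following minimal Python sketch (the parse_k8s_time helper is ours, not kubelet's) re-derives the nfs-server-provisioner-0 numbers from the timestamps printed in its entry: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that value minus the firstStartedPulling-to-lastFinishedPulling window.

from datetime import datetime, timezone
from decimal import Decimal

def parse_k8s_time(ts: str) -> Decimal:
    # Turn "2024-12-13 01:39:18.257542557 +0000 UTC" into exact POSIX seconds;
    # Decimal keeps the nanosecond digits that a float would round away.
    date_part, clock, *_ = ts.split()
    clock, _, frac = clock.partition(".")
    whole = datetime.strptime(f"{date_part} {clock}", "%Y-%m-%d %H:%M:%S")
    whole = whole.replace(tzinfo=timezone.utc)
    return Decimal(int(whole.timestamp())) + Decimal("0." + (frac or "0"))

creation   = parse_k8s_time("2024-12-13 01:39:17 +0000 UTC")            # podCreationTimestamp
first_pull = parse_k8s_time("2024-12-13 01:39:18.257542557 +0000 UTC")  # firstStartedPulling
last_pull  = parse_k8s_time("2024-12-13 01:39:23.01472977 +0000 UTC")   # lastFinishedPulling
observed   = parse_k8s_time("2024-12-13 01:39:24.284361711 +0000 UTC")  # watchObservedRunningTime

e2e  = observed - creation     # 7.284361711 s  -> podStartE2EDuration
pull = last_pull - first_pull  # 4.757187213 s  -> pulling window
slo  = e2e - pull              # 2.527174498 s  -> podStartSLOduration
print(e2e, pull, slo)

For this pod the wall-clock pull window matches the monotonic m=+... offsets in the entry, so the printed values reproduce the logged figures exactly.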
Dec 13 01:39:41.374056 containerd[1476]: time="2024-12-13T01:39:41.373907871Z" level=info msg="shim disconnected" id=898cf1cfdb97eadc91cd6ef4ee7739ef467958616b17da8398dd348549422dfb namespace=k8s.io Dec 13 01:39:41.374056 containerd[1476]: time="2024-12-13T01:39:41.373988222Z" level=warning msg="cleaning up after shim disconnected" id=898cf1cfdb97eadc91cd6ef4ee7739ef467958616b17da8398dd348549422dfb namespace=k8s.io Dec 13 01:39:41.374056 containerd[1476]: time="2024-12-13T01:39:41.373999733Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:39:41.553531 containerd[1476]: time="2024-12-13T01:39:41.553455855Z" level=info msg="StopContainer for \"898cf1cfdb97eadc91cd6ef4ee7739ef467958616b17da8398dd348549422dfb\" returns successfully" Dec 13 01:39:41.554359 containerd[1476]: time="2024-12-13T01:39:41.554329566Z" level=info msg="StopPodSandbox for \"2716bfac78a07897ca6ca3f719563cffbb500a45bc197a3ded1355f18adb1e13\"" Dec 13 01:39:41.554409 containerd[1476]: time="2024-12-13T01:39:41.554375022Z" level=info msg="Container to stop \"898cf1cfdb97eadc91cd6ef4ee7739ef467958616b17da8398dd348549422dfb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:39:41.554409 containerd[1476]: time="2024-12-13T01:39:41.554393346Z" level=info msg="Container to stop \"b3e68e8cc21f52afa20c1e51b1f657058a2f4e60c6dd10c54b2a3cdc0e8adac1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:39:41.554472 containerd[1476]: time="2024-12-13T01:39:41.554406301Z" level=info msg="Container to stop \"d7f734890afb2d5545fef88201a5d3ece1eecaa8c08b6f5826fe67d7b231d90e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:39:41.554472 containerd[1476]: time="2024-12-13T01:39:41.554420768Z" level=info msg="Container to stop \"2baf5f9222186f85f086666e985440f3ca96f8ae5b7c059a157bd6e604d1ed49\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:39:41.554472 containerd[1476]: time="2024-12-13T01:39:41.554434073Z" level=info msg="Container to stop \"8bc9f68ae44adb1870d5a275b9dacf4890e3bde49594a5509ec4382985ef932e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:39:41.556468 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2716bfac78a07897ca6ca3f719563cffbb500a45bc197a3ded1355f18adb1e13-shm.mount: Deactivated successfully. Dec 13 01:39:41.561265 systemd[1]: cri-containerd-2716bfac78a07897ca6ca3f719563cffbb500a45bc197a3ded1355f18adb1e13.scope: Deactivated successfully. Dec 13 01:39:41.580752 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2716bfac78a07897ca6ca3f719563cffbb500a45bc197a3ded1355f18adb1e13-rootfs.mount: Deactivated successfully. 
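The teardown above follows the CRI stop semantics: StopContainer is issued with a 2 s timeout, the runtime first delivers SIGTERM ("Stop container ... with signal terminated"), and only kills the container if it is still running when the grace period expires; here the cilium-agent exits in time and its scope is simply deactivated. The following is a rough, hypothetical sketch of that stop-with-timeout pattern applied to an ordinary child process, not containerd's actual code path.

import signal
import subprocess

def stop_with_timeout(proc: subprocess.Popen, timeout: float = 2.0) -> int:
    # Ask the process to exit with SIGTERM; escalate to SIGKILL only if it
    # is still alive once the grace period has passed.
    proc.send_signal(signal.SIGTERM)
    try:
        return proc.wait(timeout=timeout)
    except subprocess.TimeoutExpired:
        proc.kill()
        return proc.wait()

# sleep(60) exits promptly on SIGTERM, so the 2 s grace period is enough here.
child = subprocess.Popen(["sleep", "60"])
print("exit status:", stop_with_timeout(child))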
Dec 13 01:39:41.641061 containerd[1476]: time="2024-12-13T01:39:41.640861480Z" level=info msg="shim disconnected" id=2716bfac78a07897ca6ca3f719563cffbb500a45bc197a3ded1355f18adb1e13 namespace=k8s.io Dec 13 01:39:41.641061 containerd[1476]: time="2024-12-13T01:39:41.640923989Z" level=warning msg="cleaning up after shim disconnected" id=2716bfac78a07897ca6ca3f719563cffbb500a45bc197a3ded1355f18adb1e13 namespace=k8s.io Dec 13 01:39:41.641061 containerd[1476]: time="2024-12-13T01:39:41.640933527Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:39:41.655474 containerd[1476]: time="2024-12-13T01:39:41.655418431Z" level=info msg="TearDown network for sandbox \"2716bfac78a07897ca6ca3f719563cffbb500a45bc197a3ded1355f18adb1e13\" successfully" Dec 13 01:39:41.655474 containerd[1476]: time="2024-12-13T01:39:41.655455611Z" level=info msg="StopPodSandbox for \"2716bfac78a07897ca6ca3f719563cffbb500a45bc197a3ded1355f18adb1e13\" returns successfully" Dec 13 01:39:41.752950 kubelet[1779]: I1213 01:39:41.752882 1779 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/64dcec63-1870-49dc-96fc-07ccc1fe4fbe-host-proc-sys-kernel\") pod \"64dcec63-1870-49dc-96fc-07ccc1fe4fbe\" (UID: \"64dcec63-1870-49dc-96fc-07ccc1fe4fbe\") " Dec 13 01:39:41.752950 kubelet[1779]: I1213 01:39:41.752897 1779 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/64dcec63-1870-49dc-96fc-07ccc1fe4fbe-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "64dcec63-1870-49dc-96fc-07ccc1fe4fbe" (UID: "64dcec63-1870-49dc-96fc-07ccc1fe4fbe"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:39:41.752950 kubelet[1779]: I1213 01:39:41.752955 1779 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/64dcec63-1870-49dc-96fc-07ccc1fe4fbe-lib-modules\") pod \"64dcec63-1870-49dc-96fc-07ccc1fe4fbe\" (UID: \"64dcec63-1870-49dc-96fc-07ccc1fe4fbe\") " Dec 13 01:39:41.752950 kubelet[1779]: I1213 01:39:41.752973 1779 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/64dcec63-1870-49dc-96fc-07ccc1fe4fbe-hostproc\") pod \"64dcec63-1870-49dc-96fc-07ccc1fe4fbe\" (UID: \"64dcec63-1870-49dc-96fc-07ccc1fe4fbe\") " Dec 13 01:39:41.752950 kubelet[1779]: I1213 01:39:41.752988 1779 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/64dcec63-1870-49dc-96fc-07ccc1fe4fbe-bpf-maps\") pod \"64dcec63-1870-49dc-96fc-07ccc1fe4fbe\" (UID: \"64dcec63-1870-49dc-96fc-07ccc1fe4fbe\") " Dec 13 01:39:41.753323 kubelet[1779]: I1213 01:39:41.753010 1779 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/64dcec63-1870-49dc-96fc-07ccc1fe4fbe-clustermesh-secrets\") pod \"64dcec63-1870-49dc-96fc-07ccc1fe4fbe\" (UID: \"64dcec63-1870-49dc-96fc-07ccc1fe4fbe\") " Dec 13 01:39:41.753323 kubelet[1779]: I1213 01:39:41.753024 1779 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/64dcec63-1870-49dc-96fc-07ccc1fe4fbe-cilium-run\") pod \"64dcec63-1870-49dc-96fc-07ccc1fe4fbe\" (UID: \"64dcec63-1870-49dc-96fc-07ccc1fe4fbe\") " Dec 13 01:39:41.753323 kubelet[1779]: I1213 
01:39:41.753037 1779 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/64dcec63-1870-49dc-96fc-07ccc1fe4fbe-cilium-cgroup\") pod \"64dcec63-1870-49dc-96fc-07ccc1fe4fbe\" (UID: \"64dcec63-1870-49dc-96fc-07ccc1fe4fbe\") " Dec 13 01:39:41.753323 kubelet[1779]: I1213 01:39:41.753054 1779 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/64dcec63-1870-49dc-96fc-07ccc1fe4fbe-cni-path\") pod \"64dcec63-1870-49dc-96fc-07ccc1fe4fbe\" (UID: \"64dcec63-1870-49dc-96fc-07ccc1fe4fbe\") " Dec 13 01:39:41.753323 kubelet[1779]: I1213 01:39:41.753067 1779 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/64dcec63-1870-49dc-96fc-07ccc1fe4fbe-xtables-lock\") pod \"64dcec63-1870-49dc-96fc-07ccc1fe4fbe\" (UID: \"64dcec63-1870-49dc-96fc-07ccc1fe4fbe\") " Dec 13 01:39:41.753323 kubelet[1779]: I1213 01:39:41.753085 1779 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/64dcec63-1870-49dc-96fc-07ccc1fe4fbe-hubble-tls\") pod \"64dcec63-1870-49dc-96fc-07ccc1fe4fbe\" (UID: \"64dcec63-1870-49dc-96fc-07ccc1fe4fbe\") " Dec 13 01:39:41.753522 kubelet[1779]: I1213 01:39:41.753099 1779 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68g59\" (UniqueName: \"kubernetes.io/projected/64dcec63-1870-49dc-96fc-07ccc1fe4fbe-kube-api-access-68g59\") pod \"64dcec63-1870-49dc-96fc-07ccc1fe4fbe\" (UID: \"64dcec63-1870-49dc-96fc-07ccc1fe4fbe\") " Dec 13 01:39:41.753522 kubelet[1779]: I1213 01:39:41.753090 1779 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/64dcec63-1870-49dc-96fc-07ccc1fe4fbe-hostproc" (OuterVolumeSpecName: "hostproc") pod "64dcec63-1870-49dc-96fc-07ccc1fe4fbe" (UID: "64dcec63-1870-49dc-96fc-07ccc1fe4fbe"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:39:41.753522 kubelet[1779]: I1213 01:39:41.753116 1779 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/64dcec63-1870-49dc-96fc-07ccc1fe4fbe-cilium-config-path\") pod \"64dcec63-1870-49dc-96fc-07ccc1fe4fbe\" (UID: \"64dcec63-1870-49dc-96fc-07ccc1fe4fbe\") " Dec 13 01:39:41.753522 kubelet[1779]: I1213 01:39:41.753216 1779 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/64dcec63-1870-49dc-96fc-07ccc1fe4fbe-etc-cni-netd\") pod \"64dcec63-1870-49dc-96fc-07ccc1fe4fbe\" (UID: \"64dcec63-1870-49dc-96fc-07ccc1fe4fbe\") " Dec 13 01:39:41.753522 kubelet[1779]: I1213 01:39:41.753248 1779 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/64dcec63-1870-49dc-96fc-07ccc1fe4fbe-host-proc-sys-net\") pod \"64dcec63-1870-49dc-96fc-07ccc1fe4fbe\" (UID: \"64dcec63-1870-49dc-96fc-07ccc1fe4fbe\") " Dec 13 01:39:41.753522 kubelet[1779]: I1213 01:39:41.753291 1779 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/64dcec63-1870-49dc-96fc-07ccc1fe4fbe-hostproc\") on node \"10.0.0.125\" DevicePath \"\"" Dec 13 01:39:41.753725 kubelet[1779]: I1213 01:39:41.753306 1779 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/64dcec63-1870-49dc-96fc-07ccc1fe4fbe-host-proc-sys-kernel\") on node \"10.0.0.125\" DevicePath \"\"" Dec 13 01:39:41.753725 kubelet[1779]: I1213 01:39:41.753335 1779 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/64dcec63-1870-49dc-96fc-07ccc1fe4fbe-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "64dcec63-1870-49dc-96fc-07ccc1fe4fbe" (UID: "64dcec63-1870-49dc-96fc-07ccc1fe4fbe"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:39:41.753725 kubelet[1779]: I1213 01:39:41.753358 1779 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/64dcec63-1870-49dc-96fc-07ccc1fe4fbe-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "64dcec63-1870-49dc-96fc-07ccc1fe4fbe" (UID: "64dcec63-1870-49dc-96fc-07ccc1fe4fbe"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:39:41.753725 kubelet[1779]: I1213 01:39:41.753378 1779 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/64dcec63-1870-49dc-96fc-07ccc1fe4fbe-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "64dcec63-1870-49dc-96fc-07ccc1fe4fbe" (UID: "64dcec63-1870-49dc-96fc-07ccc1fe4fbe"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:39:41.753725 kubelet[1779]: I1213 01:39:41.753399 1779 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/64dcec63-1870-49dc-96fc-07ccc1fe4fbe-cni-path" (OuterVolumeSpecName: "cni-path") pod "64dcec63-1870-49dc-96fc-07ccc1fe4fbe" (UID: "64dcec63-1870-49dc-96fc-07ccc1fe4fbe"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:39:41.753872 kubelet[1779]: I1213 01:39:41.753419 1779 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/64dcec63-1870-49dc-96fc-07ccc1fe4fbe-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "64dcec63-1870-49dc-96fc-07ccc1fe4fbe" (UID: "64dcec63-1870-49dc-96fc-07ccc1fe4fbe"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:39:41.753872 kubelet[1779]: I1213 01:39:41.753442 1779 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/64dcec63-1870-49dc-96fc-07ccc1fe4fbe-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "64dcec63-1870-49dc-96fc-07ccc1fe4fbe" (UID: "64dcec63-1870-49dc-96fc-07ccc1fe4fbe"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:39:41.753872 kubelet[1779]: I1213 01:39:41.753472 1779 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/64dcec63-1870-49dc-96fc-07ccc1fe4fbe-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "64dcec63-1870-49dc-96fc-07ccc1fe4fbe" (UID: "64dcec63-1870-49dc-96fc-07ccc1fe4fbe"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:39:41.755281 kubelet[1779]: I1213 01:39:41.755252 1779 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/64dcec63-1870-49dc-96fc-07ccc1fe4fbe-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "64dcec63-1870-49dc-96fc-07ccc1fe4fbe" (UID: "64dcec63-1870-49dc-96fc-07ccc1fe4fbe"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:39:41.757405 kubelet[1779]: I1213 01:39:41.756780 1779 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64dcec63-1870-49dc-96fc-07ccc1fe4fbe-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "64dcec63-1870-49dc-96fc-07ccc1fe4fbe" (UID: "64dcec63-1870-49dc-96fc-07ccc1fe4fbe"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 01:39:41.757467 kubelet[1779]: I1213 01:39:41.757419 1779 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64dcec63-1870-49dc-96fc-07ccc1fe4fbe-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "64dcec63-1870-49dc-96fc-07ccc1fe4fbe" (UID: "64dcec63-1870-49dc-96fc-07ccc1fe4fbe"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 01:39:41.757656 kubelet[1779]: I1213 01:39:41.757607 1779 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64dcec63-1870-49dc-96fc-07ccc1fe4fbe-kube-api-access-68g59" (OuterVolumeSpecName: "kube-api-access-68g59") pod "64dcec63-1870-49dc-96fc-07ccc1fe4fbe" (UID: "64dcec63-1870-49dc-96fc-07ccc1fe4fbe"). InnerVolumeSpecName "kube-api-access-68g59". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 01:39:41.757899 systemd[1]: var-lib-kubelet-pods-64dcec63\x2d1870\x2d49dc\x2d96fc\x2d07ccc1fe4fbe-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Dec 13 01:39:41.758758 kubelet[1779]: I1213 01:39:41.758693 1779 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64dcec63-1870-49dc-96fc-07ccc1fe4fbe-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "64dcec63-1870-49dc-96fc-07ccc1fe4fbe" (UID: "64dcec63-1870-49dc-96fc-07ccc1fe4fbe"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 01:39:41.787277 kubelet[1779]: E1213 01:39:41.787206 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:39:41.853893 kubelet[1779]: I1213 01:39:41.853817 1779 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/64dcec63-1870-49dc-96fc-07ccc1fe4fbe-clustermesh-secrets\") on node \"10.0.0.125\" DevicePath \"\"" Dec 13 01:39:41.853893 kubelet[1779]: I1213 01:39:41.853858 1779 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/64dcec63-1870-49dc-96fc-07ccc1fe4fbe-cilium-run\") on node \"10.0.0.125\" DevicePath \"\"" Dec 13 01:39:41.853893 kubelet[1779]: I1213 01:39:41.853886 1779 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/64dcec63-1870-49dc-96fc-07ccc1fe4fbe-cilium-cgroup\") on node \"10.0.0.125\" DevicePath \"\"" Dec 13 01:39:41.853893 kubelet[1779]: I1213 01:39:41.853894 1779 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/64dcec63-1870-49dc-96fc-07ccc1fe4fbe-bpf-maps\") on node \"10.0.0.125\" DevicePath \"\"" Dec 13 01:39:41.853893 kubelet[1779]: I1213 01:39:41.853902 1779 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/64dcec63-1870-49dc-96fc-07ccc1fe4fbe-cni-path\") on node \"10.0.0.125\" DevicePath \"\"" Dec 13 01:39:41.853893 kubelet[1779]: I1213 01:39:41.853910 1779 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/64dcec63-1870-49dc-96fc-07ccc1fe4fbe-xtables-lock\") on node \"10.0.0.125\" DevicePath \"\"" Dec 13 01:39:41.853893 kubelet[1779]: I1213 01:39:41.853919 1779 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/64dcec63-1870-49dc-96fc-07ccc1fe4fbe-hubble-tls\") on node \"10.0.0.125\" DevicePath \"\"" Dec 13 01:39:41.853893 kubelet[1779]: I1213 01:39:41.853927 1779 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-68g59\" (UniqueName: \"kubernetes.io/projected/64dcec63-1870-49dc-96fc-07ccc1fe4fbe-kube-api-access-68g59\") on node \"10.0.0.125\" DevicePath \"\"" Dec 13 01:39:41.854443 kubelet[1779]: I1213 01:39:41.853936 1779 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/64dcec63-1870-49dc-96fc-07ccc1fe4fbe-cilium-config-path\") on node \"10.0.0.125\" DevicePath \"\"" Dec 13 01:39:41.854443 kubelet[1779]: I1213 01:39:41.853946 1779 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/64dcec63-1870-49dc-96fc-07ccc1fe4fbe-etc-cni-netd\") on node \"10.0.0.125\" DevicePath \"\"" Dec 13 01:39:41.854443 kubelet[1779]: I1213 01:39:41.853953 1779 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/64dcec63-1870-49dc-96fc-07ccc1fe4fbe-host-proc-sys-net\") on node 
\"10.0.0.125\" DevicePath \"\"" Dec 13 01:39:41.854443 kubelet[1779]: I1213 01:39:41.853961 1779 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/64dcec63-1870-49dc-96fc-07ccc1fe4fbe-lib-modules\") on node \"10.0.0.125\" DevicePath \"\"" Dec 13 01:39:42.044881 systemd[1]: var-lib-kubelet-pods-64dcec63\x2d1870\x2d49dc\x2d96fc\x2d07ccc1fe4fbe-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d68g59.mount: Deactivated successfully. Dec 13 01:39:42.045036 systemd[1]: var-lib-kubelet-pods-64dcec63\x2d1870\x2d49dc\x2d96fc\x2d07ccc1fe4fbe-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 01:39:42.188069 kubelet[1779]: I1213 01:39:42.188035 1779 scope.go:117] "RemoveContainer" containerID="898cf1cfdb97eadc91cd6ef4ee7739ef467958616b17da8398dd348549422dfb" Dec 13 01:39:42.189122 containerd[1476]: time="2024-12-13T01:39:42.189073597Z" level=info msg="RemoveContainer for \"898cf1cfdb97eadc91cd6ef4ee7739ef467958616b17da8398dd348549422dfb\"" Dec 13 01:39:42.193266 systemd[1]: Removed slice kubepods-burstable-pod64dcec63_1870_49dc_96fc_07ccc1fe4fbe.slice - libcontainer container kubepods-burstable-pod64dcec63_1870_49dc_96fc_07ccc1fe4fbe.slice. Dec 13 01:39:42.193440 systemd[1]: kubepods-burstable-pod64dcec63_1870_49dc_96fc_07ccc1fe4fbe.slice: Consumed 7.748s CPU time. Dec 13 01:39:42.304429 containerd[1476]: time="2024-12-13T01:39:42.304300069Z" level=info msg="RemoveContainer for \"898cf1cfdb97eadc91cd6ef4ee7739ef467958616b17da8398dd348549422dfb\" returns successfully" Dec 13 01:39:42.304806 kubelet[1779]: I1213 01:39:42.304629 1779 scope.go:117] "RemoveContainer" containerID="d7f734890afb2d5545fef88201a5d3ece1eecaa8c08b6f5826fe67d7b231d90e" Dec 13 01:39:42.305690 containerd[1476]: time="2024-12-13T01:39:42.305640337Z" level=info msg="RemoveContainer for \"d7f734890afb2d5545fef88201a5d3ece1eecaa8c08b6f5826fe67d7b231d90e\"" Dec 13 01:39:42.451419 containerd[1476]: time="2024-12-13T01:39:42.451362908Z" level=info msg="RemoveContainer for \"d7f734890afb2d5545fef88201a5d3ece1eecaa8c08b6f5826fe67d7b231d90e\" returns successfully" Dec 13 01:39:42.451745 kubelet[1779]: I1213 01:39:42.451710 1779 scope.go:117] "RemoveContainer" containerID="2baf5f9222186f85f086666e985440f3ca96f8ae5b7c059a157bd6e604d1ed49" Dec 13 01:39:42.453190 containerd[1476]: time="2024-12-13T01:39:42.453118817Z" level=info msg="RemoveContainer for \"2baf5f9222186f85f086666e985440f3ca96f8ae5b7c059a157bd6e604d1ed49\"" Dec 13 01:39:42.731156 containerd[1476]: time="2024-12-13T01:39:42.731093043Z" level=info msg="RemoveContainer for \"2baf5f9222186f85f086666e985440f3ca96f8ae5b7c059a157bd6e604d1ed49\" returns successfully" Dec 13 01:39:42.731443 kubelet[1779]: I1213 01:39:42.731415 1779 scope.go:117] "RemoveContainer" containerID="b3e68e8cc21f52afa20c1e51b1f657058a2f4e60c6dd10c54b2a3cdc0e8adac1" Dec 13 01:39:42.732810 containerd[1476]: time="2024-12-13T01:39:42.732787225Z" level=info msg="RemoveContainer for \"b3e68e8cc21f52afa20c1e51b1f657058a2f4e60c6dd10c54b2a3cdc0e8adac1\"" Dec 13 01:39:42.738500 kubelet[1779]: E1213 01:39:42.738464 1779 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 01:39:42.787574 kubelet[1779]: E1213 01:39:42.787508 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:39:42.806861 containerd[1476]: 
time="2024-12-13T01:39:42.806815583Z" level=info msg="RemoveContainer for \"b3e68e8cc21f52afa20c1e51b1f657058a2f4e60c6dd10c54b2a3cdc0e8adac1\" returns successfully" Dec 13 01:39:42.807121 kubelet[1779]: I1213 01:39:42.807098 1779 scope.go:117] "RemoveContainer" containerID="8bc9f68ae44adb1870d5a275b9dacf4890e3bde49594a5509ec4382985ef932e" Dec 13 01:39:42.808352 containerd[1476]: time="2024-12-13T01:39:42.808311774Z" level=info msg="RemoveContainer for \"8bc9f68ae44adb1870d5a275b9dacf4890e3bde49594a5509ec4382985ef932e\"" Dec 13 01:39:42.822727 containerd[1476]: time="2024-12-13T01:39:42.822671590Z" level=info msg="RemoveContainer for \"8bc9f68ae44adb1870d5a275b9dacf4890e3bde49594a5509ec4382985ef932e\" returns successfully" Dec 13 01:39:42.823030 kubelet[1779]: I1213 01:39:42.822935 1779 scope.go:117] "RemoveContainer" containerID="898cf1cfdb97eadc91cd6ef4ee7739ef467958616b17da8398dd348549422dfb" Dec 13 01:39:42.823312 containerd[1476]: time="2024-12-13T01:39:42.823244767Z" level=error msg="ContainerStatus for \"898cf1cfdb97eadc91cd6ef4ee7739ef467958616b17da8398dd348549422dfb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"898cf1cfdb97eadc91cd6ef4ee7739ef467958616b17da8398dd348549422dfb\": not found" Dec 13 01:39:42.823462 kubelet[1779]: E1213 01:39:42.823427 1779 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"898cf1cfdb97eadc91cd6ef4ee7739ef467958616b17da8398dd348549422dfb\": not found" containerID="898cf1cfdb97eadc91cd6ef4ee7739ef467958616b17da8398dd348549422dfb" Dec 13 01:39:42.823569 kubelet[1779]: I1213 01:39:42.823472 1779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"898cf1cfdb97eadc91cd6ef4ee7739ef467958616b17da8398dd348549422dfb"} err="failed to get container status \"898cf1cfdb97eadc91cd6ef4ee7739ef467958616b17da8398dd348549422dfb\": rpc error: code = NotFound desc = an error occurred when try to find container \"898cf1cfdb97eadc91cd6ef4ee7739ef467958616b17da8398dd348549422dfb\": not found" Dec 13 01:39:42.823594 kubelet[1779]: I1213 01:39:42.823569 1779 scope.go:117] "RemoveContainer" containerID="d7f734890afb2d5545fef88201a5d3ece1eecaa8c08b6f5826fe67d7b231d90e" Dec 13 01:39:42.823774 containerd[1476]: time="2024-12-13T01:39:42.823740558Z" level=error msg="ContainerStatus for \"d7f734890afb2d5545fef88201a5d3ece1eecaa8c08b6f5826fe67d7b231d90e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d7f734890afb2d5545fef88201a5d3ece1eecaa8c08b6f5826fe67d7b231d90e\": not found" Dec 13 01:39:42.823854 kubelet[1779]: E1213 01:39:42.823835 1779 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d7f734890afb2d5545fef88201a5d3ece1eecaa8c08b6f5826fe67d7b231d90e\": not found" containerID="d7f734890afb2d5545fef88201a5d3ece1eecaa8c08b6f5826fe67d7b231d90e" Dec 13 01:39:42.823899 kubelet[1779]: I1213 01:39:42.823850 1779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d7f734890afb2d5545fef88201a5d3ece1eecaa8c08b6f5826fe67d7b231d90e"} err="failed to get container status \"d7f734890afb2d5545fef88201a5d3ece1eecaa8c08b6f5826fe67d7b231d90e\": rpc error: code = NotFound desc = an error occurred when try to find container \"d7f734890afb2d5545fef88201a5d3ece1eecaa8c08b6f5826fe67d7b231d90e\": not 
found" Dec 13 01:39:42.823899 kubelet[1779]: I1213 01:39:42.823863 1779 scope.go:117] "RemoveContainer" containerID="2baf5f9222186f85f086666e985440f3ca96f8ae5b7c059a157bd6e604d1ed49" Dec 13 01:39:42.824074 containerd[1476]: time="2024-12-13T01:39:42.824042585Z" level=error msg="ContainerStatus for \"2baf5f9222186f85f086666e985440f3ca96f8ae5b7c059a157bd6e604d1ed49\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2baf5f9222186f85f086666e985440f3ca96f8ae5b7c059a157bd6e604d1ed49\": not found" Dec 13 01:39:42.824193 kubelet[1779]: E1213 01:39:42.824169 1779 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2baf5f9222186f85f086666e985440f3ca96f8ae5b7c059a157bd6e604d1ed49\": not found" containerID="2baf5f9222186f85f086666e985440f3ca96f8ae5b7c059a157bd6e604d1ed49" Dec 13 01:39:42.824248 kubelet[1779]: I1213 01:39:42.824189 1779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2baf5f9222186f85f086666e985440f3ca96f8ae5b7c059a157bd6e604d1ed49"} err="failed to get container status \"2baf5f9222186f85f086666e985440f3ca96f8ae5b7c059a157bd6e604d1ed49\": rpc error: code = NotFound desc = an error occurred when try to find container \"2baf5f9222186f85f086666e985440f3ca96f8ae5b7c059a157bd6e604d1ed49\": not found" Dec 13 01:39:42.824248 kubelet[1779]: I1213 01:39:42.824205 1779 scope.go:117] "RemoveContainer" containerID="b3e68e8cc21f52afa20c1e51b1f657058a2f4e60c6dd10c54b2a3cdc0e8adac1" Dec 13 01:39:42.824392 containerd[1476]: time="2024-12-13T01:39:42.824351776Z" level=error msg="ContainerStatus for \"b3e68e8cc21f52afa20c1e51b1f657058a2f4e60c6dd10c54b2a3cdc0e8adac1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b3e68e8cc21f52afa20c1e51b1f657058a2f4e60c6dd10c54b2a3cdc0e8adac1\": not found" Dec 13 01:39:42.824468 kubelet[1779]: E1213 01:39:42.824428 1779 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b3e68e8cc21f52afa20c1e51b1f657058a2f4e60c6dd10c54b2a3cdc0e8adac1\": not found" containerID="b3e68e8cc21f52afa20c1e51b1f657058a2f4e60c6dd10c54b2a3cdc0e8adac1" Dec 13 01:39:42.824511 kubelet[1779]: I1213 01:39:42.824464 1779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b3e68e8cc21f52afa20c1e51b1f657058a2f4e60c6dd10c54b2a3cdc0e8adac1"} err="failed to get container status \"b3e68e8cc21f52afa20c1e51b1f657058a2f4e60c6dd10c54b2a3cdc0e8adac1\": rpc error: code = NotFound desc = an error occurred when try to find container \"b3e68e8cc21f52afa20c1e51b1f657058a2f4e60c6dd10c54b2a3cdc0e8adac1\": not found" Dec 13 01:39:42.824511 kubelet[1779]: I1213 01:39:42.824483 1779 scope.go:117] "RemoveContainer" containerID="8bc9f68ae44adb1870d5a275b9dacf4890e3bde49594a5509ec4382985ef932e" Dec 13 01:39:42.824678 containerd[1476]: time="2024-12-13T01:39:42.824632834Z" level=error msg="ContainerStatus for \"8bc9f68ae44adb1870d5a275b9dacf4890e3bde49594a5509ec4382985ef932e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8bc9f68ae44adb1870d5a275b9dacf4890e3bde49594a5509ec4382985ef932e\": not found" Dec 13 01:39:42.824786 kubelet[1779]: E1213 01:39:42.824760 1779 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error 
occurred when try to find container \"8bc9f68ae44adb1870d5a275b9dacf4890e3bde49594a5509ec4382985ef932e\": not found" containerID="8bc9f68ae44adb1870d5a275b9dacf4890e3bde49594a5509ec4382985ef932e" Dec 13 01:39:42.824828 kubelet[1779]: I1213 01:39:42.824785 1779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8bc9f68ae44adb1870d5a275b9dacf4890e3bde49594a5509ec4382985ef932e"} err="failed to get container status \"8bc9f68ae44adb1870d5a275b9dacf4890e3bde49594a5509ec4382985ef932e\": rpc error: code = NotFound desc = an error occurred when try to find container \"8bc9f68ae44adb1870d5a275b9dacf4890e3bde49594a5509ec4382985ef932e\": not found" Dec 13 01:39:43.436722 kubelet[1779]: I1213 01:39:43.436650 1779 topology_manager.go:215] "Topology Admit Handler" podUID="c63c72be-9ddd-400e-a603-508ddaea9376" podNamespace="kube-system" podName="cilium-operator-599987898-qz99c" Dec 13 01:39:43.436917 kubelet[1779]: E1213 01:39:43.436742 1779 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="64dcec63-1870-49dc-96fc-07ccc1fe4fbe" containerName="apply-sysctl-overwrites" Dec 13 01:39:43.436917 kubelet[1779]: E1213 01:39:43.436758 1779 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="64dcec63-1870-49dc-96fc-07ccc1fe4fbe" containerName="clean-cilium-state" Dec 13 01:39:43.436917 kubelet[1779]: E1213 01:39:43.436770 1779 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="64dcec63-1870-49dc-96fc-07ccc1fe4fbe" containerName="mount-cgroup" Dec 13 01:39:43.436917 kubelet[1779]: E1213 01:39:43.436779 1779 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="64dcec63-1870-49dc-96fc-07ccc1fe4fbe" containerName="mount-bpf-fs" Dec 13 01:39:43.436917 kubelet[1779]: E1213 01:39:43.436788 1779 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="64dcec63-1870-49dc-96fc-07ccc1fe4fbe" containerName="cilium-agent" Dec 13 01:39:43.436917 kubelet[1779]: I1213 01:39:43.436816 1779 memory_manager.go:354] "RemoveStaleState removing state" podUID="64dcec63-1870-49dc-96fc-07ccc1fe4fbe" containerName="cilium-agent" Dec 13 01:39:43.441152 kubelet[1779]: I1213 01:39:43.440592 1779 topology_manager.go:215] "Topology Admit Handler" podUID="ce395074-b427-4b04-8597-fe8ce9523ec8" podNamespace="kube-system" podName="cilium-4mjb9" Dec 13 01:39:43.445871 systemd[1]: Created slice kubepods-besteffort-podc63c72be_9ddd_400e_a603_508ddaea9376.slice - libcontainer container kubepods-besteffort-podc63c72be_9ddd_400e_a603_508ddaea9376.slice. Dec 13 01:39:43.452927 systemd[1]: Created slice kubepods-burstable-podce395074_b427_4b04_8597_fe8ce9523ec8.slice - libcontainer container kubepods-burstable-podce395074_b427_4b04_8597_fe8ce9523ec8.slice. 
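The two slices created above follow the pod cgroup naming pattern visible throughout this log: kubepods-<qos class>-pod<pod UID>.slice, with besteffort for test-pod-1 earlier and burstable for cilium-4mjb9 here; dashes in a slice name express nesting under a parent slice, which is presumably why the UID's dashes appear as underscores. A tiny illustrative sketch that rebuilds the names seen in these entries:

def pod_slice_name(pod_uid: str, qos_class: str) -> str:
    # Rebuild the slice name as it appears in the journal: QoS class plus the
    # pod UID with its dashes replaced by underscores.
    return f"kubepods-{qos_class}-pod{pod_uid.replace('-', '_')}.slice"

print(pod_slice_name("d5b394a8-2634-4b55-9780-98a9968c15dd", "besteffort"))
print(pod_slice_name("ce395074-b427-4b04-8597-fe8ce9523ec8", "burstable"))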
Dec 13 01:39:43.563453 kubelet[1779]: I1213 01:39:43.563391 1779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ce395074-b427-4b04-8597-fe8ce9523ec8-xtables-lock\") pod \"cilium-4mjb9\" (UID: \"ce395074-b427-4b04-8597-fe8ce9523ec8\") " pod="kube-system/cilium-4mjb9" Dec 13 01:39:43.563453 kubelet[1779]: I1213 01:39:43.563442 1779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ce395074-b427-4b04-8597-fe8ce9523ec8-host-proc-sys-net\") pod \"cilium-4mjb9\" (UID: \"ce395074-b427-4b04-8597-fe8ce9523ec8\") " pod="kube-system/cilium-4mjb9" Dec 13 01:39:43.563453 kubelet[1779]: I1213 01:39:43.563460 1779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29slz\" (UniqueName: \"kubernetes.io/projected/ce395074-b427-4b04-8597-fe8ce9523ec8-kube-api-access-29slz\") pod \"cilium-4mjb9\" (UID: \"ce395074-b427-4b04-8597-fe8ce9523ec8\") " pod="kube-system/cilium-4mjb9" Dec 13 01:39:43.563716 kubelet[1779]: I1213 01:39:43.563503 1779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ce395074-b427-4b04-8597-fe8ce9523ec8-cni-path\") pod \"cilium-4mjb9\" (UID: \"ce395074-b427-4b04-8597-fe8ce9523ec8\") " pod="kube-system/cilium-4mjb9" Dec 13 01:39:43.563716 kubelet[1779]: I1213 01:39:43.563555 1779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ce395074-b427-4b04-8597-fe8ce9523ec8-lib-modules\") pod \"cilium-4mjb9\" (UID: \"ce395074-b427-4b04-8597-fe8ce9523ec8\") " pod="kube-system/cilium-4mjb9" Dec 13 01:39:43.563716 kubelet[1779]: I1213 01:39:43.563581 1779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ce395074-b427-4b04-8597-fe8ce9523ec8-etc-cni-netd\") pod \"cilium-4mjb9\" (UID: \"ce395074-b427-4b04-8597-fe8ce9523ec8\") " pod="kube-system/cilium-4mjb9" Dec 13 01:39:43.563716 kubelet[1779]: I1213 01:39:43.563608 1779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ce395074-b427-4b04-8597-fe8ce9523ec8-cilium-config-path\") pod \"cilium-4mjb9\" (UID: \"ce395074-b427-4b04-8597-fe8ce9523ec8\") " pod="kube-system/cilium-4mjb9" Dec 13 01:39:43.563716 kubelet[1779]: I1213 01:39:43.563668 1779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77n4m\" (UniqueName: \"kubernetes.io/projected/c63c72be-9ddd-400e-a603-508ddaea9376-kube-api-access-77n4m\") pod \"cilium-operator-599987898-qz99c\" (UID: \"c63c72be-9ddd-400e-a603-508ddaea9376\") " pod="kube-system/cilium-operator-599987898-qz99c" Dec 13 01:39:43.563840 kubelet[1779]: I1213 01:39:43.563688 1779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ce395074-b427-4b04-8597-fe8ce9523ec8-bpf-maps\") pod \"cilium-4mjb9\" (UID: \"ce395074-b427-4b04-8597-fe8ce9523ec8\") " pod="kube-system/cilium-4mjb9" Dec 13 01:39:43.563840 kubelet[1779]: I1213 01:39:43.563704 1779 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ce395074-b427-4b04-8597-fe8ce9523ec8-hostproc\") pod \"cilium-4mjb9\" (UID: \"ce395074-b427-4b04-8597-fe8ce9523ec8\") " pod="kube-system/cilium-4mjb9" Dec 13 01:39:43.563840 kubelet[1779]: I1213 01:39:43.563717 1779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ce395074-b427-4b04-8597-fe8ce9523ec8-cilium-cgroup\") pod \"cilium-4mjb9\" (UID: \"ce395074-b427-4b04-8597-fe8ce9523ec8\") " pod="kube-system/cilium-4mjb9" Dec 13 01:39:43.563840 kubelet[1779]: I1213 01:39:43.563730 1779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ce395074-b427-4b04-8597-fe8ce9523ec8-cilium-ipsec-secrets\") pod \"cilium-4mjb9\" (UID: \"ce395074-b427-4b04-8597-fe8ce9523ec8\") " pod="kube-system/cilium-4mjb9" Dec 13 01:39:43.563840 kubelet[1779]: I1213 01:39:43.563745 1779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ce395074-b427-4b04-8597-fe8ce9523ec8-host-proc-sys-kernel\") pod \"cilium-4mjb9\" (UID: \"ce395074-b427-4b04-8597-fe8ce9523ec8\") " pod="kube-system/cilium-4mjb9" Dec 13 01:39:43.564023 kubelet[1779]: I1213 01:39:43.563773 1779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c63c72be-9ddd-400e-a603-508ddaea9376-cilium-config-path\") pod \"cilium-operator-599987898-qz99c\" (UID: \"c63c72be-9ddd-400e-a603-508ddaea9376\") " pod="kube-system/cilium-operator-599987898-qz99c" Dec 13 01:39:43.564023 kubelet[1779]: I1213 01:39:43.563788 1779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ce395074-b427-4b04-8597-fe8ce9523ec8-cilium-run\") pod \"cilium-4mjb9\" (UID: \"ce395074-b427-4b04-8597-fe8ce9523ec8\") " pod="kube-system/cilium-4mjb9" Dec 13 01:39:43.564023 kubelet[1779]: I1213 01:39:43.563802 1779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ce395074-b427-4b04-8597-fe8ce9523ec8-hubble-tls\") pod \"cilium-4mjb9\" (UID: \"ce395074-b427-4b04-8597-fe8ce9523ec8\") " pod="kube-system/cilium-4mjb9" Dec 13 01:39:43.564023 kubelet[1779]: I1213 01:39:43.563815 1779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ce395074-b427-4b04-8597-fe8ce9523ec8-clustermesh-secrets\") pod \"cilium-4mjb9\" (UID: \"ce395074-b427-4b04-8597-fe8ce9523ec8\") " pod="kube-system/cilium-4mjb9" Dec 13 01:39:43.724509 kubelet[1779]: I1213 01:39:43.724397 1779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64dcec63-1870-49dc-96fc-07ccc1fe4fbe" path="/var/lib/kubelet/pods/64dcec63-1870-49dc-96fc-07ccc1fe4fbe/volumes" Dec 13 01:39:43.749505 kubelet[1779]: E1213 01:39:43.749456 1779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:39:43.750065 containerd[1476]: time="2024-12-13T01:39:43.749991351Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-operator-599987898-qz99c,Uid:c63c72be-9ddd-400e-a603-508ddaea9376,Namespace:kube-system,Attempt:0,}" Dec 13 01:39:43.766687 kubelet[1779]: E1213 01:39:43.766658 1779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:39:43.767178 containerd[1476]: time="2024-12-13T01:39:43.767126137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4mjb9,Uid:ce395074-b427-4b04-8597-fe8ce9523ec8,Namespace:kube-system,Attempt:0,}" Dec 13 01:39:43.788787 kubelet[1779]: E1213 01:39:43.788704 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:39:44.227126 containerd[1476]: time="2024-12-13T01:39:44.227001563Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:39:44.227126 containerd[1476]: time="2024-12-13T01:39:44.227090189Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:39:44.227126 containerd[1476]: time="2024-12-13T01:39:44.227107522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:39:44.227371 containerd[1476]: time="2024-12-13T01:39:44.227253086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:39:44.248456 systemd[1]: Started cri-containerd-dfddfd4e63df0d686ce2e1b303b5db7190249ab44604bb5be13dcb6f41cbd2e4.scope - libcontainer container dfddfd4e63df0d686ce2e1b303b5db7190249ab44604bb5be13dcb6f41cbd2e4. Dec 13 01:39:44.249956 containerd[1476]: time="2024-12-13T01:39:44.249775627Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:39:44.251410 containerd[1476]: time="2024-12-13T01:39:44.251170036Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:39:44.251410 containerd[1476]: time="2024-12-13T01:39:44.251209651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:39:44.251410 containerd[1476]: time="2024-12-13T01:39:44.251332671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:39:44.273785 systemd[1]: Started cri-containerd-25a330011af12af7862cbbed8ad523e7824597ac48bae3d187c350fd0be9e1e4.scope - libcontainer container 25a330011af12af7862cbbed8ad523e7824597ac48bae3d187c350fd0be9e1e4. 
Dec 13 01:39:44.296776 containerd[1476]: time="2024-12-13T01:39:44.296726390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-qz99c,Uid:c63c72be-9ddd-400e-a603-508ddaea9376,Namespace:kube-system,Attempt:0,} returns sandbox id \"dfddfd4e63df0d686ce2e1b303b5db7190249ab44604bb5be13dcb6f41cbd2e4\"" Dec 13 01:39:44.297619 kubelet[1779]: E1213 01:39:44.297552 1779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:39:44.299165 containerd[1476]: time="2024-12-13T01:39:44.299016841Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 01:39:44.305580 containerd[1476]: time="2024-12-13T01:39:44.305190386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4mjb9,Uid:ce395074-b427-4b04-8597-fe8ce9523ec8,Namespace:kube-system,Attempt:0,} returns sandbox id \"25a330011af12af7862cbbed8ad523e7824597ac48bae3d187c350fd0be9e1e4\"" Dec 13 01:39:44.306058 kubelet[1779]: E1213 01:39:44.306025 1779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:39:44.308781 containerd[1476]: time="2024-12-13T01:39:44.308735705Z" level=info msg="CreateContainer within sandbox \"25a330011af12af7862cbbed8ad523e7824597ac48bae3d187c350fd0be9e1e4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 01:39:44.749542 containerd[1476]: time="2024-12-13T01:39:44.749474544Z" level=info msg="CreateContainer within sandbox \"25a330011af12af7862cbbed8ad523e7824597ac48bae3d187c350fd0be9e1e4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7af02fb6b6fcea84e55a2f117680c006fb00c19f013ba4e7757dd89076974dcc\"" Dec 13 01:39:44.750029 containerd[1476]: time="2024-12-13T01:39:44.750010962Z" level=info msg="StartContainer for \"7af02fb6b6fcea84e55a2f117680c006fb00c19f013ba4e7757dd89076974dcc\"" Dec 13 01:39:44.784401 systemd[1]: Started cri-containerd-7af02fb6b6fcea84e55a2f117680c006fb00c19f013ba4e7757dd89076974dcc.scope - libcontainer container 7af02fb6b6fcea84e55a2f117680c006fb00c19f013ba4e7757dd89076974dcc. Dec 13 01:39:44.789020 kubelet[1779]: E1213 01:39:44.788991 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:39:44.815070 containerd[1476]: time="2024-12-13T01:39:44.814993151Z" level=info msg="StartContainer for \"7af02fb6b6fcea84e55a2f117680c006fb00c19f013ba4e7757dd89076974dcc\" returns successfully" Dec 13 01:39:44.824892 systemd[1]: cri-containerd-7af02fb6b6fcea84e55a2f117680c006fb00c19f013ba4e7757dd89076974dcc.scope: Deactivated successfully. Dec 13 01:39:44.851060 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7af02fb6b6fcea84e55a2f117680c006fb00c19f013ba4e7757dd89076974dcc-rootfs.mount: Deactivated successfully. 
Dec 13 01:39:44.873061 containerd[1476]: time="2024-12-13T01:39:44.872954013Z" level=info msg="shim disconnected" id=7af02fb6b6fcea84e55a2f117680c006fb00c19f013ba4e7757dd89076974dcc namespace=k8s.io Dec 13 01:39:44.873061 containerd[1476]: time="2024-12-13T01:39:44.873024054Z" level=warning msg="cleaning up after shim disconnected" id=7af02fb6b6fcea84e55a2f117680c006fb00c19f013ba4e7757dd89076974dcc namespace=k8s.io Dec 13 01:39:44.873061 containerd[1476]: time="2024-12-13T01:39:44.873038351Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:39:45.193956 kubelet[1779]: E1213 01:39:45.193918 1779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:39:45.195945 containerd[1476]: time="2024-12-13T01:39:45.195904651Z" level=info msg="CreateContainer within sandbox \"25a330011af12af7862cbbed8ad523e7824597ac48bae3d187c350fd0be9e1e4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 01:39:45.790036 kubelet[1779]: E1213 01:39:45.789964 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:39:45.905357 containerd[1476]: time="2024-12-13T01:39:45.905263615Z" level=info msg="CreateContainer within sandbox \"25a330011af12af7862cbbed8ad523e7824597ac48bae3d187c350fd0be9e1e4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3e837def9316c80179ce04e0312d3a22475de281f7ead91cf2742bf7ed76f5d6\"" Dec 13 01:39:45.906085 containerd[1476]: time="2024-12-13T01:39:45.906038099Z" level=info msg="StartContainer for \"3e837def9316c80179ce04e0312d3a22475de281f7ead91cf2742bf7ed76f5d6\"" Dec 13 01:39:45.949487 systemd[1]: Started cri-containerd-3e837def9316c80179ce04e0312d3a22475de281f7ead91cf2742bf7ed76f5d6.scope - libcontainer container 3e837def9316c80179ce04e0312d3a22475de281f7ead91cf2742bf7ed76f5d6. Dec 13 01:39:45.987676 containerd[1476]: time="2024-12-13T01:39:45.987576639Z" level=info msg="StartContainer for \"3e837def9316c80179ce04e0312d3a22475de281f7ead91cf2742bf7ed76f5d6\" returns successfully" Dec 13 01:39:45.992372 systemd[1]: cri-containerd-3e837def9316c80179ce04e0312d3a22475de281f7ead91cf2742bf7ed76f5d6.scope: Deactivated successfully. 
Dec 13 01:39:46.028041 containerd[1476]: time="2024-12-13T01:39:46.027960405Z" level=info msg="shim disconnected" id=3e837def9316c80179ce04e0312d3a22475de281f7ead91cf2742bf7ed76f5d6 namespace=k8s.io Dec 13 01:39:46.028041 containerd[1476]: time="2024-12-13T01:39:46.028030537Z" level=warning msg="cleaning up after shim disconnected" id=3e837def9316c80179ce04e0312d3a22475de281f7ead91cf2742bf7ed76f5d6 namespace=k8s.io Dec 13 01:39:46.028041 containerd[1476]: time="2024-12-13T01:39:46.028041297Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:39:46.199200 kubelet[1779]: E1213 01:39:46.199125 1779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:39:46.201100 containerd[1476]: time="2024-12-13T01:39:46.201037157Z" level=info msg="CreateContainer within sandbox \"25a330011af12af7862cbbed8ad523e7824597ac48bae3d187c350fd0be9e1e4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 01:39:46.220214 containerd[1476]: time="2024-12-13T01:39:46.220156480Z" level=info msg="CreateContainer within sandbox \"25a330011af12af7862cbbed8ad523e7824597ac48bae3d187c350fd0be9e1e4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"83589a63c511ace557d91c6739437d105e0b6c66cc82aab38348678d7dc2fa51\"" Dec 13 01:39:46.220740 containerd[1476]: time="2024-12-13T01:39:46.220717414Z" level=info msg="StartContainer for \"83589a63c511ace557d91c6739437d105e0b6c66cc82aab38348678d7dc2fa51\"" Dec 13 01:39:46.254400 systemd[1]: Started cri-containerd-83589a63c511ace557d91c6739437d105e0b6c66cc82aab38348678d7dc2fa51.scope - libcontainer container 83589a63c511ace557d91c6739437d105e0b6c66cc82aab38348678d7dc2fa51. Dec 13 01:39:46.288129 containerd[1476]: time="2024-12-13T01:39:46.288067436Z" level=info msg="StartContainer for \"83589a63c511ace557d91c6739437d105e0b6c66cc82aab38348678d7dc2fa51\" returns successfully" Dec 13 01:39:46.289263 systemd[1]: cri-containerd-83589a63c511ace557d91c6739437d105e0b6c66cc82aab38348678d7dc2fa51.scope: Deactivated successfully. Dec 13 01:39:46.372797 containerd[1476]: time="2024-12-13T01:39:46.372719379Z" level=info msg="shim disconnected" id=83589a63c511ace557d91c6739437d105e0b6c66cc82aab38348678d7dc2fa51 namespace=k8s.io Dec 13 01:39:46.372797 containerd[1476]: time="2024-12-13T01:39:46.372785473Z" level=warning msg="cleaning up after shim disconnected" id=83589a63c511ace557d91c6739437d105e0b6c66cc82aab38348678d7dc2fa51 namespace=k8s.io Dec 13 01:39:46.372797 containerd[1476]: time="2024-12-13T01:39:46.372796884Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:39:46.387648 containerd[1476]: time="2024-12-13T01:39:46.387576909Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:39:46Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Dec 13 01:39:46.741270 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3e837def9316c80179ce04e0312d3a22475de281f7ead91cf2742bf7ed76f5d6-rootfs.mount: Deactivated successfully. 
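Note: each Cilium init container in the log ("mount-cgroup", "apply-sysctl-overwrites", "mount-bpf-fs") follows the same pattern: StartContainer returns, the transient cri-containerd-….scope is deactivated almost immediately, and containerd reports the shim as disconnected, i.e. the container ran to completion and exited. As an illustration only (the kubelet relies on its own PLEG/event machinery, not a loop like this), a small sketch of waiting for such a short-lived container to exit by polling CRI ContainerStatus; the container ID is a placeholder.

```go
// Hypothetical helper: poll ContainerStatus until a short-lived init container
// (e.g. the "mount-bpf-fs" container 83589a63c511… in the log) has exited.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func waitForExit(ctx context.Context, rt runtimeapi.RuntimeServiceClient, id string) (int32, error) {
	for {
		resp, err := rt.ContainerStatus(ctx, &runtimeapi.ContainerStatusRequest{ContainerId: id})
		if err != nil {
			return 0, err
		}
		if resp.Status.State == runtimeapi.ContainerState_CONTAINER_EXITED {
			return resp.Status.ExitCode, nil
		}
		time.Sleep(200 * time.Millisecond)
	}
}

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// "<container-id>" is a placeholder for a real CRI container ID.
	code, err := waitForExit(context.Background(), rt, "<container-id>")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("init container exited with code", code)
}
```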
Dec 13 01:39:46.791127 kubelet[1779]: E1213 01:39:46.790991 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:39:47.202594 kubelet[1779]: E1213 01:39:47.202558 1779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:39:47.204711 containerd[1476]: time="2024-12-13T01:39:47.204665633Z" level=info msg="CreateContainer within sandbox \"25a330011af12af7862cbbed8ad523e7824597ac48bae3d187c350fd0be9e1e4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 01:39:47.305251 containerd[1476]: time="2024-12-13T01:39:47.305199955Z" level=info msg="CreateContainer within sandbox \"25a330011af12af7862cbbed8ad523e7824597ac48bae3d187c350fd0be9e1e4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"118ab92c11b524bbb871ad0488c444d2629bda51f314b98594eca596ca1aad50\"" Dec 13 01:39:47.305842 containerd[1476]: time="2024-12-13T01:39:47.305811564Z" level=info msg="StartContainer for \"118ab92c11b524bbb871ad0488c444d2629bda51f314b98594eca596ca1aad50\"" Dec 13 01:39:47.338355 systemd[1]: Started cri-containerd-118ab92c11b524bbb871ad0488c444d2629bda51f314b98594eca596ca1aad50.scope - libcontainer container 118ab92c11b524bbb871ad0488c444d2629bda51f314b98594eca596ca1aad50. Dec 13 01:39:47.365723 systemd[1]: cri-containerd-118ab92c11b524bbb871ad0488c444d2629bda51f314b98594eca596ca1aad50.scope: Deactivated successfully. Dec 13 01:39:47.369810 containerd[1476]: time="2024-12-13T01:39:47.367586497Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce395074_b427_4b04_8597_fe8ce9523ec8.slice/cri-containerd-118ab92c11b524bbb871ad0488c444d2629bda51f314b98594eca596ca1aad50.scope/memory.events\": no such file or directory" Dec 13 01:39:47.372981 containerd[1476]: time="2024-12-13T01:39:47.372935231Z" level=info msg="StartContainer for \"118ab92c11b524bbb871ad0488c444d2629bda51f314b98594eca596ca1aad50\" returns successfully" Dec 13 01:39:47.412158 containerd[1476]: time="2024-12-13T01:39:47.412058061Z" level=info msg="shim disconnected" id=118ab92c11b524bbb871ad0488c444d2629bda51f314b98594eca596ca1aad50 namespace=k8s.io Dec 13 01:39:47.412419 containerd[1476]: time="2024-12-13T01:39:47.412129445Z" level=warning msg="cleaning up after shim disconnected" id=118ab92c11b524bbb871ad0488c444d2629bda51f314b98594eca596ca1aad50 namespace=k8s.io Dec 13 01:39:47.412419 containerd[1476]: time="2024-12-13T01:39:47.412288033Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:39:47.687741 containerd[1476]: time="2024-12-13T01:39:47.687664865Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:39:47.688740 containerd[1476]: time="2024-12-13T01:39:47.688648552Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907221" Dec 13 01:39:47.690226 containerd[1476]: time="2024-12-13T01:39:47.690091742Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Dec 13 01:39:47.695155 containerd[1476]: time="2024-12-13T01:39:47.692860010Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.393798816s" Dec 13 01:39:47.695155 containerd[1476]: time="2024-12-13T01:39:47.692918329Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 13 01:39:47.698118 containerd[1476]: time="2024-12-13T01:39:47.698062649Z" level=info msg="CreateContainer within sandbox \"dfddfd4e63df0d686ce2e1b303b5db7190249ab44604bb5be13dcb6f41cbd2e4\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 01:39:47.713763 containerd[1476]: time="2024-12-13T01:39:47.713694301Z" level=info msg="CreateContainer within sandbox \"dfddfd4e63df0d686ce2e1b303b5db7190249ab44604bb5be13dcb6f41cbd2e4\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"3addfd7d300df925f092ded8559c112a9927cb2a6e87222e0b63494422a27250\"" Dec 13 01:39:47.714337 containerd[1476]: time="2024-12-13T01:39:47.714308654Z" level=info msg="StartContainer for \"3addfd7d300df925f092ded8559c112a9927cb2a6e87222e0b63494422a27250\"" Dec 13 01:39:47.739223 kubelet[1779]: E1213 01:39:47.739129 1779 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 01:39:47.742852 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-118ab92c11b524bbb871ad0488c444d2629bda51f314b98594eca596ca1aad50-rootfs.mount: Deactivated successfully. Dec 13 01:39:47.756282 systemd[1]: Started cri-containerd-3addfd7d300df925f092ded8559c112a9927cb2a6e87222e0b63494422a27250.scope - libcontainer container 3addfd7d300df925f092ded8559c112a9927cb2a6e87222e0b63494422a27250. Dec 13 01:39:47.787113 containerd[1476]: time="2024-12-13T01:39:47.787033609Z" level=info msg="StartContainer for \"3addfd7d300df925f092ded8559c112a9927cb2a6e87222e0b63494422a27250\" returns successfully" Dec 13 01:39:47.791263 kubelet[1779]: E1213 01:39:47.791200 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:39:48.206477 kubelet[1779]: E1213 01:39:48.206445 1779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:39:48.208184 kubelet[1779]: E1213 01:39:48.208130 1779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:39:48.208728 containerd[1476]: time="2024-12-13T01:39:48.208683868Z" level=info msg="CreateContainer within sandbox \"25a330011af12af7862cbbed8ad523e7824597ac48bae3d187c350fd0be9e1e4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 01:39:48.606092 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2546838430.mount: Deactivated successfully. 
Dec 13 01:39:48.643836 containerd[1476]: time="2024-12-13T01:39:48.643750644Z" level=info msg="CreateContainer within sandbox \"25a330011af12af7862cbbed8ad523e7824597ac48bae3d187c350fd0be9e1e4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ef7ef5fcfeecaec3e980bf0f551664c2839d8c97f0b49c234f31213b9cd3d53c\"" Dec 13 01:39:48.644622 containerd[1476]: time="2024-12-13T01:39:48.644575964Z" level=info msg="StartContainer for \"ef7ef5fcfeecaec3e980bf0f551664c2839d8c97f0b49c234f31213b9cd3d53c\"" Dec 13 01:39:48.679352 systemd[1]: Started cri-containerd-ef7ef5fcfeecaec3e980bf0f551664c2839d8c97f0b49c234f31213b9cd3d53c.scope - libcontainer container ef7ef5fcfeecaec3e980bf0f551664c2839d8c97f0b49c234f31213b9cd3d53c. Dec 13 01:39:48.791542 kubelet[1779]: E1213 01:39:48.791497 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:39:48.796337 containerd[1476]: time="2024-12-13T01:39:48.796270988Z" level=info msg="StartContainer for \"ef7ef5fcfeecaec3e980bf0f551664c2839d8c97f0b49c234f31213b9cd3d53c\" returns successfully" Dec 13 01:39:49.196173 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Dec 13 01:39:49.214121 kubelet[1779]: E1213 01:39:49.214085 1779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:39:49.214656 kubelet[1779]: E1213 01:39:49.214631 1779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:39:49.235400 kubelet[1779]: I1213 01:39:49.235333 1779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4mjb9" podStartSLOduration=6.235315922 podStartE2EDuration="6.235315922s" podCreationTimestamp="2024-12-13 01:39:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:39:49.235044983 +0000 UTC m=+73.248732082" watchObservedRunningTime="2024-12-13 01:39:49.235315922 +0000 UTC m=+73.249003021" Dec 13 01:39:49.235626 kubelet[1779]: I1213 01:39:49.235497 1779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-qz99c" podStartSLOduration=2.83810029 podStartE2EDuration="6.235493305s" podCreationTimestamp="2024-12-13 01:39:43 +0000 UTC" firstStartedPulling="2024-12-13 01:39:44.298699605 +0000 UTC m=+68.312386704" lastFinishedPulling="2024-12-13 01:39:47.69609262 +0000 UTC m=+71.709779719" observedRunningTime="2024-12-13 01:39:48.584832782 +0000 UTC m=+72.598519882" watchObservedRunningTime="2024-12-13 01:39:49.235493305 +0000 UTC m=+73.249180404" Dec 13 01:39:49.307423 kubelet[1779]: I1213 01:39:49.307127 1779 setters.go:580] "Node became not ready" node="10.0.0.125" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T01:39:49Z","lastTransitionTime":"2024-12-13T01:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 13 01:39:49.792642 kubelet[1779]: E1213 01:39:49.792571 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:39:50.216110 kubelet[1779]: E1213 01:39:50.216072 
1779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:39:50.793843 kubelet[1779]: E1213 01:39:50.793567 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:39:51.794430 kubelet[1779]: E1213 01:39:51.794357 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:39:52.729744 systemd-networkd[1409]: lxc_health: Link UP Dec 13 01:39:52.740358 systemd-networkd[1409]: lxc_health: Gained carrier Dec 13 01:39:52.795601 kubelet[1779]: E1213 01:39:52.795481 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:39:53.769610 kubelet[1779]: E1213 01:39:53.769488 1779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:39:53.796335 kubelet[1779]: E1213 01:39:53.796271 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:39:53.975207 systemd-networkd[1409]: lxc_health: Gained IPv6LL Dec 13 01:39:54.223993 kubelet[1779]: E1213 01:39:54.223929 1779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:39:54.797465 kubelet[1779]: E1213 01:39:54.797418 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:39:55.226030 kubelet[1779]: E1213 01:39:55.225978 1779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:39:55.798275 kubelet[1779]: E1213 01:39:55.798181 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:39:56.739490 kubelet[1779]: E1213 01:39:56.739431 1779 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:39:56.799349 kubelet[1779]: E1213 01:39:56.799303 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:39:57.800010 kubelet[1779]: E1213 01:39:57.799962 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:39:58.623721 systemd[1]: run-containerd-runc-k8s.io-ef7ef5fcfeecaec3e980bf0f551664c2839d8c97f0b49c234f31213b9cd3d53c-runc.rOl0Wk.mount: Deactivated successfully. Dec 13 01:39:58.800562 kubelet[1779]: E1213 01:39:58.800505 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:39:59.801623 kubelet[1779]: E1213 01:39:59.801552 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:40:00.801957 kubelet[1779]: E1213 01:40:00.801889 1779 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
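Note: the pod_startup_latency_tracker entries at 01:39:49 are internally consistent with the startup SLO excluding image-pull time. For cilium-operator-599987898-qz99c, podStartSLOduration (2.83810029s) equals podStartE2EDuration (6.235493305s) minus the pull window (lastFinishedPulling m=+71.709779719 minus firstStartedPulling m=+68.312386704, about 3.397393015s); cilium-4mjb9, whose pull timestamps are zero-valued, accordingly reports identical SLO and E2E durations. The containerd-reported pull duration (3.393798816s at 01:39:47) is slightly shorter than the kubelet-side window, which is expected since the two are measured in different processes. A tiny Go check of the arithmetic, using the monotonic offsets copied from the log:

```go
// Verifies podStartSLOduration = podStartE2EDuration - (lastFinishedPulling - firstStartedPulling)
// for cilium-operator-599987898-qz99c, using the m=+ offsets from the tracker entry.
package main

import "fmt"

func main() {
	const (
		firstStartedPulling = 68.312386704 // s, m=+ offset from the log
		lastFinishedPulling = 71.709779719 // s
		podStartE2E         = 6.235493305  // s
	)
	pullWindow := lastFinishedPulling - firstStartedPulling // ≈ 3.397393015 s
	sloDuration := podStartE2E - pullWindow                 // ≈ 2.838100290 s
	fmt.Printf("pull window %.9fs, SLO duration %.9fs\n", pullWindow, sloDuration)
}
```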