Feb 13 16:15:39.235421 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 13:54:58 -00 2025
Feb 13 16:15:39.235466 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2
Feb 13 16:15:39.235487 kernel: BIOS-provided physical RAM map:
Feb 13 16:15:39.235499 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 13 16:15:39.235510 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 13 16:15:39.235522 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 13 16:15:39.235536 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Feb 13 16:15:39.235548 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Feb 13 16:15:39.235558 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 13 16:15:39.235573 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 13 16:15:39.235586 kernel: NX (Execute Disable) protection: active
Feb 13 16:15:39.235597 kernel: APIC: Static calls initialized
Feb 13 16:15:39.235619 kernel: SMBIOS 2.8 present.
Feb 13 16:15:39.235632 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Feb 13 16:15:39.235645 kernel: Hypervisor detected: KVM
Feb 13 16:15:39.235662 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 16:15:39.235679 kernel: kvm-clock: using sched offset of 4049129287 cycles
Feb 13 16:15:39.235691 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 16:15:39.235704 kernel: tsc: Detected 1995.311 MHz processor
Feb 13 16:15:39.235716 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 16:15:39.235728 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 16:15:39.235741 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Feb 13 16:15:39.235755 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Feb 13 16:15:39.235767 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 16:15:39.235783 kernel: ACPI: Early table checksum verification disabled
Feb 13 16:15:39.235794 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Feb 13 16:15:39.235805 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 16:15:39.235817 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 16:15:39.235830 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 16:15:39.235842 kernel: ACPI: FACS 0x000000007FFE0000 000040
Feb 13 16:15:39.235853 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 16:15:39.235864 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 16:15:39.235875 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 16:15:39.235891 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 16:15:39.235901 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Feb 13 16:15:39.235911 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Feb 13 16:15:39.235922 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Feb 13 16:15:39.235932 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Feb 13 16:15:39.235942 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Feb 13 16:15:39.235954 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Feb 13 16:15:39.235976 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Feb 13 16:15:39.235988 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 13 16:15:39.235999 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 13 16:15:39.236011 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Feb 13 16:15:39.236023 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Feb 13 16:15:39.236042 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Feb 13 16:15:39.236055 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Feb 13 16:15:39.236074 kernel: Zone ranges:
Feb 13 16:15:39.236088 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 16:15:39.236100 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Feb 13 16:15:39.236112 kernel: Normal empty
Feb 13 16:15:39.236126 kernel: Movable zone start for each node
Feb 13 16:15:39.236138 kernel: Early memory node ranges
Feb 13 16:15:39.236150 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 13 16:15:39.236163 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Feb 13 16:15:39.236175 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Feb 13 16:15:39.236195 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 16:15:39.236228 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 13 16:15:39.236247 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Feb 13 16:15:39.236260 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 13 16:15:39.236273 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 16:15:39.236286 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 13 16:15:39.236299 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 13 16:15:39.236311 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 16:15:39.236326 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 16:15:39.236344 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 16:15:39.236356 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 16:15:39.236368 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 16:15:39.236381 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 13 16:15:39.236393 kernel: TSC deadline timer available
Feb 13 16:15:39.236405 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 13 16:15:39.236417 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 13 16:15:39.236430 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Feb 13 16:15:39.236447 kernel: Booting paravirtualized kernel on KVM
Feb 13 16:15:39.236460 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 16:15:39.236477 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Feb 13 16:15:39.236494 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Feb 13 16:15:39.236666 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Feb 13 16:15:39.236678 kernel: pcpu-alloc: [0] 0 1
Feb 13 16:15:39.236691 kernel: kvm-guest: PV spinlocks disabled, no host support
Feb 13 16:15:39.236706 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2
Feb 13 16:15:39.236719 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 16:15:39.236731 kernel: random: crng init done
Feb 13 16:15:39.236750 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 16:15:39.236763 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 13 16:15:39.236779 kernel: Fallback order for Node 0: 0
Feb 13 16:15:39.236791 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Feb 13 16:15:39.236804 kernel: Policy zone: DMA32
Feb 13 16:15:39.236815 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 16:15:39.236828 kernel: Memory: 1971204K/2096612K available (12288K kernel code, 2299K rwdata, 22736K rodata, 42976K init, 2216K bss, 125148K reserved, 0K cma-reserved)
Feb 13 16:15:39.236840 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 16:15:39.236856 kernel: Kernel/User page tables isolation: enabled
Feb 13 16:15:39.236868 kernel: ftrace: allocating 37920 entries in 149 pages
Feb 13 16:15:39.236879 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 16:15:39.236890 kernel: Dynamic Preempt: voluntary
Feb 13 16:15:39.236901 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 16:15:39.236915 kernel: rcu: RCU event tracing is enabled.
Feb 13 16:15:39.236928 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 16:15:39.236943 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 16:15:39.236955 kernel: Rude variant of Tasks RCU enabled.
Feb 13 16:15:39.236967 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 16:15:39.236993 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 16:15:39.237006 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 16:15:39.237019 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Feb 13 16:15:39.237031 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 16:15:39.237049 kernel: Console: colour VGA+ 80x25 Feb 13 16:15:39.237062 kernel: printk: console [tty0] enabled Feb 13 16:15:39.237075 kernel: printk: console [ttyS0] enabled Feb 13 16:15:39.237086 kernel: ACPI: Core revision 20230628 Feb 13 16:15:39.237098 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Feb 13 16:15:39.237123 kernel: APIC: Switch to symmetric I/O mode setup Feb 13 16:15:39.237134 kernel: x2apic enabled Feb 13 16:15:39.237145 kernel: APIC: Switched APIC routing to: physical x2apic Feb 13 16:15:39.237157 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Feb 13 16:15:39.237170 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3985c177478, max_idle_ns: 881590705666 ns Feb 13 16:15:39.237182 kernel: Calibrating delay loop (skipped) preset value.. 3990.62 BogoMIPS (lpj=1995311) Feb 13 16:15:39.237192 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Feb 13 16:15:39.237989 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Feb 13 16:15:39.238039 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 13 16:15:39.238055 kernel: Spectre V2 : Mitigation: Retpolines Feb 13 16:15:39.238071 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 13 16:15:39.238090 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 13 16:15:39.238106 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Feb 13 16:15:39.238121 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Feb 13 16:15:39.238137 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Feb 13 16:15:39.238153 kernel: MDS: Mitigation: Clear CPU buffers Feb 13 16:15:39.238168 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Feb 13 16:15:39.238194 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 13 16:15:39.238235 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 13 16:15:39.238251 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 13 16:15:39.238267 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 13 16:15:39.238283 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Feb 13 16:15:39.238298 kernel: Freeing SMP alternatives memory: 32K Feb 13 16:15:39.238314 kernel: pid_max: default: 32768 minimum: 301 Feb 13 16:15:39.238329 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 16:15:39.238349 kernel: landlock: Up and running. Feb 13 16:15:39.238364 kernel: SELinux: Initializing. Feb 13 16:15:39.238380 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Feb 13 16:15:39.238396 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Feb 13 16:15:39.238411 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Feb 13 16:15:39.238428 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 16:15:39.238443 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 16:15:39.238459 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 16:15:39.238475 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. 
Feb 13 16:15:39.238494 kernel: signal: max sigframe size: 1776 Feb 13 16:15:39.238510 kernel: rcu: Hierarchical SRCU implementation. Feb 13 16:15:39.238528 kernel: rcu: Max phase no-delay instances is 400. Feb 13 16:15:39.238544 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Feb 13 16:15:39.238560 kernel: smp: Bringing up secondary CPUs ... Feb 13 16:15:39.238575 kernel: smpboot: x86: Booting SMP configuration: Feb 13 16:15:39.238590 kernel: .... node #0, CPUs: #1 Feb 13 16:15:39.238606 kernel: smp: Brought up 1 node, 2 CPUs Feb 13 16:15:39.238626 kernel: smpboot: Max logical packages: 1 Feb 13 16:15:39.238646 kernel: smpboot: Total of 2 processors activated (7981.24 BogoMIPS) Feb 13 16:15:39.238662 kernel: devtmpfs: initialized Feb 13 16:15:39.238677 kernel: x86/mm: Memory block size: 128MB Feb 13 16:15:39.238693 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 16:15:39.238708 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Feb 13 16:15:39.238724 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 16:15:39.238740 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 16:15:39.238755 kernel: audit: initializing netlink subsys (disabled) Feb 13 16:15:39.238771 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 16:15:39.238791 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 13 16:15:39.238807 kernel: cpuidle: using governor menu Feb 13 16:15:39.238823 kernel: audit: type=2000 audit(1739463337.570:1): state=initialized audit_enabled=0 res=1 Feb 13 16:15:39.238839 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 16:15:39.238855 kernel: dca service started, version 1.12.1 Feb 13 16:15:39.238870 kernel: PCI: Using configuration type 1 for base access Feb 13 16:15:39.238886 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Feb 13 16:15:39.238902 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 16:15:39.238918 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 16:15:39.238937 kernel: ACPI: Added _OSI(Module Device) Feb 13 16:15:39.238953 kernel: ACPI: Added _OSI(Processor Device) Feb 13 16:15:39.238969 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 16:15:39.238984 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 16:15:39.238999 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 13 16:15:39.239015 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Feb 13 16:15:39.239031 kernel: ACPI: Interpreter enabled Feb 13 16:15:39.239046 kernel: ACPI: PM: (supports S0 S5) Feb 13 16:15:39.239061 kernel: ACPI: Using IOAPIC for interrupt routing Feb 13 16:15:39.239081 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 13 16:15:39.239096 kernel: PCI: Using E820 reservations for host bridge windows Feb 13 16:15:39.239112 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Feb 13 16:15:39.239128 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 13 16:15:39.242912 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Feb 13 16:15:39.243116 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Feb 13 16:15:39.243406 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Feb 13 16:15:39.243437 kernel: acpiphp: Slot [3] registered Feb 13 16:15:39.243452 kernel: acpiphp: Slot [4] registered Feb 13 16:15:39.243465 kernel: acpiphp: Slot [5] registered Feb 13 16:15:39.243480 kernel: acpiphp: Slot [6] registered Feb 13 16:15:39.243496 kernel: acpiphp: Slot [7] registered Feb 13 16:15:39.243511 kernel: acpiphp: Slot [8] registered Feb 13 16:15:39.243529 kernel: acpiphp: Slot [9] registered Feb 13 16:15:39.243545 kernel: acpiphp: Slot [10] registered Feb 13 16:15:39.243561 kernel: acpiphp: Slot [11] registered Feb 13 16:15:39.243576 kernel: acpiphp: Slot [12] registered Feb 13 16:15:39.243597 kernel: acpiphp: Slot [13] registered Feb 13 16:15:39.243612 kernel: acpiphp: Slot [14] registered Feb 13 16:15:39.243624 kernel: acpiphp: Slot [15] registered Feb 13 16:15:39.243637 kernel: acpiphp: Slot [16] registered Feb 13 16:15:39.243651 kernel: acpiphp: Slot [17] registered Feb 13 16:15:39.243665 kernel: acpiphp: Slot [18] registered Feb 13 16:15:39.243681 kernel: acpiphp: Slot [19] registered Feb 13 16:15:39.243696 kernel: acpiphp: Slot [20] registered Feb 13 16:15:39.243711 kernel: acpiphp: Slot [21] registered Feb 13 16:15:39.243730 kernel: acpiphp: Slot [22] registered Feb 13 16:15:39.243746 kernel: acpiphp: Slot [23] registered Feb 13 16:15:39.243762 kernel: acpiphp: Slot [24] registered Feb 13 16:15:39.243777 kernel: acpiphp: Slot [25] registered Feb 13 16:15:39.243792 kernel: acpiphp: Slot [26] registered Feb 13 16:15:39.243805 kernel: acpiphp: Slot [27] registered Feb 13 16:15:39.243819 kernel: acpiphp: Slot [28] registered Feb 13 16:15:39.243832 kernel: acpiphp: Slot [29] registered Feb 13 16:15:39.243845 kernel: acpiphp: Slot [30] registered Feb 13 16:15:39.243861 kernel: acpiphp: Slot [31] registered Feb 13 16:15:39.243879 kernel: PCI host bridge to bus 0000:00 Feb 13 16:15:39.244074 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 13 16:15:39.244225 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] 
Feb 13 16:15:39.244365 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 16:15:39.244494 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Feb 13 16:15:39.244625 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Feb 13 16:15:39.244751 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 16:15:39.244952 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 13 16:15:39.245151 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 13 16:15:39.251054 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Feb 13 16:15:39.251431 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Feb 13 16:15:39.251595 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Feb 13 16:15:39.251741 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Feb 13 16:15:39.251899 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Feb 13 16:15:39.252041 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Feb 13 16:15:39.254353 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Feb 13 16:15:39.254595 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Feb 13 16:15:39.254762 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Feb 13 16:15:39.254904 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Feb 13 16:15:39.255045 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Feb 13 16:15:39.257100 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Feb 13 16:15:39.257408 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Feb 13 16:15:39.257569 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Feb 13 16:15:39.257716 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Feb 13 16:15:39.257859 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Feb 13 16:15:39.258001 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 13 16:15:39.258191 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Feb 13 16:15:39.258362 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Feb 13 16:15:39.258507 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Feb 13 16:15:39.258653 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Feb 13 16:15:39.258808 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Feb 13 16:15:39.258953 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Feb 13 16:15:39.259096 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Feb 13 16:15:39.261499 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Feb 13 16:15:39.261720 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Feb 13 16:15:39.261873 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Feb 13 16:15:39.262024 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Feb 13 16:15:39.262171 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Feb 13 16:15:39.262363 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Feb 13 16:15:39.262519 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Feb 13 16:15:39.262683 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Feb 13 16:15:39.262827 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Feb 13 16:15:39.262998 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Feb 13 16:15:39.263582 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Feb 13 16:15:39.263753 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Feb 13 16:15:39.263909 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Feb 13 16:15:39.264102 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Feb 13 16:15:39.265561 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Feb 13 16:15:39.265751 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Feb 13 16:15:39.265772 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 16:15:39.265789 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 16:15:39.265804 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 16:15:39.265820 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 16:15:39.265837 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 13 16:15:39.265862 kernel: iommu: Default domain type: Translated
Feb 13 16:15:39.265878 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 16:15:39.265893 kernel: PCI: Using ACPI for IRQ routing
Feb 13 16:15:39.265909 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 16:15:39.265924 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 13 16:15:39.265940 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Feb 13 16:15:39.266088 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Feb 13 16:15:39.266253 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Feb 13 16:15:39.266396 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 13 16:15:39.266415 kernel: vgaarb: loaded
Feb 13 16:15:39.266435 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Feb 13 16:15:39.266450 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Feb 13 16:15:39.266466 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 16:15:39.266481 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 16:15:39.266496 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 16:15:39.266511 kernel: pnp: PnP ACPI init
Feb 13 16:15:39.266526 kernel: pnp: PnP ACPI: found 4 devices
Feb 13 16:15:39.266541 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 16:15:39.266557 kernel: NET: Registered PF_INET protocol family
Feb 13 16:15:39.266575 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 16:15:39.266591 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Feb 13 16:15:39.266607 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 16:15:39.266622 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 13 16:15:39.266638 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Feb 13 16:15:39.266653 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Feb 13 16:15:39.266669 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 13 16:15:39.266684 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 13 16:15:39.266704 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 16:15:39.266719 kernel: NET: Registered PF_XDP protocol family
Feb 13 16:15:39.266864 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 13 16:15:39.266996 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 13 16:15:39.267125 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 16:15:39.267520 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Feb 13 16:15:39.267662 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Feb 13 16:15:39.267813 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Feb 13 16:15:39.267963 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 13 16:15:39.267994 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Feb 13 16:15:39.268146 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 49599 usecs
Feb 13 16:15:39.268165 kernel: PCI: CLS 0 bytes, default 64
Feb 13 16:15:39.268181 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 13 16:15:39.271165 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x3985c177478, max_idle_ns: 881590705666 ns
Feb 13 16:15:39.271229 kernel: Initialise system trusted keyrings
Feb 13 16:15:39.271245 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Feb 13 16:15:39.271261 kernel: Key type asymmetric registered
Feb 13 16:15:39.271286 kernel: Asymmetric key parser 'x509' registered
Feb 13 16:15:39.271302 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 16:15:39.271318 kernel: io scheduler mq-deadline registered
Feb 13 16:15:39.271332 kernel: io scheduler kyber registered
Feb 13 16:15:39.271348 kernel: io scheduler bfq registered
Feb 13 16:15:39.271363 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 16:15:39.271379 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Feb 13 16:15:39.271395 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb 13 16:15:39.271411 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb 13 16:15:39.271430 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 16:15:39.271446 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 16:15:39.271460 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 13 16:15:39.271475 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 13 16:15:39.271491 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 13 16:15:39.271506 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 13 16:15:39.271768 kernel: rtc_cmos 00:03: RTC can wake from S4
Feb 13 16:15:39.271911 kernel: rtc_cmos 00:03: registered as rtc0
Feb 13 16:15:39.272052 kernel: rtc_cmos 00:03: setting system clock to 2025-02-13T16:15:38 UTC (1739463338)
Feb 13 16:15:39.272186 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Feb 13 16:15:39.272247 kernel: intel_pstate: CPU model not supported
Feb 13 16:15:39.272261 kernel: NET: Registered PF_INET6 protocol family
Feb 13 16:15:39.272275 kernel: Segment Routing with IPv6
Feb 13 16:15:39.272291 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 16:15:39.272306 kernel: NET: Registered PF_PACKET protocol family
Feb 13 16:15:39.272322 kernel: Key type dns_resolver registered
Feb 13 16:15:39.272337 kernel: IPI shorthand broadcast: enabled
Feb 13 16:15:39.272360 kernel: sched_clock: Marking stable (1597008655, 188401438)->(1949209188, -163799095)
Feb 13 16:15:39.272375 kernel: registered taskstats version 1
Feb 13 16:15:39.272391 kernel: Loading compiled-in X.509 certificates
Feb 13 16:15:39.272407 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 9ec780e1db69d46be90bbba73ae62b0106e27ae0'
Feb 13 16:15:39.272423 kernel: Key type .fscrypt registered
Feb 13 16:15:39.272438 kernel: Key type fscrypt-provisioning registered
Feb 13 16:15:39.272453 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 16:15:39.272469 kernel: ima: Allocated hash algorithm: sha1
Feb 13 16:15:39.272484 kernel: ima: No architecture policies found
Feb 13 16:15:39.272505 kernel: clk: Disabling unused clocks
Feb 13 16:15:39.272521 kernel: Freeing unused kernel image (initmem) memory: 42976K
Feb 13 16:15:39.272536 kernel: Write protecting the kernel read-only data: 36864k
Feb 13 16:15:39.272573 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K
Feb 13 16:15:39.272590 kernel: Run /init as init process
Feb 13 16:15:39.272607 kernel: with arguments:
Feb 13 16:15:39.272622 kernel: /init
Feb 13 16:15:39.272636 kernel: with environment:
Feb 13 16:15:39.272652 kernel: HOME=/
Feb 13 16:15:39.272671 kernel: TERM=linux
Feb 13 16:15:39.272686 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 16:15:39.272706 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 16:15:39.272727 systemd[1]: Detected virtualization kvm.
Feb 13 16:15:39.272745 systemd[1]: Detected architecture x86-64.
Feb 13 16:15:39.272762 systemd[1]: Running in initrd.
Feb 13 16:15:39.272777 systemd[1]: No hostname configured, using default hostname.
Feb 13 16:15:39.272798 systemd[1]: Hostname set to .
Feb 13 16:15:39.272815 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 16:15:39.272832 systemd[1]: Queued start job for default target initrd.target.
Feb 13 16:15:39.272850 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 16:15:39.272865 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 16:15:39.272883 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 16:15:39.272900 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 16:15:39.272916 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 16:15:39.272937 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 16:15:39.272957 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 16:15:39.272973 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 16:15:39.273005 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 16:15:39.273021 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 16:15:39.273038 systemd[1]: Reached target paths.target - Path Units.
Feb 13 16:15:39.273056 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 16:15:39.273077 systemd[1]: Reached target swap.target - Swaps.
Feb 13 16:15:39.273094 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 16:15:39.273114 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 16:15:39.273132 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 16:15:39.273149 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 16:15:39.273165 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 16:15:39.273183 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 16:15:39.273224 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 16:15:39.273240 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 16:15:39.273258 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 16:15:39.273274 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 16:15:39.273287 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 16:15:39.273303 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 16:15:39.273320 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 16:15:39.273338 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 16:15:39.273353 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 16:15:39.273366 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 16:15:39.273425 systemd-journald[182]: Collecting audit messages is disabled. Feb 13 16:15:39.273462 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 16:15:39.273476 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 16:15:39.273490 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 16:15:39.273504 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 16:15:39.273522 systemd-journald[182]: Journal started Feb 13 16:15:39.273555 systemd-journald[182]: Runtime Journal (/run/log/journal/9584a5db85fd438eb35440ff586df530) is 4.9M, max 39.3M, 34.4M free. Feb 13 16:15:39.261306 systemd-modules-load[183]: Inserted module 'overlay' Feb 13 16:15:39.278242 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 16:15:39.290565 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 16:15:39.314750 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 16:15:39.382747 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 16:15:39.382797 kernel: Bridge firewalling registered Feb 13 16:15:39.332814 systemd-modules-load[183]: Inserted module 'br_netfilter' Feb 13 16:15:39.384088 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 16:15:39.385383 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 16:15:39.387057 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 16:15:39.396572 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 16:15:39.399455 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 16:15:39.406499 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 16:15:39.438436 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 16:15:39.447626 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Feb 13 16:15:39.460362 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 16:15:39.469754 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 16:15:39.475531 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 16:15:39.506899 dracut-cmdline[218]: dracut-dracut-053 Feb 13 16:15:39.513191 systemd-resolved[214]: Positive Trust Anchors: Feb 13 16:15:39.513225 systemd-resolved[214]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 16:15:39.513278 systemd-resolved[214]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 16:15:39.518170 systemd-resolved[214]: Defaulting to hostname 'linux'. Feb 13 16:15:39.521751 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2 Feb 13 16:15:39.524814 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 16:15:39.526003 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 16:15:39.696316 kernel: SCSI subsystem initialized Feb 13 16:15:39.710239 kernel: Loading iSCSI transport class v2.0-870. Feb 13 16:15:39.728698 kernel: iscsi: registered transport (tcp) Feb 13 16:15:39.782395 kernel: iscsi: registered transport (qla4xxx) Feb 13 16:15:39.782495 kernel: QLogic iSCSI HBA Driver Feb 13 16:15:39.900294 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 16:15:39.913415 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 16:15:39.988418 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 16:15:39.988519 kernel: device-mapper: uevent: version 1.0.3 Feb 13 16:15:39.995354 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 16:15:40.087504 kernel: raid6: avx2x4 gen() 17529 MB/s Feb 13 16:15:40.125059 kernel: raid6: avx2x2 gen() 17765 MB/s Feb 13 16:15:40.141382 kernel: raid6: avx2x1 gen() 13281 MB/s Feb 13 16:15:40.141491 kernel: raid6: using algorithm avx2x2 gen() 17765 MB/s Feb 13 16:15:40.160695 kernel: raid6: .... xor() 9963 MB/s, rmw enabled Feb 13 16:15:40.160815 kernel: raid6: using avx2x2 recovery algorithm Feb 13 16:15:40.195242 kernel: xor: automatically using best checksumming function avx Feb 13 16:15:40.424275 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 16:15:40.449191 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 16:15:40.467462 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Feb 13 16:15:40.486898 systemd-udevd[402]: Using default interface naming scheme 'v255'. Feb 13 16:15:40.493482 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 16:15:40.503510 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 16:15:40.556232 dracut-pre-trigger[412]: rd.md=0: removing MD RAID activation Feb 13 16:15:40.633266 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 16:15:40.644739 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 16:15:40.736989 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 16:15:40.746579 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 16:15:40.800253 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 16:15:40.810353 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 16:15:40.812434 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 16:15:40.814508 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 16:15:40.828985 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 16:15:40.874901 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 16:15:40.929841 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Feb 13 16:15:40.999285 kernel: cryptd: max_cpu_qlen set to 1000 Feb 13 16:15:40.999349 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Feb 13 16:15:41.000619 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 16:15:41.000648 kernel: GPT:9289727 != 125829119 Feb 13 16:15:41.000666 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 16:15:41.000684 kernel: GPT:9289727 != 125829119 Feb 13 16:15:41.000700 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 16:15:41.000716 kernel: scsi host0: Virtio SCSI HBA Feb 13 16:15:41.000770 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 16:15:41.000808 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Feb 13 16:15:41.001033 kernel: virtio_blk virtio5: [vdb] 932 512-byte logical blocks (477 kB/466 KiB) Feb 13 16:15:40.949197 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 16:15:40.956630 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 16:15:40.963912 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 16:15:40.978417 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 16:15:40.978749 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 16:15:41.004698 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 16:15:41.021786 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 16:15:41.080244 kernel: libata version 3.00 loaded. 
Feb 13 16:15:41.099231 kernel: ACPI: bus type USB registered Feb 13 16:15:41.107723 kernel: usbcore: registered new interface driver usbfs Feb 13 16:15:41.107811 kernel: usbcore: registered new interface driver hub Feb 13 16:15:41.108363 kernel: usbcore: registered new device driver usb Feb 13 16:15:41.135500 kernel: ata_piix 0000:00:01.1: version 2.13 Feb 13 16:15:41.165519 kernel: scsi host1: ata_piix Feb 13 16:15:41.165731 kernel: BTRFS: device fsid 966d6124-9067-4089-b000-5e99065fe7e2 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (458) Feb 13 16:15:41.165752 kernel: AVX2 version of gcm_enc/dec engaged. Feb 13 16:15:41.165770 kernel: AES CTR mode by8 optimization enabled Feb 13 16:15:41.165788 kernel: scsi host2: ata_piix Feb 13 16:15:41.165951 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Feb 13 16:15:41.165981 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Feb 13 16:15:41.149585 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Feb 13 16:15:41.247310 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (460) Feb 13 16:15:41.164361 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Feb 13 16:15:41.243857 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 16:15:41.255875 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Feb 13 16:15:41.256846 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Feb 13 16:15:41.269306 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 16:15:41.276670 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 16:15:41.279530 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 16:15:41.311775 disk-uuid[534]: Primary Header is updated. Feb 13 16:15:41.311775 disk-uuid[534]: Secondary Entries is updated. Feb 13 16:15:41.311775 disk-uuid[534]: Secondary Header is updated. Feb 13 16:15:41.339232 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 16:15:41.341079 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 16:15:41.426096 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Feb 13 16:15:41.439475 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Feb 13 16:15:41.439661 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Feb 13 16:15:41.439822 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Feb 13 16:15:41.439994 kernel: hub 1-0:1.0: USB hub found Feb 13 16:15:41.440151 kernel: hub 1-0:1.0: 2 ports detected Feb 13 16:15:42.382243 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 16:15:42.382350 disk-uuid[535]: The operation has completed successfully. Feb 13 16:15:42.493423 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 16:15:42.495419 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 16:15:42.527866 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 16:15:42.533588 sh[563]: Success Feb 13 16:15:42.559006 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 13 16:15:42.740375 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. 
Feb 13 16:15:42.756554 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 16:15:42.766317 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Feb 13 16:15:42.803890 kernel: BTRFS info (device dm-0): first mount of filesystem 966d6124-9067-4089-b000-5e99065fe7e2 Feb 13 16:15:42.803992 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Feb 13 16:15:42.804031 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 16:15:42.809045 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 16:15:42.809144 kernel: BTRFS info (device dm-0): using free space tree Feb 13 16:15:42.835034 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 16:15:42.838171 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 16:15:42.851771 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 16:15:42.857245 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 16:15:42.895238 kernel: BTRFS info (device vda6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1 Feb 13 16:15:42.895322 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 16:15:42.900128 kernel: BTRFS info (device vda6): using free space tree Feb 13 16:15:42.919330 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 16:15:42.939471 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 16:15:42.942045 kernel: BTRFS info (device vda6): last unmount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1 Feb 13 16:15:42.954968 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 16:15:42.964588 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 16:15:43.183359 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 16:15:43.197550 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 16:15:43.251962 ignition[673]: Ignition 2.20.0 Feb 13 16:15:43.251979 ignition[673]: Stage: fetch-offline Feb 13 16:15:43.252060 ignition[673]: no configs at "/usr/lib/ignition/base.d" Feb 13 16:15:43.252074 ignition[673]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 13 16:15:43.253319 ignition[673]: parsed url from cmdline: "" Feb 13 16:15:43.253327 ignition[673]: no config URL provided Feb 13 16:15:43.253341 ignition[673]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 16:15:43.259347 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 16:15:43.253363 ignition[673]: no config at "/usr/lib/ignition/user.ign" Feb 13 16:15:43.253374 ignition[673]: failed to fetch config: resource requires networking Feb 13 16:15:43.253737 ignition[673]: Ignition finished successfully Feb 13 16:15:43.265480 systemd-networkd[750]: lo: Link UP Feb 13 16:15:43.265487 systemd-networkd[750]: lo: Gained carrier Feb 13 16:15:43.269909 systemd-networkd[750]: Enumeration completed Feb 13 16:15:43.270833 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 16:15:43.270886 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. 
Feb 13 16:15:43.270891 systemd-networkd[750]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Feb 13 16:15:43.273490 systemd[1]: Reached target network.target - Network. Feb 13 16:15:43.273582 systemd-networkd[750]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 16:15:43.273587 systemd-networkd[750]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 16:15:43.275154 systemd-networkd[750]: eth0: Link UP Feb 13 16:15:43.275159 systemd-networkd[750]: eth0: Gained carrier Feb 13 16:15:43.275170 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Feb 13 16:15:43.282112 systemd-networkd[750]: eth1: Link UP Feb 13 16:15:43.282118 systemd-networkd[750]: eth1: Gained carrier Feb 13 16:15:43.282136 systemd-networkd[750]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 16:15:43.291647 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Feb 13 16:15:43.310370 systemd-networkd[750]: eth1: DHCPv4 address 10.124.0.6/20 acquired from 169.254.169.253 Feb 13 16:15:43.319335 systemd-networkd[750]: eth0: DHCPv4 address 137.184.191.138/20, gateway 137.184.176.1 acquired from 169.254.169.253 Feb 13 16:15:43.363822 ignition[757]: Ignition 2.20.0 Feb 13 16:15:43.363841 ignition[757]: Stage: fetch Feb 13 16:15:43.364166 ignition[757]: no configs at "/usr/lib/ignition/base.d" Feb 13 16:15:43.364187 ignition[757]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 13 16:15:43.364356 ignition[757]: parsed url from cmdline: "" Feb 13 16:15:43.364360 ignition[757]: no config URL provided Feb 13 16:15:43.364366 ignition[757]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 16:15:43.364376 ignition[757]: no config at "/usr/lib/ignition/user.ign" Feb 13 16:15:43.364402 ignition[757]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Feb 13 16:15:43.387611 ignition[757]: GET result: OK Feb 13 16:15:43.387745 ignition[757]: parsing config with SHA512: 078c0eb81d0159fdcb861e3e9acbccb295a3879e19dec5849696ec8c33e83e12bceeb9eba5ab7e4a087975c7d20ec8a19723f170b705ca06a60868e7d3357936 Feb 13 16:15:43.394919 unknown[757]: fetched base config from "system" Feb 13 16:15:43.394934 unknown[757]: fetched base config from "system" Feb 13 16:15:43.396516 ignition[757]: fetch: fetch complete Feb 13 16:15:43.394944 unknown[757]: fetched user config from "digitalocean" Feb 13 16:15:43.396523 ignition[757]: fetch: fetch passed Feb 13 16:15:43.400125 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Feb 13 16:15:43.396605 ignition[757]: Ignition finished successfully Feb 13 16:15:43.408519 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 16:15:43.433350 ignition[765]: Ignition 2.20.0 Feb 13 16:15:43.433366 ignition[765]: Stage: kargs Feb 13 16:15:43.433656 ignition[765]: no configs at "/usr/lib/ignition/base.d" Feb 13 16:15:43.433675 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 13 16:15:43.434915 ignition[765]: kargs: kargs passed Feb 13 16:15:43.438793 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 16:15:43.434989 ignition[765]: Ignition finished successfully Feb 13 16:15:43.447559 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Feb 13 16:15:43.479378 ignition[771]: Ignition 2.20.0 Feb 13 16:15:43.479395 ignition[771]: Stage: disks Feb 13 16:15:43.479648 ignition[771]: no configs at "/usr/lib/ignition/base.d" Feb 13 16:15:43.479659 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 13 16:15:43.491338 ignition[771]: disks: disks passed Feb 13 16:15:43.491432 ignition[771]: Ignition finished successfully Feb 13 16:15:43.496029 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 16:15:43.502892 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 16:15:43.504256 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 16:15:43.506944 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 16:15:43.507817 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 16:15:43.509121 systemd[1]: Reached target basic.target - Basic System. Feb 13 16:15:43.524283 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 16:15:43.566013 systemd-fsck[780]: ROOT: clean, 14/553520 files, 52654/553472 blocks Feb 13 16:15:43.572132 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 16:15:43.589963 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 16:15:43.766097 kernel: EXT4-fs (vda9): mounted filesystem 85ed0b0d-7f0f-4eeb-80d8-6213e9fcc55d r/w with ordered data mode. Quota mode: none. Feb 13 16:15:43.768482 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 16:15:43.769899 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 16:15:43.778486 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 16:15:43.795896 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 16:15:43.801411 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service... Feb 13 16:15:43.806306 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Feb 13 16:15:43.812359 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 16:15:43.815385 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 16:15:43.821881 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (788) Feb 13 16:15:43.818048 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 16:15:43.831571 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 16:15:43.845547 kernel: BTRFS info (device vda6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1 Feb 13 16:15:43.850994 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 16:15:43.851091 kernel: BTRFS info (device vda6): using free space tree Feb 13 16:15:43.891470 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 16:15:43.903013 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 16:15:43.996250 coreos-metadata[790]: Feb 13 16:15:43.991 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Feb 13 16:15:43.998460 initrd-setup-root[818]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 16:15:44.004122 coreos-metadata[790]: Feb 13 16:15:44.003 INFO Fetch successful Feb 13 16:15:44.007084 coreos-metadata[791]: Feb 13 16:15:44.005 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Feb 13 16:15:44.019186 initrd-setup-root[825]: cut: /sysroot/etc/group: No such file or directory Feb 13 16:15:44.022313 coreos-metadata[791]: Feb 13 16:15:44.021 INFO Fetch successful Feb 13 16:15:44.029470 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully. Feb 13 16:15:44.029672 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service. Feb 13 16:15:44.037944 coreos-metadata[791]: Feb 13 16:15:44.034 INFO wrote hostname ci-4152.2.1-9-8a8a313a66 to /sysroot/etc/hostname Feb 13 16:15:44.036289 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 16:15:44.044754 initrd-setup-root[833]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 16:15:44.058351 initrd-setup-root[841]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 16:15:44.270833 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 16:15:44.281515 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 16:15:44.286429 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 16:15:44.299126 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 16:15:44.303520 kernel: BTRFS info (device vda6): last unmount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1 Feb 13 16:15:44.345333 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 16:15:44.363462 ignition[908]: INFO : Ignition 2.20.0 Feb 13 16:15:44.363462 ignition[908]: INFO : Stage: mount Feb 13 16:15:44.363462 ignition[908]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 16:15:44.363462 ignition[908]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 13 16:15:44.367639 ignition[908]: INFO : mount: mount passed Feb 13 16:15:44.367639 ignition[908]: INFO : Ignition finished successfully Feb 13 16:15:44.365903 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 16:15:44.389228 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 16:15:44.419763 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 16:15:44.433264 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (921) Feb 13 16:15:44.437616 kernel: BTRFS info (device vda6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1 Feb 13 16:15:44.437729 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 16:15:44.440137 kernel: BTRFS info (device vda6): using free space tree Feb 13 16:15:44.451341 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 16:15:44.459078 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 16:15:44.485523 ignition[938]: INFO : Ignition 2.20.0 Feb 13 16:15:44.487357 ignition[938]: INFO : Stage: files Feb 13 16:15:44.487357 ignition[938]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 16:15:44.487357 ignition[938]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 13 16:15:44.492400 ignition[938]: DEBUG : files: compiled without relabeling support, skipping Feb 13 16:15:44.493751 ignition[938]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 16:15:44.495082 ignition[938]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 16:15:44.504733 ignition[938]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 16:15:44.506117 ignition[938]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 16:15:44.507161 ignition[938]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 16:15:44.506283 unknown[938]: wrote ssh authorized keys file for user: core Feb 13 16:15:44.513393 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 13 16:15:44.515431 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 13 16:15:44.515431 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Feb 13 16:15:44.515431 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 16:15:44.515431 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 16:15:44.520664 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 16:15:44.520664 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Feb 13 16:15:44.520664 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Feb 13 16:15:44.520664 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Feb 13 16:15:44.520664 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Feb 13 16:15:45.035688 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Feb 13 16:15:45.225063 systemd-networkd[750]: eth1: Gained IPv6LL Feb 13 16:15:45.288122 systemd-networkd[750]: eth0: Gained IPv6LL Feb 13 16:15:45.509679 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Feb 13 16:15:45.509679 ignition[938]: INFO : files: op(8): [started] processing unit "containerd.service" Feb 13 16:15:45.514520 ignition[938]: INFO : files: op(8): op(9): [started] writing systemd drop-in "10-use-cgroupfs.conf" at 
"/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 13 16:15:45.514520 ignition[938]: INFO : files: op(8): op(9): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 13 16:15:45.514520 ignition[938]: INFO : files: op(8): [finished] processing unit "containerd.service" Feb 13 16:15:45.514520 ignition[938]: INFO : files: createResultFile: createFiles: op(a): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 16:15:45.514520 ignition[938]: INFO : files: createResultFile: createFiles: op(a): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 16:15:45.514520 ignition[938]: INFO : files: files passed Feb 13 16:15:45.514520 ignition[938]: INFO : Ignition finished successfully Feb 13 16:15:45.514917 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 16:15:45.545910 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 16:15:45.551248 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 16:15:45.558462 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 16:15:45.558608 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 16:15:45.575600 initrd-setup-root-after-ignition[967]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 16:15:45.575600 initrd-setup-root-after-ignition[967]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 16:15:45.579593 initrd-setup-root-after-ignition[971]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 16:15:45.583014 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 16:15:45.585758 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 16:15:45.599848 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 16:15:45.662047 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 16:15:45.662577 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 16:15:45.664461 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 16:15:45.665895 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 16:15:45.668008 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 16:15:45.679996 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 16:15:45.706634 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 16:15:45.726773 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 16:15:45.743915 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 16:15:45.756182 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 16:15:45.757319 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 16:15:45.758085 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 16:15:45.758355 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 16:15:45.759524 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 16:15:45.760399 systemd[1]: Stopped target basic.target - Basic System. 
Feb 13 16:15:45.761175 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 16:15:45.762004 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 16:15:45.762867 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 16:15:45.763756 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 16:15:45.767277 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 16:15:45.768939 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 16:15:45.785398 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 16:15:45.787416 systemd[1]: Stopped target swap.target - Swaps. Feb 13 16:15:45.789553 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 16:15:45.789845 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 16:15:45.791947 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 16:15:45.793449 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 16:15:45.795336 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 16:15:45.795492 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 16:15:45.796327 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 16:15:45.796549 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 16:15:45.798885 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 16:15:45.799289 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 16:15:45.802479 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 16:15:45.802689 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 16:15:45.804525 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 13 16:15:45.804716 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 16:15:45.847847 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 16:15:45.851723 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 16:15:45.852613 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 16:15:45.852878 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 16:15:45.855652 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 16:15:45.855854 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 16:15:45.882238 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 16:15:45.882496 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 16:15:45.894669 ignition[991]: INFO : Ignition 2.20.0 Feb 13 16:15:45.899233 ignition[991]: INFO : Stage: umount Feb 13 16:15:45.899233 ignition[991]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 16:15:45.899233 ignition[991]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 13 16:15:45.899233 ignition[991]: INFO : umount: umount passed Feb 13 16:15:45.899233 ignition[991]: INFO : Ignition finished successfully Feb 13 16:15:45.901389 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 16:15:45.901806 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Feb 13 16:15:45.903339 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 16:15:45.903412 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 16:15:45.904037 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 16:15:45.904085 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 16:15:45.909010 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 16:15:45.909134 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 16:15:45.910173 systemd[1]: Stopped target network.target - Network. Feb 13 16:15:45.910811 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 16:15:45.910932 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 16:15:45.913578 systemd[1]: Stopped target paths.target - Path Units. Feb 13 16:15:45.916096 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 16:15:45.922567 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 16:15:45.941482 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 16:15:45.961964 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 16:15:45.963429 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 16:15:45.963530 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 16:15:45.965698 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 16:15:45.965801 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 16:15:45.966917 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 16:15:45.967049 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 16:15:45.998620 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 16:15:45.998745 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 16:15:46.000685 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 16:15:46.016883 systemd-networkd[750]: eth0: DHCPv6 lease lost Feb 13 16:15:46.025453 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 16:15:46.027547 systemd-networkd[750]: eth1: DHCPv6 lease lost Feb 13 16:15:46.029042 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 16:15:46.034587 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 16:15:46.034797 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 16:15:46.041355 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 16:15:46.042004 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 16:15:46.057449 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 16:15:46.057580 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 16:15:46.064537 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 16:15:46.064838 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 16:15:46.083583 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 16:15:46.086542 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 16:15:46.086714 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 16:15:46.088483 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Feb 13 16:15:46.095674 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 16:15:46.097195 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 16:15:46.113058 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 16:15:46.116094 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 16:15:46.127880 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 16:15:46.128060 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 16:15:46.130716 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 16:15:46.130864 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 16:15:46.132883 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 16:15:46.132944 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 16:15:46.134493 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 16:15:46.134592 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 16:15:46.137618 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 16:15:46.139280 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 16:15:46.140946 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 16:15:46.141153 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 16:15:46.152507 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 16:15:46.153425 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 16:15:46.153623 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 16:15:46.154553 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 16:15:46.154628 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 16:15:46.157509 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 16:15:46.157582 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 16:15:46.159525 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 16:15:46.159604 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 16:15:46.160449 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 16:15:46.160532 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 16:15:46.162069 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 16:15:46.162141 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 16:15:46.164615 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 16:15:46.164724 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 16:15:46.183439 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 16:15:46.183652 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 16:15:46.185915 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 16:15:46.189567 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 16:15:46.230963 systemd[1]: Switching root. 
Feb 13 16:15:46.291057 systemd-journald[182]: Journal stopped Feb 13 16:15:48.422576 systemd-journald[182]: Received SIGTERM from PID 1 (systemd). Feb 13 16:15:48.422695 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 16:15:48.422713 kernel: SELinux: policy capability open_perms=1 Feb 13 16:15:48.422735 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 16:15:48.422764 kernel: SELinux: policy capability always_check_network=0 Feb 13 16:15:48.422776 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 16:15:48.422788 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 16:15:48.422799 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 16:15:48.422822 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 16:15:48.422841 systemd[1]: Successfully loaded SELinux policy in 47.243ms. Feb 13 16:15:48.422872 kernel: audit: type=1403 audit(1739463346.635:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 16:15:48.422892 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 20.002ms. Feb 13 16:15:48.422918 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 16:15:48.422936 systemd[1]: Detected virtualization kvm. Feb 13 16:15:48.422954 systemd[1]: Detected architecture x86-64. Feb 13 16:15:48.422974 systemd[1]: Detected first boot. Feb 13 16:15:48.423002 systemd[1]: Hostname set to <ci-4152.2.1-9-8a8a313a66>. Feb 13 16:15:48.423021 systemd[1]: Initializing machine ID from VM UUID. Feb 13 16:15:48.423038 zram_generator::config[1055]: No configuration found. Feb 13 16:15:48.423054 systemd[1]: Populated /etc with preset unit settings. Feb 13 16:15:48.423071 systemd[1]: Queued start job for default target multi-user.target. Feb 13 16:15:48.423084 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 16:15:48.423098 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 16:15:48.423111 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 16:15:48.423122 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 16:15:48.423133 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 16:15:48.423148 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 16:15:48.423170 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 16:15:48.423190 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 16:15:48.423959 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 16:15:48.423990 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 16:15:48.424021 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 16:15:48.424035 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 16:15:48.424048 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 16:15:48.424061 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 16:15:48.424073 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 16:15:48.424084 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 16:15:48.424105 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 16:15:48.424118 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 16:15:48.424137 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 16:15:48.424151 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 16:15:48.424163 systemd[1]: Reached target slices.target - Slice Units. Feb 13 16:15:48.424175 systemd[1]: Reached target swap.target - Swaps. Feb 13 16:15:48.424189 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 16:15:48.424234 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 16:15:48.424256 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 16:15:48.424274 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 16:15:48.424296 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 16:15:48.424314 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 16:15:48.424332 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 16:15:48.424350 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 16:15:48.424369 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 16:15:48.424388 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 16:15:48.424415 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 16:15:48.424436 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 16:15:48.425543 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 16:15:48.425629 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 16:15:48.425651 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 16:15:48.425672 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 16:15:48.425692 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 16:15:48.425712 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 16:15:48.425851 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 16:15:48.425873 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 16:15:48.425894 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 16:15:48.425915 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 16:15:48.425934 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 16:15:48.425955 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 16:15:48.425974 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Feb 13 16:15:48.425988 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 13 16:15:48.426001 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Feb 13 16:15:48.426018 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 16:15:48.426035 kernel: fuse: init (API version 7.39) Feb 13 16:15:48.426049 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 16:15:48.426061 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 16:15:48.426073 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 16:15:48.426084 kernel: loop: module loaded Feb 13 16:15:48.426096 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 16:15:48.426108 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 16:15:48.426123 kernel: ACPI: bus type drm_connector registered Feb 13 16:15:48.426136 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 16:15:48.426154 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 16:15:48.426172 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 16:15:48.426185 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 16:15:48.426197 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 16:15:48.426226 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 16:15:48.426239 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 16:15:48.426313 systemd-journald[1149]: Collecting audit messages is disabled. Feb 13 16:15:48.426357 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 16:15:48.426377 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 16:15:48.426396 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 16:15:48.426411 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 16:15:48.426422 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 16:15:48.426441 systemd-journald[1149]: Journal started Feb 13 16:15:48.426516 systemd-journald[1149]: Runtime Journal (/run/log/journal/9584a5db85fd438eb35440ff586df530) is 4.9M, max 39.3M, 34.4M free. Feb 13 16:15:48.431508 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 16:15:48.431617 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 16:15:48.439652 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 16:15:48.438486 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 16:15:48.438800 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 16:15:48.440903 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 16:15:48.441780 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 16:15:48.443572 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 16:15:48.443929 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 16:15:48.445503 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Feb 13 16:15:48.447971 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 16:15:48.449795 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 16:15:48.471723 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 16:15:48.482610 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 16:15:48.517625 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 16:15:48.519749 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 16:15:48.534670 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 16:15:48.559855 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 16:15:48.561310 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 16:15:48.578575 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 16:15:48.584170 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 16:15:48.605169 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 16:15:48.624787 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 16:15:48.637486 systemd-journald[1149]: Time spent on flushing to /var/log/journal/9584a5db85fd438eb35440ff586df530 is 123.877ms for 957 entries. Feb 13 16:15:48.637486 systemd-journald[1149]: System Journal (/var/log/journal/9584a5db85fd438eb35440ff586df530) is 8.0M, max 195.6M, 187.6M free. Feb 13 16:15:48.816005 systemd-journald[1149]: Received client request to flush runtime journal. Feb 13 16:15:48.631443 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 16:15:48.633849 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 16:15:48.635582 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 16:15:48.660061 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 16:15:48.678105 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 16:15:48.684041 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 16:15:48.738135 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 16:15:48.769576 udevadm[1200]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 16:15:48.779188 systemd-tmpfiles[1196]: ACLs are not supported, ignoring. Feb 13 16:15:48.779241 systemd-tmpfiles[1196]: ACLs are not supported, ignoring. Feb 13 16:15:48.796832 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 16:15:48.818879 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 16:15:48.831143 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 16:15:48.915272 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Feb 13 16:15:48.936627 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 16:15:48.991321 systemd-tmpfiles[1217]: ACLs are not supported, ignoring. Feb 13 16:15:48.991353 systemd-tmpfiles[1217]: ACLs are not supported, ignoring. Feb 13 16:15:49.000584 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 16:15:49.895450 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 16:15:49.911929 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 16:15:49.993899 systemd-udevd[1223]: Using default interface naming scheme 'v255'. Feb 13 16:15:50.033572 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 16:15:50.077682 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 16:15:50.110610 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 16:15:50.196503 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 16:15:50.196745 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 16:15:50.208644 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 16:15:50.225787 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 16:15:50.244699 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 16:15:50.249702 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 16:15:50.249807 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 16:15:50.249887 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 16:15:50.260784 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 16:15:50.261267 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 16:15:50.271826 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 16:15:50.272278 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 16:15:50.290783 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 16:15:50.308924 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 16:15:50.316759 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 16:15:50.328612 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 16:15:50.341533 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1226) Feb 13 16:15:50.351196 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Feb 13 16:15:50.381794 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Feb 13 16:15:50.536883 systemd-networkd[1224]: lo: Link UP Feb 13 16:15:50.536903 systemd-networkd[1224]: lo: Gained carrier Feb 13 16:15:50.543391 systemd-networkd[1224]: Enumeration completed Feb 13 16:15:50.544113 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 16:15:50.544466 systemd-networkd[1224]: eth0: Configuring with /run/systemd/network/10-82:64:5b:87:86:2d.network. Feb 13 16:15:50.545932 systemd-networkd[1224]: eth1: Configuring with /run/systemd/network/10-5e:3d:4c:75:75:43.network. Feb 13 16:15:50.546844 systemd-networkd[1224]: eth0: Link UP Feb 13 16:15:50.546858 systemd-networkd[1224]: eth0: Gained carrier Feb 13 16:15:50.557584 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 16:15:50.558042 systemd-networkd[1224]: eth1: Link UP Feb 13 16:15:50.558051 systemd-networkd[1224]: eth1: Gained carrier Feb 13 16:15:50.610738 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 16:15:50.648402 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Feb 13 16:15:50.662923 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Feb 13 16:15:50.675248 kernel: ACPI: button: Power Button [PWRF] Feb 13 16:15:50.726353 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Feb 13 16:15:50.773440 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 16:15:50.785757 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 16:15:50.847813 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Feb 13 16:15:50.847960 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Feb 13 16:15:50.857994 kernel: Console: switching to colour dummy device 80x25 Feb 13 16:15:50.870640 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Feb 13 16:15:50.870813 kernel: [drm] features: -context_init Feb 13 16:15:50.870847 kernel: [drm] number of scanouts: 1 Feb 13 16:15:50.870875 kernel: [drm] number of cap sets: 0 Feb 13 16:15:50.881302 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Feb 13 16:15:50.907246 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Feb 13 16:15:50.907430 kernel: Console: switching to colour frame buffer device 128x48 Feb 13 16:15:50.907111 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 16:15:50.908195 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 16:15:50.940250 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Feb 13 16:15:50.984564 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 16:15:50.999629 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 16:15:51.000403 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 16:15:51.004630 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 16:15:51.177905 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 16:15:51.202187 kernel: EDAC MC: Ver: 3.0.0 Feb 13 16:15:51.246057 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 16:15:51.254842 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 16:15:51.306518 lvm[1288]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
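The networkd entries above show eth0 and eth1 being configured from per-interface units named 10-<MAC>.network under /run/systemd/network, written earlier by the cmdline/metadata generators. A minimal sketch of such a unit, assuming a typical match-on-MAC, DHCP-style configuration; the actual contents generated on this droplet are not visible in the log:

```python
# Hedged sketch: render a minimal systemd-networkd unit of the shape referenced above
# (/run/systemd/network/10-<MAC>.network). The [Match]/[Network] keys are standard
# systemd.network options; whether these generated units used DHCP or static addressing
# is an assumption, not something recoverable from the log.
from pathlib import Path

def render_network_unit(mac: str, dhcp: str = "yes") -> str:
    return (
        "[Match]\n"
        f"MACAddress={mac}\n"
        "\n"
        "[Network]\n"
        f"DHCP={dhcp}\n"
    )

def write_unit(mac: str, directory: str = "/run/systemd/network") -> Path:
    unit_path = Path(directory) / f"10-{mac}.network"
    unit_path.write_text(render_network_unit(mac))
    return unit_path

if __name__ == "__main__":
    # Print rather than write when experimenting outside a droplet.
    print(render_network_unit("82:64:5b:87:86:2d"))
```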
Feb 13 16:15:51.345376 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 16:15:51.347063 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 16:15:51.369515 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 16:15:51.389062 lvm[1291]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 16:15:51.422077 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 16:15:51.427024 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 16:15:51.435391 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Feb 13 16:15:51.435551 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 16:15:51.435591 systemd[1]: Reached target machines.target - Containers. Feb 13 16:15:51.438683 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 16:15:51.466919 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 16:15:51.494900 kernel: ISO 9660 Extensions: RRIP_1991A Feb 13 16:15:51.498026 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Feb 13 16:15:51.501068 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 16:15:51.506929 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 16:15:51.515678 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 16:15:51.534573 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 16:15:51.537570 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 16:15:51.550596 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 16:15:51.563651 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 16:15:51.571279 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 16:15:51.605852 kernel: loop0: detected capacity change from 0 to 138184 Feb 13 16:15:51.639873 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 16:15:51.645115 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 16:15:51.663985 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 16:15:51.702606 kernel: loop1: detected capacity change from 0 to 140992 Feb 13 16:15:51.779903 kernel: loop2: detected capacity change from 0 to 211296 Feb 13 16:15:51.877359 kernel: loop3: detected capacity change from 0 to 8 Feb 13 16:15:51.880230 systemd-networkd[1224]: eth1: Gained IPv6LL Feb 13 16:15:51.883557 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
Feb 13 16:15:51.936248 kernel: loop4: detected capacity change from 0 to 138184 Feb 13 16:15:51.971479 kernel: loop5: detected capacity change from 0 to 140992 Feb 13 16:15:52.008764 kernel: loop6: detected capacity change from 0 to 211296 Feb 13 16:15:52.068257 kernel: loop7: detected capacity change from 0 to 8 Feb 13 16:15:52.070755 (sd-merge)[1318]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Feb 13 16:15:52.074264 (sd-merge)[1318]: Merged extensions into '/usr'. Feb 13 16:15:52.075308 systemd-networkd[1224]: eth0: Gained IPv6LL Feb 13 16:15:52.086974 systemd[1]: Reloading requested from client PID 1307 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 16:15:52.087514 systemd[1]: Reloading... Feb 13 16:15:52.253451 zram_generator::config[1347]: No configuration found. Feb 13 16:15:52.627151 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 16:15:52.654243 ldconfig[1304]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 16:15:52.754214 systemd[1]: Reloading finished in 665 ms. Feb 13 16:15:52.774777 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 16:15:52.779185 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 16:15:52.802673 systemd[1]: Starting ensure-sysext.service... Feb 13 16:15:52.819494 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 16:15:52.833564 systemd[1]: Reloading requested from client PID 1395 ('systemctl') (unit ensure-sysext.service)... Feb 13 16:15:52.833596 systemd[1]: Reloading... Feb 13 16:15:52.889976 systemd-tmpfiles[1396]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 16:15:52.891542 systemd-tmpfiles[1396]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 16:15:52.893094 systemd-tmpfiles[1396]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 16:15:52.894038 systemd-tmpfiles[1396]: ACLs are not supported, ignoring. Feb 13 16:15:52.894133 systemd-tmpfiles[1396]: ACLs are not supported, ignoring. Feb 13 16:15:52.902400 systemd-tmpfiles[1396]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 16:15:52.902418 systemd-tmpfiles[1396]: Skipping /boot Feb 13 16:15:52.930878 systemd-tmpfiles[1396]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 16:15:52.931107 systemd-tmpfiles[1396]: Skipping /boot Feb 13 16:15:52.991262 zram_generator::config[1424]: No configuration found. Feb 13 16:15:53.251452 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 16:15:53.347767 systemd[1]: Reloading finished in 513 ms. Feb 13 16:15:53.371072 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 16:15:53.404491 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 16:15:53.419511 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
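The (sd-merge) entries above are systemd-sysext discovering the four extension images and overlaying them onto /usr, which is why the reload that follows suddenly knows about units such as docker.socket. A small sketch of the discovery step, assuming the standard sysext search directories documented for systemd; this is only an illustration of where the .raw images come from, not how sd-merge is implemented:

```python
# Illustration: enumerate sysext images the way systemd-sysext discovers them.
# The search directories are the standard documented ones (an assumption here);
# on the system above, /etc/extensions/kubernetes.raw is the symlink written by Ignition.
from pathlib import Path

SYSEXT_DIRS = [
    "/etc/extensions",
    "/run/extensions",
    "/var/lib/extensions",
    "/usr/lib/extensions",
]

def discover_extensions() -> list[Path]:
    images: list[Path] = []
    for directory in map(Path, SYSEXT_DIRS):
        if directory.is_dir():
            # Raw disk images (*.raw) and plain directory trees are both accepted.
            images.extend(sorted(directory.glob("*.raw")))
            images.extend(p for p in sorted(directory.iterdir()) if p.is_dir())
    return images

if __name__ == "__main__":
    for image in discover_extensions():
        print(image)
```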
Feb 13 16:15:53.438502 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 16:15:53.467131 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 16:15:53.487343 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 16:15:53.504658 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 16:15:53.504917 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 16:15:53.510613 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 16:15:53.531672 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 16:15:53.553789 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 16:15:53.558385 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 16:15:53.558720 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 16:15:53.584046 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 16:15:53.590171 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 16:15:53.590837 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 16:15:53.606047 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 16:15:53.606426 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 16:15:53.620414 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 16:15:53.624668 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 16:15:53.634384 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 16:15:53.657588 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 16:15:53.671635 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 16:15:53.673384 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 16:15:53.679681 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 16:15:53.695615 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 16:15:53.720702 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 16:15:53.756032 augenrules[1521]: No rules Feb 13 16:15:53.757348 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 16:15:53.761761 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 16:15:53.783733 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 16:15:53.786848 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 16:15:53.787086 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Feb 13 16:15:53.791726 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 16:15:53.792035 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 16:15:53.798923 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 16:15:53.799273 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 16:15:53.802140 systemd-resolved[1479]: Positive Trust Anchors: Feb 13 16:15:53.802636 systemd-resolved[1479]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 16:15:53.802691 systemd-resolved[1479]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 16:15:53.808645 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 16:15:53.809900 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 16:15:53.819636 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 16:15:53.819914 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 16:15:53.832719 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 16:15:53.833070 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 16:15:53.837933 systemd-resolved[1479]: Using system hostname 'ci-4152.2.1-9-8a8a313a66'. Feb 13 16:15:53.839980 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 16:15:53.847287 systemd[1]: Finished ensure-sysext.service. Feb 13 16:15:53.859620 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 16:15:53.882129 systemd[1]: Reached target network.target - Network. Feb 13 16:15:53.891961 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 16:15:53.892920 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 16:15:53.894010 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 16:15:53.894145 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 16:15:53.934668 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 16:15:54.029656 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 16:15:54.030583 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 16:15:54.031506 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 16:15:54.032332 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 16:15:54.033131 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 16:15:54.033996 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). 
Feb 13 16:15:54.034128 systemd[1]: Reached target paths.target - Path Units. Feb 13 16:15:54.034589 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 16:15:54.035496 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 16:15:54.036492 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 16:15:54.038713 systemd[1]: Reached target timers.target - Timer Units. Feb 13 16:15:54.043540 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 16:15:54.052409 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 16:15:54.059454 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 16:15:54.066162 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 16:15:54.067945 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 16:15:54.068725 systemd[1]: Reached target basic.target - Basic System. Feb 13 16:15:54.072402 systemd[1]: System is tainted: cgroupsv1 Feb 13 16:15:54.072499 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 16:15:54.072542 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 16:15:54.084493 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 16:15:54.091400 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 16:15:54.918991 systemd-timesyncd[1541]: Contacted time server 104.167.215.195:123 (0.flatcar.pool.ntp.org). Feb 13 16:15:54.919067 systemd-timesyncd[1541]: Initial clock synchronization to Thu 2025-02-13 16:15:54.918742 UTC. Feb 13 16:15:54.919149 systemd-resolved[1479]: Clock change detected. Flushing caches. Feb 13 16:15:54.947357 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 16:15:54.962445 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 16:15:54.969254 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 16:15:54.971866 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 16:15:54.984070 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:15:55.012169 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 16:15:55.029155 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 16:15:55.036329 coreos-metadata[1546]: Feb 13 16:15:55.035 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Feb 13 16:15:55.044058 jq[1551]: false Feb 13 16:15:55.050670 dbus-daemon[1548]: [system] SELinux support is enabled Feb 13 16:15:55.060208 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 16:15:55.068258 coreos-metadata[1546]: Feb 13 16:15:55.066 INFO Fetch successful Feb 13 16:15:55.074075 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
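The timesyncd entries above record the first NTP exchange with 0.flatcar.pool.ntp.org (104.167.215.195:123) and the resulting clock step, which is why the journal timestamps jump and systemd-resolved flushes its caches. A bare-bones SNTP query in Python for illustration only; real systemd-timesyncd does far more (poll intervals, delay and dispersion handling), this just shows the wire exchange:

```python
# Minimal SNTP-style client query, illustrating the exchange systemd-timesyncd performs
# above. Not a substitute for a real NTP client.
import socket
import struct
import time

NTP_UNIX_OFFSET = 2208988800  # seconds between the NTP epoch (1900) and the Unix epoch (1970)

def sntp_query(server: str = "0.flatcar.pool.ntp.org", port: int = 123) -> int:
    packet = b"\x1b" + 47 * b"\x00"  # LI=0, VN=3, Mode=3 (client request)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(5.0)
        sock.sendto(packet, (server, port))
        data, _ = sock.recvfrom(512)
    # Transmit Timestamp: seconds since 1900 in the 32-bit word at offset 40.
    seconds_since_1900 = struct.unpack("!I", data[40:44])[0]
    return seconds_since_1900 - NTP_UNIX_OFFSET

if __name__ == "__main__":
    server_time = sntp_query()
    print("server says:", time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime(server_time)))
    print("local clock:", time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime()))
```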
Feb 13 16:15:55.087515 extend-filesystems[1552]: Found loop4 Feb 13 16:15:55.087515 extend-filesystems[1552]: Found loop5 Feb 13 16:15:55.087515 extend-filesystems[1552]: Found loop6 Feb 13 16:15:55.087515 extend-filesystems[1552]: Found loop7 Feb 13 16:15:55.087515 extend-filesystems[1552]: Found vda Feb 13 16:15:55.087515 extend-filesystems[1552]: Found vda1 Feb 13 16:15:55.087515 extend-filesystems[1552]: Found vda2 Feb 13 16:15:55.087515 extend-filesystems[1552]: Found vda3 Feb 13 16:15:55.087515 extend-filesystems[1552]: Found usr Feb 13 16:15:55.087515 extend-filesystems[1552]: Found vda4 Feb 13 16:15:55.087515 extend-filesystems[1552]: Found vda6 Feb 13 16:15:55.087515 extend-filesystems[1552]: Found vda7 Feb 13 16:15:55.087515 extend-filesystems[1552]: Found vda9 Feb 13 16:15:55.087515 extend-filesystems[1552]: Checking size of /dev/vda9 Feb 13 16:15:55.100256 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 16:15:55.108858 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 16:15:55.129968 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 16:15:55.167135 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 16:15:55.192818 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 16:15:55.232160 extend-filesystems[1552]: Resized partition /dev/vda9 Feb 13 16:15:55.240973 extend-filesystems[1586]: resize2fs 1.47.1 (20-May-2024) Feb 13 16:15:55.234508 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 16:15:55.258292 jq[1580]: true Feb 13 16:15:55.274415 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Feb 13 16:15:55.234803 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 16:15:55.236864 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 16:15:55.237275 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 16:15:55.251078 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 16:15:55.257105 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 16:15:55.261689 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 16:15:55.292588 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 16:15:55.292696 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 16:15:55.304609 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 16:15:55.304752 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Feb 13 16:15:55.304791 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Feb 13 16:15:55.322499 update_engine[1569]: I20250213 16:15:55.320050 1569 main.cc:92] Flatcar Update Engine starting Feb 13 16:15:55.345068 update_engine[1569]: I20250213 16:15:55.343232 1569 update_check_scheduler.cc:74] Next update check in 3m47s Feb 13 16:15:55.341235 (ntainerd)[1602]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 16:15:55.356080 systemd[1]: Started update-engine.service - Update Engine. Feb 13 16:15:55.360583 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 16:15:55.373524 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 16:15:55.382168 jq[1592]: true Feb 13 16:15:55.405522 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 16:15:55.406476 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 16:15:55.491967 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1610) Feb 13 16:15:55.586076 systemd-logind[1562]: New seat seat0. Feb 13 16:15:55.608728 systemd-logind[1562]: Watching system buttons on /dev/input/event1 (Power Button) Feb 13 16:15:55.612205 systemd-logind[1562]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 16:15:55.612615 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 16:15:55.634402 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Feb 13 16:15:55.638300 bash[1633]: Updated "/home/core/.ssh/authorized_keys" Feb 13 16:15:55.650485 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 16:15:55.675453 systemd[1]: Starting sshkeys.service... Feb 13 16:15:55.717708 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 16:15:55.750245 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 16:15:55.766227 extend-filesystems[1586]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 16:15:55.766227 extend-filesystems[1586]: old_desc_blocks = 1, new_desc_blocks = 8 Feb 13 16:15:55.766227 extend-filesystems[1586]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Feb 13 16:15:55.793137 extend-filesystems[1552]: Resized filesystem in /dev/vda9 Feb 13 16:15:55.793137 extend-filesystems[1552]: Found vdb Feb 13 16:15:55.773632 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 16:15:55.775521 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
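extend-filesystems.service performed an online grow of the root filesystem (/dev/vda9, 553472 to 15121403 blocks). A minimal sketch of the equivalent manual steps, using the device name from the log; ext4 supports resizing while mounted, which is why no reboot or unmount appears above:

# Check the current size of the root filesystem.
df -h /
# Grow the ext4 filesystem on /dev/vda9 to fill the enlarged partition.
resize2fs /dev/vda9
# Confirm the new size.
df -h /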
Feb 13 16:15:55.981058 coreos-metadata[1641]: Feb 13 16:15:55.977 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Feb 13 16:15:56.003295 coreos-metadata[1641]: Feb 13 16:15:56.001 INFO Fetch successful Feb 13 16:15:56.048580 unknown[1641]: wrote ssh authorized keys file for user: core Feb 13 16:15:56.089700 containerd[1602]: time="2025-02-13T16:15:56.087849239Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 16:15:56.090830 locksmithd[1608]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 16:15:56.119061 update-ssh-keys[1662]: Updated "/home/core/.ssh/authorized_keys" Feb 13 16:15:56.115221 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 16:15:56.130646 systemd[1]: Finished sshkeys.service. Feb 13 16:15:56.235037 containerd[1602]: time="2025-02-13T16:15:56.231875482Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 16:15:56.242629 containerd[1602]: time="2025-02-13T16:15:56.242533989Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 16:15:56.242629 containerd[1602]: time="2025-02-13T16:15:56.242615267Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 16:15:56.242629 containerd[1602]: time="2025-02-13T16:15:56.242642752Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 16:15:56.242973 containerd[1602]: time="2025-02-13T16:15:56.242947013Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 16:15:56.243034 containerd[1602]: time="2025-02-13T16:15:56.242989433Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 16:15:56.243121 containerd[1602]: time="2025-02-13T16:15:56.243098478Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 16:15:56.243143 containerd[1602]: time="2025-02-13T16:15:56.243126136Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 16:15:56.243540 containerd[1602]: time="2025-02-13T16:15:56.243500455Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 16:15:56.243540 containerd[1602]: time="2025-02-13T16:15:56.243539053Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 16:15:56.243616 containerd[1602]: time="2025-02-13T16:15:56.243564697Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 16:15:56.243616 containerd[1602]: time="2025-02-13T16:15:56.243583123Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Feb 13 16:15:56.244244 containerd[1602]: time="2025-02-13T16:15:56.244147135Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 16:15:56.244645 containerd[1602]: time="2025-02-13T16:15:56.244617748Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 16:15:56.246820 containerd[1602]: time="2025-02-13T16:15:56.244899212Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 16:15:56.246968 containerd[1602]: time="2025-02-13T16:15:56.246842345Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 16:15:56.249458 containerd[1602]: time="2025-02-13T16:15:56.249368876Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 16:15:56.249616 containerd[1602]: time="2025-02-13T16:15:56.249533097Z" level=info msg="metadata content store policy set" policy=shared Feb 13 16:15:56.270007 containerd[1602]: time="2025-02-13T16:15:56.267590718Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 16:15:56.271089 containerd[1602]: time="2025-02-13T16:15:56.270051375Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 16:15:56.271089 containerd[1602]: time="2025-02-13T16:15:56.270120143Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 16:15:56.271089 containerd[1602]: time="2025-02-13T16:15:56.270146427Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 16:15:56.271089 containerd[1602]: time="2025-02-13T16:15:56.270197184Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 16:15:56.271089 containerd[1602]: time="2025-02-13T16:15:56.270807994Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 16:15:56.275951 containerd[1602]: time="2025-02-13T16:15:56.273002624Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 16:15:56.275951 containerd[1602]: time="2025-02-13T16:15:56.273281212Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 16:15:56.275951 containerd[1602]: time="2025-02-13T16:15:56.273313459Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 16:15:56.275951 containerd[1602]: time="2025-02-13T16:15:56.273339987Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 16:15:56.275951 containerd[1602]: time="2025-02-13T16:15:56.273361900Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 16:15:56.275951 containerd[1602]: time="2025-02-13T16:15:56.273376315Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Feb 13 16:15:56.275951 containerd[1602]: time="2025-02-13T16:15:56.273390247Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 16:15:56.275951 containerd[1602]: time="2025-02-13T16:15:56.273408308Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 16:15:56.275951 containerd[1602]: time="2025-02-13T16:15:56.273423653Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 16:15:56.275951 containerd[1602]: time="2025-02-13T16:15:56.273441992Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 16:15:56.275951 containerd[1602]: time="2025-02-13T16:15:56.273456842Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 16:15:56.275951 containerd[1602]: time="2025-02-13T16:15:56.273469745Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 16:15:56.275951 containerd[1602]: time="2025-02-13T16:15:56.273495990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 16:15:56.275951 containerd[1602]: time="2025-02-13T16:15:56.273536481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 16:15:56.276547 containerd[1602]: time="2025-02-13T16:15:56.273558364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 16:15:56.276547 containerd[1602]: time="2025-02-13T16:15:56.273580090Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 16:15:56.276547 containerd[1602]: time="2025-02-13T16:15:56.273600142Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 16:15:56.276547 containerd[1602]: time="2025-02-13T16:15:56.273614718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 16:15:56.276547 containerd[1602]: time="2025-02-13T16:15:56.273627354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 16:15:56.276547 containerd[1602]: time="2025-02-13T16:15:56.273640620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 16:15:56.276547 containerd[1602]: time="2025-02-13T16:15:56.273655736Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 16:15:56.276547 containerd[1602]: time="2025-02-13T16:15:56.273672152Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 16:15:56.276547 containerd[1602]: time="2025-02-13T16:15:56.273684117Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 16:15:56.276547 containerd[1602]: time="2025-02-13T16:15:56.273696273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 16:15:56.276547 containerd[1602]: time="2025-02-13T16:15:56.273708859Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Feb 13 16:15:56.276547 containerd[1602]: time="2025-02-13T16:15:56.273723133Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 16:15:56.276547 containerd[1602]: time="2025-02-13T16:15:56.273750560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 16:15:56.276547 containerd[1602]: time="2025-02-13T16:15:56.273764315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 16:15:56.276547 containerd[1602]: time="2025-02-13T16:15:56.273775172Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 16:15:56.277127 containerd[1602]: time="2025-02-13T16:15:56.273836524Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 16:15:56.277127 containerd[1602]: time="2025-02-13T16:15:56.273857506Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 16:15:56.277127 containerd[1602]: time="2025-02-13T16:15:56.273871142Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 16:15:56.277127 containerd[1602]: time="2025-02-13T16:15:56.273883078Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 16:15:56.277127 containerd[1602]: time="2025-02-13T16:15:56.273894658Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 16:15:56.277127 containerd[1602]: time="2025-02-13T16:15:56.273947786Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 16:15:56.277127 containerd[1602]: time="2025-02-13T16:15:56.273965918Z" level=info msg="NRI interface is disabled by configuration." Feb 13 16:15:56.277127 containerd[1602]: time="2025-02-13T16:15:56.273977605Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 16:15:56.277385 containerd[1602]: time="2025-02-13T16:15:56.274354084Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 16:15:56.277385 containerd[1602]: time="2025-02-13T16:15:56.274434865Z" level=info msg="Connect containerd service" Feb 13 16:15:56.277385 containerd[1602]: time="2025-02-13T16:15:56.274494759Z" level=info msg="using legacy CRI server" Feb 13 16:15:56.277385 containerd[1602]: time="2025-02-13T16:15:56.274514033Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 16:15:56.277385 containerd[1602]: time="2025-02-13T16:15:56.274698344Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 16:15:56.283951 containerd[1602]: time="2025-02-13T16:15:56.280682680Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 
16:15:56.283951 containerd[1602]: time="2025-02-13T16:15:56.280900218Z" level=info msg="Start subscribing containerd event" Feb 13 16:15:56.283951 containerd[1602]: time="2025-02-13T16:15:56.281020660Z" level=info msg="Start recovering state" Feb 13 16:15:56.283951 containerd[1602]: time="2025-02-13T16:15:56.281131973Z" level=info msg="Start event monitor" Feb 13 16:15:56.283951 containerd[1602]: time="2025-02-13T16:15:56.281209888Z" level=info msg="Start snapshots syncer" Feb 13 16:15:56.283951 containerd[1602]: time="2025-02-13T16:15:56.281230533Z" level=info msg="Start cni network conf syncer for default" Feb 13 16:15:56.283951 containerd[1602]: time="2025-02-13T16:15:56.281260338Z" level=info msg="Start streaming server" Feb 13 16:15:56.283951 containerd[1602]: time="2025-02-13T16:15:56.283183184Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 16:15:56.283951 containerd[1602]: time="2025-02-13T16:15:56.283274729Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 16:15:56.283951 containerd[1602]: time="2025-02-13T16:15:56.283370321Z" level=info msg="containerd successfully booted in 0.208418s" Feb 13 16:15:56.283599 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 16:15:56.380563 sshd_keygen[1590]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 16:15:56.422806 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 16:15:56.434543 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 16:15:56.476184 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 16:15:56.476515 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 16:15:56.495248 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 16:15:56.535967 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 16:15:56.550967 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 16:15:56.569488 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 16:15:56.573806 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 16:15:57.321275 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:15:57.326844 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 16:15:57.330520 systemd[1]: Startup finished in 9.628s (kernel) + 9.915s (userspace) = 19.543s. Feb 13 16:15:57.341355 (kubelet)[1700]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 16:15:58.427119 kubelet[1700]: E0213 16:15:58.426986 1700 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 16:15:58.430885 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 16:15:58.431361 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 16:16:04.083460 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 16:16:04.103517 systemd[1]: Started sshd@0-137.184.191.138:22-139.178.89.65:35788.service - OpenSSH per-connection server daemon (139.178.89.65:35788). 
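The kubelet exit above is expected at this stage: the unit starts before the node has joined a cluster, so /var/lib/kubelet/config.yaml does not exist yet (on a kubeadm-managed node it is written during init/join). For illustration only, a minimal sketch of what such a KubeletConfiguration file can look like; the values are assumptions, not what was later installed on this host:

# Hypothetical minimal kubelet config; normally generated by
# 'kubeadm init' / 'kubeadm join' rather than written by hand.
cat <<'EOF' > /var/lib/kubelet/config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs          # must match the container runtime's cgroup manager
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
staticPodPath: /etc/kubernetes/manifests
EOF
systemctl restart kubelet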
Feb 13 16:16:04.269694 sshd[1713]: Accepted publickey for core from 139.178.89.65 port 35788 ssh2: RSA SHA256:AMPu2lZjn4SqDYANHPtTget7vBQBooUjf0mriNIzIUY Feb 13 16:16:04.274304 sshd-session[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:16:04.300221 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 16:16:04.301017 systemd-logind[1562]: New session 1 of user core. Feb 13 16:16:04.334503 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 16:16:04.361123 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 16:16:04.378590 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 16:16:04.388281 (systemd)[1719]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 16:16:04.550043 systemd[1719]: Queued start job for default target default.target. Feb 13 16:16:04.550723 systemd[1719]: Created slice app.slice - User Application Slice. Feb 13 16:16:04.550769 systemd[1719]: Reached target paths.target - Paths. Feb 13 16:16:04.550791 systemd[1719]: Reached target timers.target - Timers. Feb 13 16:16:04.558131 systemd[1719]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 16:16:04.586053 systemd[1719]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 16:16:04.586165 systemd[1719]: Reached target sockets.target - Sockets. Feb 13 16:16:04.586186 systemd[1719]: Reached target basic.target - Basic System. Feb 13 16:16:04.586257 systemd[1719]: Reached target default.target - Main User Target. Feb 13 16:16:04.586339 systemd[1719]: Startup finished in 185ms. Feb 13 16:16:04.586587 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 16:16:04.593021 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 16:16:04.679532 systemd[1]: Started sshd@1-137.184.191.138:22-139.178.89.65:40780.service - OpenSSH per-connection server daemon (139.178.89.65:40780). Feb 13 16:16:04.763820 sshd[1731]: Accepted publickey for core from 139.178.89.65 port 40780 ssh2: RSA SHA256:AMPu2lZjn4SqDYANHPtTget7vBQBooUjf0mriNIzIUY Feb 13 16:16:04.769511 sshd-session[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:16:04.781877 systemd-logind[1562]: New session 2 of user core. Feb 13 16:16:04.788712 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 16:16:04.863942 sshd[1734]: Connection closed by 139.178.89.65 port 40780 Feb 13 16:16:04.864965 sshd-session[1731]: pam_unix(sshd:session): session closed for user core Feb 13 16:16:04.871155 systemd[1]: sshd@1-137.184.191.138:22-139.178.89.65:40780.service: Deactivated successfully. Feb 13 16:16:04.878501 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 16:16:04.880858 systemd-logind[1562]: Session 2 logged out. Waiting for processes to exit. Feb 13 16:16:04.894061 systemd[1]: Started sshd@2-137.184.191.138:22-139.178.89.65:40790.service - OpenSSH per-connection server daemon (139.178.89.65:40790). Feb 13 16:16:04.897331 systemd-logind[1562]: Removed session 2. Feb 13 16:16:04.978623 sshd[1739]: Accepted publickey for core from 139.178.89.65 port 40790 ssh2: RSA SHA256:AMPu2lZjn4SqDYANHPtTget7vBQBooUjf0mriNIzIUY Feb 13 16:16:04.985977 sshd-session[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:16:04.993335 systemd-logind[1562]: New session 3 of user core. 
Feb 13 16:16:05.003491 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 16:16:05.084662 sshd[1742]: Connection closed by 139.178.89.65 port 40790 Feb 13 16:16:05.084427 sshd-session[1739]: pam_unix(sshd:session): session closed for user core Feb 13 16:16:05.096867 systemd[1]: Started sshd@3-137.184.191.138:22-139.178.89.65:40798.service - OpenSSH per-connection server daemon (139.178.89.65:40798). Feb 13 16:16:05.097733 systemd[1]: sshd@2-137.184.191.138:22-139.178.89.65:40790.service: Deactivated successfully. Feb 13 16:16:05.102716 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 16:16:05.114123 systemd-logind[1562]: Session 3 logged out. Waiting for processes to exit. Feb 13 16:16:05.139726 systemd-logind[1562]: Removed session 3. Feb 13 16:16:05.187784 sshd[1745]: Accepted publickey for core from 139.178.89.65 port 40798 ssh2: RSA SHA256:AMPu2lZjn4SqDYANHPtTget7vBQBooUjf0mriNIzIUY Feb 13 16:16:05.190631 sshd-session[1745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:16:05.207520 systemd-logind[1562]: New session 4 of user core. Feb 13 16:16:05.222302 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 16:16:05.312189 sshd[1750]: Connection closed by 139.178.89.65 port 40798 Feb 13 16:16:05.314621 sshd-session[1745]: pam_unix(sshd:session): session closed for user core Feb 13 16:16:05.326460 systemd[1]: Started sshd@4-137.184.191.138:22-139.178.89.65:40808.service - OpenSSH per-connection server daemon (139.178.89.65:40808). Feb 13 16:16:05.327244 systemd[1]: sshd@3-137.184.191.138:22-139.178.89.65:40798.service: Deactivated successfully. Feb 13 16:16:05.333651 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 16:16:05.334351 systemd-logind[1562]: Session 4 logged out. Waiting for processes to exit. Feb 13 16:16:05.340713 systemd-logind[1562]: Removed session 4. Feb 13 16:16:05.393680 sshd[1752]: Accepted publickey for core from 139.178.89.65 port 40808 ssh2: RSA SHA256:AMPu2lZjn4SqDYANHPtTget7vBQBooUjf0mriNIzIUY Feb 13 16:16:05.394693 sshd-session[1752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:16:05.405073 systemd-logind[1562]: New session 5 of user core. Feb 13 16:16:05.411524 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 16:16:05.516897 sudo[1759]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 16:16:05.517450 sudo[1759]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 16:16:05.538055 sudo[1759]: pam_unix(sudo:session): session closed for user root Feb 13 16:16:05.541737 sshd[1758]: Connection closed by 139.178.89.65 port 40808 Feb 13 16:16:05.542980 sshd-session[1752]: pam_unix(sshd:session): session closed for user core Feb 13 16:16:05.558038 systemd[1]: Started sshd@5-137.184.191.138:22-139.178.89.65:40818.service - OpenSSH per-connection server daemon (139.178.89.65:40818). Feb 13 16:16:05.559003 systemd[1]: sshd@4-137.184.191.138:22-139.178.89.65:40808.service: Deactivated successfully. Feb 13 16:16:05.566423 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 16:16:05.569655 systemd-logind[1562]: Session 5 logged out. Waiting for processes to exit. Feb 13 16:16:05.576618 systemd-logind[1562]: Removed session 5. 
Feb 13 16:16:05.639866 sshd[1761]: Accepted publickey for core from 139.178.89.65 port 40818 ssh2: RSA SHA256:AMPu2lZjn4SqDYANHPtTget7vBQBooUjf0mriNIzIUY Feb 13 16:16:05.646716 sshd-session[1761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:16:05.661980 systemd-logind[1562]: New session 6 of user core. Feb 13 16:16:05.678981 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 16:16:05.765500 sudo[1769]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 16:16:05.766103 sudo[1769]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 16:16:05.783893 sudo[1769]: pam_unix(sudo:session): session closed for user root Feb 13 16:16:05.800433 sudo[1768]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 16:16:05.801823 sudo[1768]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 16:16:05.849535 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 16:16:05.934039 augenrules[1791]: No rules Feb 13 16:16:05.933817 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 16:16:05.934146 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 16:16:05.936840 sudo[1768]: pam_unix(sudo:session): session closed for user root Feb 13 16:16:05.946565 sshd[1767]: Connection closed by 139.178.89.65 port 40818 Feb 13 16:16:05.947225 sshd-session[1761]: pam_unix(sshd:session): session closed for user core Feb 13 16:16:05.956445 systemd[1]: Started sshd@6-137.184.191.138:22-139.178.89.65:40820.service - OpenSSH per-connection server daemon (139.178.89.65:40820). Feb 13 16:16:05.959198 systemd[1]: sshd@5-137.184.191.138:22-139.178.89.65:40818.service: Deactivated successfully. Feb 13 16:16:05.963863 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 16:16:05.969305 systemd-logind[1562]: Session 6 logged out. Waiting for processes to exit. Feb 13 16:16:05.971543 systemd-logind[1562]: Removed session 6. Feb 13 16:16:06.035481 sshd[1797]: Accepted publickey for core from 139.178.89.65 port 40820 ssh2: RSA SHA256:AMPu2lZjn4SqDYANHPtTget7vBQBooUjf0mriNIzIUY Feb 13 16:16:06.037475 sshd-session[1797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:16:06.050296 systemd-logind[1562]: New session 7 of user core. Feb 13 16:16:06.061169 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 16:16:06.132400 sudo[1804]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 16:16:06.132877 sudo[1804]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 16:16:07.751388 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:16:07.762738 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:16:07.851503 systemd[1]: Reloading requested from client PID 1843 ('systemctl') (unit session-7.scope)... Feb 13 16:16:07.851532 systemd[1]: Reloading... Feb 13 16:16:08.057953 zram_generator::config[1885]: No configuration found. Feb 13 16:16:08.336844 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 16:16:08.464155 systemd[1]: Reloading finished in 611 ms. 
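The sudo commands above remove Flatcar's default audit rule files (80-selinux.rules, 99-default.rules) and restart audit-rules, which is why augenrules then reports "No rules". As a rough sketch of how rules under /etc/audit/rules.d are normally managed (the example watch rule is hypothetical, not one present on this host):

# Rule fragments live under /etc/audit/rules.d/ and are compiled by augenrules.
ls /etc/audit/rules.d/
# Hypothetical example rule: audit changes under /etc/kubernetes.
echo '-w /etc/kubernetes -p wa -k kube-config' > /etc/audit/rules.d/70-kube.rules
augenrules --load      # regenerate and load the combined ruleset
auditctl -l            # list the rules now active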
Feb 13 16:16:08.532175 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 16:16:08.532340 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 16:16:08.532801 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:16:08.555338 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:16:08.797255 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:16:08.806877 (kubelet)[1944]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 16:16:08.928605 kubelet[1944]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 16:16:08.928605 kubelet[1944]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 16:16:08.928605 kubelet[1944]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 16:16:08.928605 kubelet[1944]: I0213 16:16:08.928692 1944 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 16:16:09.469022 kubelet[1944]: I0213 16:16:09.460787 1944 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Feb 13 16:16:09.469022 kubelet[1944]: I0213 16:16:09.460839 1944 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 16:16:09.469022 kubelet[1944]: I0213 16:16:09.461199 1944 server.go:919] "Client rotation is on, will bootstrap in background" Feb 13 16:16:09.513073 kubelet[1944]: I0213 16:16:09.512995 1944 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 16:16:09.541506 kubelet[1944]: I0213 16:16:09.540800 1944 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 16:16:09.545018 kubelet[1944]: I0213 16:16:09.544842 1944 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 16:16:09.545610 kubelet[1944]: I0213 16:16:09.545583 1944 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 16:16:09.546629 kubelet[1944]: I0213 16:16:09.545942 1944 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 16:16:09.546629 kubelet[1944]: I0213 16:16:09.545971 1944 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 16:16:09.546629 kubelet[1944]: I0213 16:16:09.546150 1944 state_mem.go:36] "Initialized new in-memory state store" Feb 13 16:16:09.546629 kubelet[1944]: I0213 16:16:09.546294 1944 kubelet.go:396] "Attempting to sync node with API server" Feb 13 16:16:09.546629 kubelet[1944]: I0213 16:16:09.546316 1944 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 16:16:09.546629 kubelet[1944]: I0213 16:16:09.546372 1944 kubelet.go:312] "Adding apiserver pod source" Feb 13 16:16:09.546629 kubelet[1944]: I0213 16:16:09.546396 1944 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 16:16:09.555933 kubelet[1944]: I0213 16:16:09.549450 1944 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 16:16:09.555933 kubelet[1944]: I0213 16:16:09.554345 1944 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 16:16:09.557067 kubelet[1944]: W0213 16:16:09.557023 1944 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Feb 13 16:16:09.557822 kubelet[1944]: E0213 16:16:09.557793 1944 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:16:09.558047 kubelet[1944]: E0213 16:16:09.558032 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:16:09.558143 kubelet[1944]: I0213 16:16:09.558038 1944 server.go:1256] "Started kubelet" Feb 13 16:16:09.558764 kubelet[1944]: I0213 16:16:09.558108 1944 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 16:16:09.559528 kubelet[1944]: I0213 16:16:09.559507 1944 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 16:16:09.563780 kubelet[1944]: I0213 16:16:09.563744 1944 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 16:16:09.575891 kubelet[1944]: I0213 16:16:09.573778 1944 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 16:16:09.575891 kubelet[1944]: I0213 16:16:09.575133 1944 server.go:461] "Adding debug handlers to kubelet server" Feb 13 16:16:09.583194 kubelet[1944]: I0213 16:16:09.583135 1944 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 16:16:09.585217 kubelet[1944]: I0213 16:16:09.584298 1944 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 13 16:16:09.585217 kubelet[1944]: I0213 16:16:09.584396 1944 reconciler_new.go:29] "Reconciler: start to sync state" Feb 13 16:16:09.588477 kubelet[1944]: I0213 16:16:09.588438 1944 factory.go:221] Registration of the systemd container factory successfully Feb 13 16:16:09.588981 kubelet[1944]: I0213 16:16:09.588954 1944 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 16:16:09.590296 kubelet[1944]: W0213 16:16:09.590273 1944 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 13 16:16:09.590449 kubelet[1944]: E0213 16:16:09.590433 1944 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 13 16:16:09.592902 kubelet[1944]: E0213 16:16:09.592830 1944 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{137.184.191.138.1823d0b7f24f19a0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:137.184.191.138,UID:137.184.191.138,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:137.184.191.138,},FirstTimestamp:2025-02-13 16:16:09.558006176 +0000 UTC m=+0.744269233,LastTimestamp:2025-02-13 16:16:09.558006176 +0000 UTC m=+0.744269233,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:137.184.191.138,}" Feb 13 16:16:09.593745 kubelet[1944]: W0213 16:16:09.593406 1944 reflector.go:539] 
vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "137.184.191.138" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 13 16:16:09.593745 kubelet[1944]: E0213 16:16:09.593447 1944 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "137.184.191.138" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 13 16:16:09.599124 kubelet[1944]: I0213 16:16:09.598477 1944 factory.go:221] Registration of the containerd container factory successfully Feb 13 16:16:09.605436 kubelet[1944]: W0213 16:16:09.605394 1944 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 13 16:16:09.605436 kubelet[1944]: E0213 16:16:09.605441 1944 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 13 16:16:09.622357 kubelet[1944]: E0213 16:16:09.622306 1944 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"137.184.191.138\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Feb 13 16:16:09.674959 kubelet[1944]: I0213 16:16:09.674631 1944 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 16:16:09.674959 kubelet[1944]: I0213 16:16:09.674666 1944 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 16:16:09.674959 kubelet[1944]: I0213 16:16:09.674695 1944 state_mem.go:36] "Initialized new in-memory state store" Feb 13 16:16:09.681437 kubelet[1944]: I0213 16:16:09.681382 1944 policy_none.go:49] "None policy: Start" Feb 13 16:16:09.682961 kubelet[1944]: I0213 16:16:09.682902 1944 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 16:16:09.683177 kubelet[1944]: I0213 16:16:09.683161 1944 state_mem.go:35] "Initializing new in-memory state store" Feb 13 16:16:09.689388 kubelet[1944]: I0213 16:16:09.687855 1944 kubelet_node_status.go:73] "Attempting to register node" node="137.184.191.138" Feb 13 16:16:09.701097 kubelet[1944]: I0213 16:16:09.700877 1944 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 16:16:09.703296 kubelet[1944]: I0213 16:16:09.703251 1944 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 16:16:09.718027 kubelet[1944]: E0213 16:16:09.717801 1944 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"137.184.191.138\" not found" Feb 13 16:16:09.728976 kubelet[1944]: I0213 16:16:09.726899 1944 kubelet_node_status.go:76] "Successfully registered node" node="137.184.191.138" Feb 13 16:16:09.758079 kubelet[1944]: E0213 16:16:09.758015 1944 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"137.184.191.138\" not found" Feb 13 16:16:09.764185 kubelet[1944]: I0213 16:16:09.763968 1944 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Feb 13 16:16:09.766502 kubelet[1944]: I0213 16:16:09.766467 1944 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 16:16:09.767467 kubelet[1944]: I0213 16:16:09.766837 1944 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 16:16:09.767467 kubelet[1944]: I0213 16:16:09.766869 1944 kubelet.go:2329] "Starting kubelet main sync loop" Feb 13 16:16:09.767467 kubelet[1944]: E0213 16:16:09.767150 1944 kubelet.go:2353] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 13 16:16:09.858178 kubelet[1944]: E0213 16:16:09.858121 1944 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"137.184.191.138\" not found" Feb 13 16:16:09.959455 kubelet[1944]: E0213 16:16:09.959400 1944 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"137.184.191.138\" not found" Feb 13 16:16:10.061310 kubelet[1944]: E0213 16:16:10.061233 1944 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"137.184.191.138\" not found" Feb 13 16:16:10.162052 kubelet[1944]: E0213 16:16:10.161967 1944 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"137.184.191.138\" not found" Feb 13 16:16:10.262318 kubelet[1944]: E0213 16:16:10.262232 1944 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"137.184.191.138\" not found" Feb 13 16:16:10.363517 kubelet[1944]: E0213 16:16:10.363304 1944 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"137.184.191.138\" not found" Feb 13 16:16:10.376205 sudo[1804]: pam_unix(sudo:session): session closed for user root Feb 13 16:16:10.380050 sshd[1803]: Connection closed by 139.178.89.65 port 40820 Feb 13 16:16:10.380939 sshd-session[1797]: pam_unix(sshd:session): session closed for user core Feb 13 16:16:10.387962 systemd[1]: sshd@6-137.184.191.138:22-139.178.89.65:40820.service: Deactivated successfully. Feb 13 16:16:10.396326 systemd-logind[1562]: Session 7 logged out. Waiting for processes to exit. Feb 13 16:16:10.396799 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 16:16:10.401058 systemd-logind[1562]: Removed session 7. 
Feb 13 16:16:10.464819 kubelet[1944]: E0213 16:16:10.464734 1944 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"137.184.191.138\" not found" Feb 13 16:16:10.472786 kubelet[1944]: I0213 16:16:10.472624 1944 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 13 16:16:10.473086 kubelet[1944]: W0213 16:16:10.472966 1944 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.RuntimeClass ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Feb 13 16:16:10.558463 kubelet[1944]: E0213 16:16:10.558371 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:16:10.565118 kubelet[1944]: E0213 16:16:10.565031 1944 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"137.184.191.138\" not found" Feb 13 16:16:10.665782 kubelet[1944]: E0213 16:16:10.665419 1944 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"137.184.191.138\" not found" Feb 13 16:16:10.767234 kubelet[1944]: E0213 16:16:10.767158 1944 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"137.184.191.138\" not found" Feb 13 16:16:10.868440 kubelet[1944]: E0213 16:16:10.868317 1944 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"137.184.191.138\" not found" Feb 13 16:16:10.970728 kubelet[1944]: I0213 16:16:10.970443 1944 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 13 16:16:10.973239 containerd[1602]: time="2025-02-13T16:16:10.972526825Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
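containerd's "no network config found in /etc/cni/net.d" and "No cni config template is specified" messages are likewise expected here: Cilium (admitted as a pod just below) drops its own CNI configuration into /etc/cni/net.d once it is running. Purely to illustrate the file format involved, a generic conflist is sketched using the standard bridge and host-local plugins; it is not what Cilium installs:

# Hypothetical /etc/cni/net.d/10-example.conflist (illustration only).
cat <<'EOF' > /etc/cni/net.d/10-example.conflist
{
  "cniVersion": "0.4.0",
  "name": "example-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "192.168.1.0/24"
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF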
Feb 13 16:16:10.977868 kubelet[1944]: I0213 16:16:10.975009 1944 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 13 16:16:11.559567 kubelet[1944]: I0213 16:16:11.559454 1944 apiserver.go:52] "Watching apiserver" Feb 13 16:16:11.560154 kubelet[1944]: E0213 16:16:11.560104 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:16:11.566224 kubelet[1944]: I0213 16:16:11.565720 1944 topology_manager.go:215] "Topology Admit Handler" podUID="1e64bf96-dde5-4fa1-91f9-c0463e99a98a" podNamespace="kube-system" podName="cilium-6rngw" Feb 13 16:16:11.566224 kubelet[1944]: I0213 16:16:11.565900 1944 topology_manager.go:215] "Topology Admit Handler" podUID="b7ed8b2b-f233-4f45-8f22-0043ad377d0a" podNamespace="kube-system" podName="kube-proxy-g7mcs" Feb 13 16:16:11.585954 kubelet[1944]: I0213 16:16:11.585863 1944 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 13 16:16:11.616149 kubelet[1944]: I0213 16:16:11.604660 1944 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1e64bf96-dde5-4fa1-91f9-c0463e99a98a-cilium-config-path\") pod \"cilium-6rngw\" (UID: \"1e64bf96-dde5-4fa1-91f9-c0463e99a98a\") " pod="kube-system/cilium-6rngw" Feb 13 16:16:11.618589 kubelet[1944]: I0213 16:16:11.617687 1944 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1e64bf96-dde5-4fa1-91f9-c0463e99a98a-hubble-tls\") pod \"cilium-6rngw\" (UID: \"1e64bf96-dde5-4fa1-91f9-c0463e99a98a\") " pod="kube-system/cilium-6rngw" Feb 13 16:16:11.618589 kubelet[1944]: I0213 16:16:11.617735 1944 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b7ed8b2b-f233-4f45-8f22-0043ad377d0a-xtables-lock\") pod \"kube-proxy-g7mcs\" (UID: \"b7ed8b2b-f233-4f45-8f22-0043ad377d0a\") " pod="kube-system/kube-proxy-g7mcs" Feb 13 16:16:11.618589 kubelet[1944]: I0213 16:16:11.617764 1944 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b7ed8b2b-f233-4f45-8f22-0043ad377d0a-lib-modules\") pod \"kube-proxy-g7mcs\" (UID: \"b7ed8b2b-f233-4f45-8f22-0043ad377d0a\") " pod="kube-system/kube-proxy-g7mcs" Feb 13 16:16:11.618589 kubelet[1944]: I0213 16:16:11.617797 1944 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1e64bf96-dde5-4fa1-91f9-c0463e99a98a-cilium-run\") pod \"cilium-6rngw\" (UID: \"1e64bf96-dde5-4fa1-91f9-c0463e99a98a\") " pod="kube-system/cilium-6rngw" Feb 13 16:16:11.618589 kubelet[1944]: I0213 16:16:11.617836 1944 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1e64bf96-dde5-4fa1-91f9-c0463e99a98a-clustermesh-secrets\") pod \"cilium-6rngw\" (UID: \"1e64bf96-dde5-4fa1-91f9-c0463e99a98a\") " pod="kube-system/cilium-6rngw" Feb 13 16:16:11.618589 kubelet[1944]: I0213 16:16:11.617864 1944 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e64bf96-dde5-4fa1-91f9-c0463e99a98a-xtables-lock\") pod 
\"cilium-6rngw\" (UID: \"1e64bf96-dde5-4fa1-91f9-c0463e99a98a\") " pod="kube-system/cilium-6rngw" Feb 13 16:16:11.618859 kubelet[1944]: I0213 16:16:11.617895 1944 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b7ed8b2b-f233-4f45-8f22-0043ad377d0a-kube-proxy\") pod \"kube-proxy-g7mcs\" (UID: \"b7ed8b2b-f233-4f45-8f22-0043ad377d0a\") " pod="kube-system/kube-proxy-g7mcs" Feb 13 16:16:11.618859 kubelet[1944]: I0213 16:16:11.617937 1944 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1e64bf96-dde5-4fa1-91f9-c0463e99a98a-etc-cni-netd\") pod \"cilium-6rngw\" (UID: \"1e64bf96-dde5-4fa1-91f9-c0463e99a98a\") " pod="kube-system/cilium-6rngw" Feb 13 16:16:11.618859 kubelet[1944]: I0213 16:16:11.617967 1944 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1e64bf96-dde5-4fa1-91f9-c0463e99a98a-host-proc-sys-net\") pod \"cilium-6rngw\" (UID: \"1e64bf96-dde5-4fa1-91f9-c0463e99a98a\") " pod="kube-system/cilium-6rngw" Feb 13 16:16:11.618859 kubelet[1944]: I0213 16:16:11.617987 1944 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1e64bf96-dde5-4fa1-91f9-c0463e99a98a-bpf-maps\") pod \"cilium-6rngw\" (UID: \"1e64bf96-dde5-4fa1-91f9-c0463e99a98a\") " pod="kube-system/cilium-6rngw" Feb 13 16:16:11.618859 kubelet[1944]: I0213 16:16:11.618006 1944 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1e64bf96-dde5-4fa1-91f9-c0463e99a98a-cilium-cgroup\") pod \"cilium-6rngw\" (UID: \"1e64bf96-dde5-4fa1-91f9-c0463e99a98a\") " pod="kube-system/cilium-6rngw" Feb 13 16:16:11.618859 kubelet[1944]: I0213 16:16:11.618057 1944 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1e64bf96-dde5-4fa1-91f9-c0463e99a98a-lib-modules\") pod \"cilium-6rngw\" (UID: \"1e64bf96-dde5-4fa1-91f9-c0463e99a98a\") " pod="kube-system/cilium-6rngw" Feb 13 16:16:11.619185 kubelet[1944]: I0213 16:16:11.618077 1944 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1e64bf96-dde5-4fa1-91f9-c0463e99a98a-host-proc-sys-kernel\") pod \"cilium-6rngw\" (UID: \"1e64bf96-dde5-4fa1-91f9-c0463e99a98a\") " pod="kube-system/cilium-6rngw" Feb 13 16:16:11.619185 kubelet[1944]: I0213 16:16:11.618097 1944 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rcgw\" (UniqueName: \"kubernetes.io/projected/1e64bf96-dde5-4fa1-91f9-c0463e99a98a-kube-api-access-4rcgw\") pod \"cilium-6rngw\" (UID: \"1e64bf96-dde5-4fa1-91f9-c0463e99a98a\") " pod="kube-system/cilium-6rngw" Feb 13 16:16:11.619185 kubelet[1944]: I0213 16:16:11.618125 1944 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vskj5\" (UniqueName: \"kubernetes.io/projected/b7ed8b2b-f233-4f45-8f22-0043ad377d0a-kube-api-access-vskj5\") pod \"kube-proxy-g7mcs\" (UID: \"b7ed8b2b-f233-4f45-8f22-0043ad377d0a\") " pod="kube-system/kube-proxy-g7mcs" Feb 13 16:16:11.619185 kubelet[1944]: I0213 
16:16:11.618145 1944 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1e64bf96-dde5-4fa1-91f9-c0463e99a98a-hostproc\") pod \"cilium-6rngw\" (UID: \"1e64bf96-dde5-4fa1-91f9-c0463e99a98a\") " pod="kube-system/cilium-6rngw" Feb 13 16:16:11.619185 kubelet[1944]: I0213 16:16:11.618167 1944 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1e64bf96-dde5-4fa1-91f9-c0463e99a98a-cni-path\") pod \"cilium-6rngw\" (UID: \"1e64bf96-dde5-4fa1-91f9-c0463e99a98a\") " pod="kube-system/cilium-6rngw" Feb 13 16:16:11.875174 kubelet[1944]: E0213 16:16:11.869839 1944 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 16:16:11.875368 containerd[1602]: time="2025-02-13T16:16:11.871557104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g7mcs,Uid:b7ed8b2b-f233-4f45-8f22-0043ad377d0a,Namespace:kube-system,Attempt:0,}" Feb 13 16:16:11.884224 kubelet[1944]: E0213 16:16:11.884070 1944 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 16:16:11.892520 containerd[1602]: time="2025-02-13T16:16:11.886032800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6rngw,Uid:1e64bf96-dde5-4fa1-91f9-c0463e99a98a,Namespace:kube-system,Attempt:0,}" Feb 13 16:16:12.560808 kubelet[1944]: E0213 16:16:12.560716 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:16:13.561224 kubelet[1944]: E0213 16:16:13.561125 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:16:14.572100 kubelet[1944]: E0213 16:16:14.571868 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:16:15.572698 kubelet[1944]: E0213 16:16:15.572625 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:16:16.573455 kubelet[1944]: E0213 16:16:16.573367 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:16:16.994539 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3723289800.mount: Deactivated successfully. 
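The "Nameserver limits exceeded" warnings emitted while creating the two pod sandboxes come from the kubelet truncating the host's resolver configuration: like glibc, it only honours the first three nameserver entries, and the applied line in the log shows this host's list contains a duplicate plus extra entries. A quick way to see what the kubelet is copying into sandboxes, assuming the default resolv.conf path:

# Show the nameserver entries the kubelet copies into pod sandboxes;
# only the first three are used, so extras or duplicates trigger the warning.
grep ^nameserver /etc/resolv.conf
# Count them.
grep -c ^nameserver /etc/resolv.conf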
Feb 13 16:16:17.025594 containerd[1602]: time="2025-02-13T16:16:17.024446296Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 16:16:17.028441 containerd[1602]: time="2025-02-13T16:16:17.028263834Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Feb 13 16:16:17.032739 containerd[1602]: time="2025-02-13T16:16:17.032203877Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 16:16:17.036098 containerd[1602]: time="2025-02-13T16:16:17.034355538Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 16:16:17.036098 containerd[1602]: time="2025-02-13T16:16:17.034536454Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 16:16:17.041411 containerd[1602]: time="2025-02-13T16:16:17.041354895Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 16:16:17.047424 containerd[1602]: time="2025-02-13T16:16:17.047340644Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 5.155342833s" Feb 13 16:16:17.049612 containerd[1602]: time="2025-02-13T16:16:17.049549852Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 5.177808972s" Feb 13 16:16:17.329087 containerd[1602]: time="2025-02-13T16:16:17.328495080Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:16:17.329087 containerd[1602]: time="2025-02-13T16:16:17.328591157Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:16:17.329087 containerd[1602]: time="2025-02-13T16:16:17.328619431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:16:17.329087 containerd[1602]: time="2025-02-13T16:16:17.328816721Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:16:17.361534 containerd[1602]: time="2025-02-13T16:16:17.360988426Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:16:17.361534 containerd[1602]: time="2025-02-13T16:16:17.361065073Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:16:17.361534 containerd[1602]: time="2025-02-13T16:16:17.361078383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:16:17.361888 containerd[1602]: time="2025-02-13T16:16:17.361218806Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:16:17.581298 kubelet[1944]: E0213 16:16:17.579561 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:16:17.622953 containerd[1602]: time="2025-02-13T16:16:17.622146665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6rngw,Uid:1e64bf96-dde5-4fa1-91f9-c0463e99a98a,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c30ecd68de7ba3bd95cf3c3b73571ce84996e2b05ccabee9fcb353b667fc9f2\"" Feb 13 16:16:17.626708 containerd[1602]: time="2025-02-13T16:16:17.624462277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g7mcs,Uid:b7ed8b2b-f233-4f45-8f22-0043ad377d0a,Namespace:kube-system,Attempt:0,} returns sandbox id \"fa37083f3e6647a9cde809c25a357427a1e57092731bfe4fbcae5ceaaecfdf4d\"" Feb 13 16:16:17.627030 kubelet[1944]: E0213 16:16:17.626607 1944 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 16:16:17.628331 kubelet[1944]: E0213 16:16:17.628044 1944 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 16:16:17.632371 containerd[1602]: time="2025-02-13T16:16:17.632289415Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 16:16:18.050229 systemd-resolved[1479]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Feb 13 16:16:18.584196 kubelet[1944]: E0213 16:16:18.583979 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:16:19.585177 kubelet[1944]: E0213 16:16:19.585096 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:16:20.585837 kubelet[1944]: E0213 16:16:20.585682 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:16:21.122227 systemd-resolved[1479]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. 
Feb 13 16:16:21.585981 kubelet[1944]: E0213 16:16:21.585904 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:16:22.586217 kubelet[1944]: E0213 16:16:22.586157 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:16:23.586697 kubelet[1944]: E0213 16:16:23.586368 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:16:24.587255 kubelet[1944]: E0213 16:16:24.587204 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:16:25.588554 kubelet[1944]: E0213 16:16:25.588315 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:16:25.844315 kernel: hrtimer: interrupt took 6270329 ns Feb 13 16:16:26.598591 kubelet[1944]: E0213 16:16:26.598536 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:16:27.600358 kubelet[1944]: E0213 16:16:27.600293 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:16:27.751480 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2164575668.mount: Deactivated successfully. Feb 13 16:16:28.600711 kubelet[1944]: E0213 16:16:28.600615 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:16:29.547151 kubelet[1944]: E0213 16:16:29.547098 1944 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:16:29.611320 kubelet[1944]: E0213 16:16:29.610742 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:16:30.611824 kubelet[1944]: E0213 16:16:30.611770 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:16:31.613798 kubelet[1944]: E0213 16:16:31.613725 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:16:32.185413 containerd[1602]: time="2025-02-13T16:16:32.184225993Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:16:32.187256 containerd[1602]: time="2025-02-13T16:16:32.187161605Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Feb 13 16:16:32.189956 containerd[1602]: time="2025-02-13T16:16:32.189809293Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:16:32.194663 containerd[1602]: time="2025-02-13T16:16:32.194327497Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest 
\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 14.561973393s" Feb 13 16:16:32.194663 containerd[1602]: time="2025-02-13T16:16:32.194399556Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 13 16:16:32.197337 containerd[1602]: time="2025-02-13T16:16:32.197025138Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\"" Feb 13 16:16:32.199804 containerd[1602]: time="2025-02-13T16:16:32.199552424Z" level=info msg="CreateContainer within sandbox \"7c30ecd68de7ba3bd95cf3c3b73571ce84996e2b05ccabee9fcb353b667fc9f2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 16:16:32.202465 systemd-resolved[1479]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. Feb 13 16:16:32.234295 containerd[1602]: time="2025-02-13T16:16:32.234213545Z" level=info msg="CreateContainer within sandbox \"7c30ecd68de7ba3bd95cf3c3b73571ce84996e2b05ccabee9fcb353b667fc9f2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"77f141df09f2f798f6f0512b590e0edf66dfb129449a565d439e25cd6fbea4a1\"" Feb 13 16:16:32.235326 containerd[1602]: time="2025-02-13T16:16:32.235278933Z" level=info msg="StartContainer for \"77f141df09f2f798f6f0512b590e0edf66dfb129449a565d439e25cd6fbea4a1\"" Feb 13 16:16:32.301635 systemd[1]: run-containerd-runc-k8s.io-77f141df09f2f798f6f0512b590e0edf66dfb129449a565d439e25cd6fbea4a1-runc.fslfF1.mount: Deactivated successfully. Feb 13 16:16:32.369493 containerd[1602]: time="2025-02-13T16:16:32.369436795Z" level=info msg="StartContainer for \"77f141df09f2f798f6f0512b590e0edf66dfb129449a565d439e25cd6fbea4a1\" returns successfully" Feb 13 16:16:32.552423 containerd[1602]: time="2025-02-13T16:16:32.552028108Z" level=info msg="shim disconnected" id=77f141df09f2f798f6f0512b590e0edf66dfb129449a565d439e25cd6fbea4a1 namespace=k8s.io Feb 13 16:16:32.552423 containerd[1602]: time="2025-02-13T16:16:32.552121337Z" level=warning msg="cleaning up after shim disconnected" id=77f141df09f2f798f6f0512b590e0edf66dfb129449a565d439e25cd6fbea4a1 namespace=k8s.io Feb 13 16:16:32.552423 containerd[1602]: time="2025-02-13T16:16:32.552135679Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:16:32.614398 kubelet[1944]: E0213 16:16:32.614316 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:16:32.959109 kubelet[1944]: E0213 16:16:32.956843 1944 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 16:16:32.962736 containerd[1602]: time="2025-02-13T16:16:32.962669871Z" level=info msg="CreateContainer within sandbox \"7c30ecd68de7ba3bd95cf3c3b73571ce84996e2b05ccabee9fcb353b667fc9f2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 16:16:33.089319 containerd[1602]: time="2025-02-13T16:16:33.089252380Z" level=info msg="CreateContainer within sandbox \"7c30ecd68de7ba3bd95cf3c3b73571ce84996e2b05ccabee9fcb353b667fc9f2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"89f1734194c40dd1bbede860ce2e9e7b96e2ea33de6dabbdfb15e6ddb1fc4be1\"" Feb 13 16:16:33.094075 containerd[1602]: 
time="2025-02-13T16:16:33.093999790Z" level=info msg="StartContainer for \"89f1734194c40dd1bbede860ce2e9e7b96e2ea33de6dabbdfb15e6ddb1fc4be1\"" Feb 13 16:16:33.223720 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-77f141df09f2f798f6f0512b590e0edf66dfb129449a565d439e25cd6fbea4a1-rootfs.mount: Deactivated successfully. Feb 13 16:16:33.311314 containerd[1602]: time="2025-02-13T16:16:33.311241053Z" level=info msg="StartContainer for \"89f1734194c40dd1bbede860ce2e9e7b96e2ea33de6dabbdfb15e6ddb1fc4be1\" returns successfully" Feb 13 16:16:33.332567 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 16:16:33.334322 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 16:16:33.334455 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 16:16:33.350490 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 16:16:33.409617 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 16:16:33.425989 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-89f1734194c40dd1bbede860ce2e9e7b96e2ea33de6dabbdfb15e6ddb1fc4be1-rootfs.mount: Deactivated successfully. Feb 13 16:16:33.456626 containerd[1602]: time="2025-02-13T16:16:33.456138109Z" level=info msg="shim disconnected" id=89f1734194c40dd1bbede860ce2e9e7b96e2ea33de6dabbdfb15e6ddb1fc4be1 namespace=k8s.io Feb 13 16:16:33.456626 containerd[1602]: time="2025-02-13T16:16:33.456245319Z" level=warning msg="cleaning up after shim disconnected" id=89f1734194c40dd1bbede860ce2e9e7b96e2ea33de6dabbdfb15e6ddb1fc4be1 namespace=k8s.io Feb 13 16:16:33.456626 containerd[1602]: time="2025-02-13T16:16:33.456259339Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:16:33.506247 containerd[1602]: time="2025-02-13T16:16:33.506038820Z" level=warning msg="cleanup warnings time=\"2025-02-13T16:16:33Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 16:16:33.615317 kubelet[1944]: E0213 16:16:33.615251 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:16:33.962632 kubelet[1944]: E0213 16:16:33.962564 1944 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 16:16:33.967296 containerd[1602]: time="2025-02-13T16:16:33.967029876Z" level=info msg="CreateContainer within sandbox \"7c30ecd68de7ba3bd95cf3c3b73571ce84996e2b05ccabee9fcb353b667fc9f2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 16:16:34.016488 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3431725620.mount: Deactivated successfully. Feb 13 16:16:34.026096 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1200113686.mount: Deactivated successfully. 
Feb 13 16:16:34.030901 containerd[1602]: time="2025-02-13T16:16:34.030818660Z" level=info msg="CreateContainer within sandbox \"7c30ecd68de7ba3bd95cf3c3b73571ce84996e2b05ccabee9fcb353b667fc9f2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c6dfe5d0ee9c90dc1ec066c388f99d7910b03d9c3dfad89807a4bfef9c299ab1\"" Feb 13 16:16:34.034155 containerd[1602]: time="2025-02-13T16:16:34.031877265Z" level=info msg="StartContainer for \"c6dfe5d0ee9c90dc1ec066c388f99d7910b03d9c3dfad89807a4bfef9c299ab1\"" Feb 13 16:16:34.185018 containerd[1602]: time="2025-02-13T16:16:34.184197305Z" level=info msg="StartContainer for \"c6dfe5d0ee9c90dc1ec066c388f99d7910b03d9c3dfad89807a4bfef9c299ab1\" returns successfully" Feb 13 16:16:34.217554 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1221634933.mount: Deactivated successfully. Feb 13 16:16:34.239430 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c6dfe5d0ee9c90dc1ec066c388f99d7910b03d9c3dfad89807a4bfef9c299ab1-rootfs.mount: Deactivated successfully. Feb 13 16:16:34.330467 containerd[1602]: time="2025-02-13T16:16:34.330170181Z" level=info msg="shim disconnected" id=c6dfe5d0ee9c90dc1ec066c388f99d7910b03d9c3dfad89807a4bfef9c299ab1 namespace=k8s.io Feb 13 16:16:34.330467 containerd[1602]: time="2025-02-13T16:16:34.330255476Z" level=warning msg="cleaning up after shim disconnected" id=c6dfe5d0ee9c90dc1ec066c388f99d7910b03d9c3dfad89807a4bfef9c299ab1 namespace=k8s.io Feb 13 16:16:34.330467 containerd[1602]: time="2025-02-13T16:16:34.330270653Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:16:34.615844 kubelet[1944]: E0213 16:16:34.615797 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:16:34.974029 kubelet[1944]: E0213 16:16:34.971209 1944 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 16:16:34.977518 containerd[1602]: time="2025-02-13T16:16:34.977451185Z" level=info msg="CreateContainer within sandbox \"7c30ecd68de7ba3bd95cf3c3b73571ce84996e2b05ccabee9fcb353b667fc9f2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 16:16:35.023203 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3292879830.mount: Deactivated successfully. 
Feb 13 16:16:35.035251 containerd[1602]: time="2025-02-13T16:16:35.035184917Z" level=info msg="CreateContainer within sandbox \"7c30ecd68de7ba3bd95cf3c3b73571ce84996e2b05ccabee9fcb353b667fc9f2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f67cf79b7ff78659d98c30ff1b04a12cf50c941102a2ad02194b1bc334cc27f5\"" Feb 13 16:16:35.038999 containerd[1602]: time="2025-02-13T16:16:35.038460986Z" level=info msg="StartContainer for \"f67cf79b7ff78659d98c30ff1b04a12cf50c941102a2ad02194b1bc334cc27f5\"" Feb 13 16:16:35.042068 containerd[1602]: time="2025-02-13T16:16:35.041497646Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:16:35.046001 containerd[1602]: time="2025-02-13T16:16:35.044111757Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.14: active requests=0, bytes read=28620592" Feb 13 16:16:35.048245 containerd[1602]: time="2025-02-13T16:16:35.048184394Z" level=info msg="ImageCreate event name:\"sha256:609f2866f1e52a5f0d2651e1206db6aeb38e8c3f91175abcfaf7e87381e5cce2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:16:35.055582 containerd[1602]: time="2025-02-13T16:16:35.055508551Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.14\" with image id \"sha256:609f2866f1e52a5f0d2651e1206db6aeb38e8c3f91175abcfaf7e87381e5cce2\", repo tag \"registry.k8s.io/kube-proxy:v1.29.14\", repo digest \"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\", size \"28619611\" in 2.858430913s" Feb 13 16:16:35.056228 containerd[1602]: time="2025-02-13T16:16:35.055957959Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\" returns image reference \"sha256:609f2866f1e52a5f0d2651e1206db6aeb38e8c3f91175abcfaf7e87381e5cce2\"" Feb 13 16:16:35.056508 containerd[1602]: time="2025-02-13T16:16:35.055859101Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:16:35.062377 containerd[1602]: time="2025-02-13T16:16:35.062315054Z" level=info msg="CreateContainer within sandbox \"fa37083f3e6647a9cde809c25a357427a1e57092731bfe4fbcae5ceaaecfdf4d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 16:16:35.099486 containerd[1602]: time="2025-02-13T16:16:35.099280910Z" level=info msg="CreateContainer within sandbox \"fa37083f3e6647a9cde809c25a357427a1e57092731bfe4fbcae5ceaaecfdf4d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a789caaf3f99ffb1c54202ebd92d232fb431902f69523b45706d207daf66c481\"" Feb 13 16:16:35.101060 containerd[1602]: time="2025-02-13T16:16:35.101010002Z" level=info msg="StartContainer for \"a789caaf3f99ffb1c54202ebd92d232fb431902f69523b45706d207daf66c481\"" Feb 13 16:16:35.169310 containerd[1602]: time="2025-02-13T16:16:35.169020347Z" level=info msg="StartContainer for \"f67cf79b7ff78659d98c30ff1b04a12cf50c941102a2ad02194b1bc334cc27f5\" returns successfully" Feb 13 16:16:35.223407 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f67cf79b7ff78659d98c30ff1b04a12cf50c941102a2ad02194b1bc334cc27f5-rootfs.mount: Deactivated successfully. 
Feb 13 16:16:35.282066 containerd[1602]: time="2025-02-13T16:16:35.281633847Z" level=info msg="shim disconnected" id=f67cf79b7ff78659d98c30ff1b04a12cf50c941102a2ad02194b1bc334cc27f5 namespace=k8s.io Feb 13 16:16:35.282066 containerd[1602]: time="2025-02-13T16:16:35.281719696Z" level=warning msg="cleaning up after shim disconnected" id=f67cf79b7ff78659d98c30ff1b04a12cf50c941102a2ad02194b1bc334cc27f5 namespace=k8s.io Feb 13 16:16:35.282066 containerd[1602]: time="2025-02-13T16:16:35.281734028Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:16:35.313320 containerd[1602]: time="2025-02-13T16:16:35.310700255Z" level=info msg="StartContainer for \"a789caaf3f99ffb1c54202ebd92d232fb431902f69523b45706d207daf66c481\" returns successfully" Feb 13 16:16:35.335588 containerd[1602]: time="2025-02-13T16:16:35.335497085Z" level=warning msg="cleanup warnings time=\"2025-02-13T16:16:35Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 16:16:35.617285 kubelet[1944]: E0213 16:16:35.617206 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:16:36.001163 kubelet[1944]: E0213 16:16:35.996798 1944 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 16:16:36.002448 containerd[1602]: time="2025-02-13T16:16:36.001903806Z" level=info msg="CreateContainer within sandbox \"7c30ecd68de7ba3bd95cf3c3b73571ce84996e2b05ccabee9fcb353b667fc9f2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 16:16:36.003371 kubelet[1944]: E0213 16:16:36.003258 1944 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 16:16:36.055158 containerd[1602]: time="2025-02-13T16:16:36.052769996Z" level=info msg="CreateContainer within sandbox \"7c30ecd68de7ba3bd95cf3c3b73571ce84996e2b05ccabee9fcb353b667fc9f2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e97082b0858c9a695ce2e831acdd098ddcb6957ca4e81e6c30fc458db6c33c7c\"" Feb 13 16:16:36.055158 containerd[1602]: time="2025-02-13T16:16:36.054866672Z" level=info msg="StartContainer for \"e97082b0858c9a695ce2e831acdd098ddcb6957ca4e81e6c30fc458db6c33c7c\"" Feb 13 16:16:36.085963 kubelet[1944]: I0213 16:16:36.083778 1944 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-g7mcs" podStartSLOduration=9.655362344 podStartE2EDuration="27.083683619s" podCreationTimestamp="2025-02-13 16:16:09 +0000 UTC" firstStartedPulling="2025-02-13 16:16:17.629529386 +0000 UTC m=+8.815792382" lastFinishedPulling="2025-02-13 16:16:35.057850665 +0000 UTC m=+26.244113657" observedRunningTime="2025-02-13 16:16:36.083360397 +0000 UTC m=+27.269623398" watchObservedRunningTime="2025-02-13 16:16:36.083683619 +0000 UTC m=+27.269946617" Feb 13 16:16:36.190584 containerd[1602]: time="2025-02-13T16:16:36.190375776Z" level=info msg="StartContainer for \"e97082b0858c9a695ce2e831acdd098ddcb6957ca4e81e6c30fc458db6c33c7c\" returns successfully" Feb 13 16:16:36.415036 kubelet[1944]: I0213 16:16:36.413979 1944 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 16:16:36.618505 kubelet[1944]: E0213 16:16:36.618375 1944 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:16:36.816077 kernel: Initializing XFRM netlink socket Feb 13 16:16:37.018098 kubelet[1944]: E0213 16:16:37.017345 1944 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 16:16:37.020137 kubelet[1944]: E0213 16:16:37.020097 1944 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 16:16:37.065826 kubelet[1944]: I0213 16:16:37.065757 1944 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-6rngw" podStartSLOduration=13.499692403 podStartE2EDuration="28.065685489s" podCreationTimestamp="2025-02-13 16:16:09 +0000 UTC" firstStartedPulling="2025-02-13 16:16:17.629424056 +0000 UTC m=+8.815687055" lastFinishedPulling="2025-02-13 16:16:32.195417162 +0000 UTC m=+23.381680141" observedRunningTime="2025-02-13 16:16:37.061058113 +0000 UTC m=+28.247321121" watchObservedRunningTime="2025-02-13 16:16:37.065685489 +0000 UTC m=+28.251948480" Feb 13 16:16:37.620876 kubelet[1944]: E0213 16:16:37.620810 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:16:38.021051 kubelet[1944]: E0213 16:16:38.020882 1944 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 16:16:38.217054 systemd-networkd[1224]: cilium_host: Link UP Feb 13 16:16:38.218313 systemd-networkd[1224]: cilium_net: Link UP Feb 13 16:16:38.218650 systemd-networkd[1224]: cilium_net: Gained carrier Feb 13 16:16:38.218792 systemd-networkd[1224]: cilium_host: Gained carrier Feb 13 16:16:38.218903 systemd-networkd[1224]: cilium_net: Gained IPv6LL Feb 13 16:16:38.219069 systemd-networkd[1224]: cilium_host: Gained IPv6LL Feb 13 16:16:38.449176 systemd-networkd[1224]: cilium_vxlan: Link UP Feb 13 16:16:38.449191 systemd-networkd[1224]: cilium_vxlan: Gained carrier Feb 13 16:16:38.622098 kubelet[1944]: E0213 16:16:38.622025 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:16:38.900988 kernel: NET: Registered PF_ALG protocol family Feb 13 16:16:39.023654 kubelet[1944]: E0213 16:16:39.023529 1944 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 16:16:39.554361 systemd-networkd[1224]: cilium_vxlan: Gained IPv6LL Feb 13 16:16:39.623655 kubelet[1944]: E0213 16:16:39.623547 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:16:39.839835 kubelet[1944]: I0213 16:16:39.837769 1944 topology_manager.go:215] "Topology Admit Handler" podUID="9d8819d0-cb5b-4267-ad9f-0c43c369d17d" podNamespace="default" podName="nginx-deployment-6d5f899847-q2v56" Feb 13 16:16:39.855487 kubelet[1944]: I0213 16:16:39.855437 1944 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdpsd\" (UniqueName: 
\"kubernetes.io/projected/9d8819d0-cb5b-4267-ad9f-0c43c369d17d-kube-api-access-jdpsd\") pod \"nginx-deployment-6d5f899847-q2v56\" (UID: \"9d8819d0-cb5b-4267-ad9f-0c43c369d17d\") " pod="default/nginx-deployment-6d5f899847-q2v56" Feb 13 16:16:40.027057 kubelet[1944]: E0213 16:16:40.026721 1944 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 16:16:40.073467 systemd-networkd[1224]: lxc_health: Link UP Feb 13 16:16:40.084036 systemd-networkd[1224]: lxc_health: Gained carrier Feb 13 16:16:40.149999 containerd[1602]: time="2025-02-13T16:16:40.149194528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-q2v56,Uid:9d8819d0-cb5b-4267-ad9f-0c43c369d17d,Namespace:default,Attempt:0,}" Feb 13 16:16:40.625007 kubelet[1944]: E0213 16:16:40.624276 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:16:40.820748 update_engine[1569]: I20250213 16:16:40.820562 1569 update_attempter.cc:509] Updating boot flags... Feb 13 16:16:40.870304 systemd-networkd[1224]: lxc2e603e42eb2f: Link UP Feb 13 16:16:40.891566 kernel: eth0: renamed from tmpef3b6 Feb 13 16:16:40.904429 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2451) Feb 13 16:16:40.919411 systemd-networkd[1224]: lxc2e603e42eb2f: Gained carrier Feb 13 16:16:41.240993 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2645) Feb 13 16:16:41.420429 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2645) Feb 13 16:16:41.624997 kubelet[1944]: E0213 16:16:41.624905 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:16:41.890966 kubelet[1944]: E0213 16:16:41.887852 1944 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 16:16:41.922241 systemd-networkd[1224]: lxc_health: Gained IPv6LL Feb 13 16:16:42.036230 kubelet[1944]: E0213 16:16:42.035715 1944 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 16:16:42.052000 systemd-networkd[1224]: lxc2e603e42eb2f: Gained IPv6LL Feb 13 16:16:42.625785 kubelet[1944]: E0213 16:16:42.625696 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:16:43.626611 kubelet[1944]: E0213 16:16:43.626508 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:16:44.627449 kubelet[1944]: E0213 16:16:44.627365 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:16:45.628847 kubelet[1944]: E0213 16:16:45.628735 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:16:46.631366 kubelet[1944]: E0213 16:16:46.631301 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:16:47.631801 kubelet[1944]: E0213 16:16:47.631717 
1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:16:48.659178 kubelet[1944]: E0213 16:16:48.632046 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:16:49.546935 kubelet[1944]: E0213 16:16:49.546559 1944 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:16:49.647942 kubelet[1944]: E0213 16:16:49.647851 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:16:49.945177 containerd[1602]: time="2025-02-13T16:16:49.942743651Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:16:49.945177 containerd[1602]: time="2025-02-13T16:16:49.943194040Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:16:49.945177 containerd[1602]: time="2025-02-13T16:16:49.943229266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:16:49.945177 containerd[1602]: time="2025-02-13T16:16:49.943482719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:16:50.117866 containerd[1602]: time="2025-02-13T16:16:50.117261128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-q2v56,Uid:9d8819d0-cb5b-4267-ad9f-0c43c369d17d,Namespace:default,Attempt:0,} returns sandbox id \"ef3b64bc051b982286479ca1f801b30e9acb0ef629d6c7cdcaa8b46360a6a787\"" Feb 13 16:16:50.123102 containerd[1602]: time="2025-02-13T16:16:50.122559627Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 16:16:50.649794 kubelet[1944]: E0213 16:16:50.649722 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:16:51.650872 kubelet[1944]: E0213 16:16:51.650800 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:16:52.653342 kubelet[1944]: E0213 16:16:52.653262 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:16:53.655800 kubelet[1944]: E0213 16:16:53.655746 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:16:54.588980 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2925534500.mount: Deactivated successfully. 
Feb 13 16:16:54.659319 kubelet[1944]: E0213 16:16:54.658379 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:16:55.659514 kubelet[1944]: E0213 16:16:55.659172 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:16:56.659661 kubelet[1944]: E0213 16:16:56.659579 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:16:57.329317 containerd[1602]: time="2025-02-13T16:16:57.329243694Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:16:57.334225 containerd[1602]: time="2025-02-13T16:16:57.334037375Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73054493" Feb 13 16:16:57.336201 containerd[1602]: time="2025-02-13T16:16:57.336083419Z" level=info msg="ImageCreate event name:\"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:16:57.347390 containerd[1602]: time="2025-02-13T16:16:57.346267251Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:16:57.348431 containerd[1602]: time="2025-02-13T16:16:57.348370738Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"73054371\" in 7.225724784s" Feb 13 16:16:57.348431 containerd[1602]: time="2025-02-13T16:16:57.348426541Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\"" Feb 13 16:16:57.356022 containerd[1602]: time="2025-02-13T16:16:57.355857096Z" level=info msg="CreateContainer within sandbox \"ef3b64bc051b982286479ca1f801b30e9acb0ef629d6c7cdcaa8b46360a6a787\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 13 16:16:57.376585 containerd[1602]: time="2025-02-13T16:16:57.375703874Z" level=info msg="CreateContainer within sandbox \"ef3b64bc051b982286479ca1f801b30e9acb0ef629d6c7cdcaa8b46360a6a787\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"dfccdd2c29b9af4b11a39757a67e63f4bb0540e4f515c252f97955d07e224977\"" Feb 13 16:16:57.377305 containerd[1602]: time="2025-02-13T16:16:57.377182381Z" level=info msg="StartContainer for \"dfccdd2c29b9af4b11a39757a67e63f4bb0540e4f515c252f97955d07e224977\"" Feb 13 16:16:57.575240 containerd[1602]: time="2025-02-13T16:16:57.572329234Z" level=info msg="StartContainer for \"dfccdd2c29b9af4b11a39757a67e63f4bb0540e4f515c252f97955d07e224977\" returns successfully" Feb 13 16:16:57.667681 kubelet[1944]: E0213 16:16:57.660428 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:16:58.171156 kubelet[1944]: I0213 16:16:58.169142 1944 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-q2v56" podStartSLOduration=11.941390277 podStartE2EDuration="19.169069479s" 
podCreationTimestamp="2025-02-13 16:16:39 +0000 UTC" firstStartedPulling="2025-02-13 16:16:50.121465247 +0000 UTC m=+41.307728244" lastFinishedPulling="2025-02-13 16:16:57.349144463 +0000 UTC m=+48.535407446" observedRunningTime="2025-02-13 16:16:58.166857976 +0000 UTC m=+49.353120971" watchObservedRunningTime="2025-02-13 16:16:58.169069479 +0000 UTC m=+49.355332478" Feb 13 16:16:58.660833 kubelet[1944]: E0213 16:16:58.660739 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:16:59.661705 kubelet[1944]: E0213 16:16:59.661570 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:17:00.663974 kubelet[1944]: E0213 16:17:00.663727 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:17:01.664417 kubelet[1944]: E0213 16:17:01.664339 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:17:02.665118 kubelet[1944]: E0213 16:17:02.665012 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:17:03.665937 kubelet[1944]: E0213 16:17:03.665828 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:17:04.667311 kubelet[1944]: E0213 16:17:04.667224 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:17:05.668388 kubelet[1944]: E0213 16:17:05.668319 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:17:06.091241 kubelet[1944]: I0213 16:17:06.089641 1944 topology_manager.go:215] "Topology Admit Handler" podUID="5d4a6e4c-fe65-4393-a77c-386e8b29b6a6" podNamespace="default" podName="nfs-server-provisioner-0" Feb 13 16:17:06.141432 kubelet[1944]: I0213 16:17:06.141355 1944 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/5d4a6e4c-fe65-4393-a77c-386e8b29b6a6-data\") pod \"nfs-server-provisioner-0\" (UID: \"5d4a6e4c-fe65-4393-a77c-386e8b29b6a6\") " pod="default/nfs-server-provisioner-0" Feb 13 16:17:06.142441 kubelet[1944]: I0213 16:17:06.142405 1944 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4j9v\" (UniqueName: \"kubernetes.io/projected/5d4a6e4c-fe65-4393-a77c-386e8b29b6a6-kube-api-access-b4j9v\") pod \"nfs-server-provisioner-0\" (UID: \"5d4a6e4c-fe65-4393-a77c-386e8b29b6a6\") " pod="default/nfs-server-provisioner-0" Feb 13 16:17:06.398675 containerd[1602]: time="2025-02-13T16:17:06.397720527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:5d4a6e4c-fe65-4393-a77c-386e8b29b6a6,Namespace:default,Attempt:0,}" Feb 13 16:17:06.502445 systemd-networkd[1224]: lxc27b2a62f0e24: Link UP Feb 13 16:17:06.512060 kernel: eth0: renamed from tmp78699 Feb 13 16:17:06.529114 systemd-networkd[1224]: lxc27b2a62f0e24: Gained carrier Feb 13 16:17:06.669513 kubelet[1944]: E0213 16:17:06.669352 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:17:06.856952 containerd[1602]: time="2025-02-13T16:17:06.854737523Z" level=info msg="loading 
plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:17:06.856952 containerd[1602]: time="2025-02-13T16:17:06.855505713Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:17:06.856952 containerd[1602]: time="2025-02-13T16:17:06.855598324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:17:06.856952 containerd[1602]: time="2025-02-13T16:17:06.855863762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:17:07.012091 containerd[1602]: time="2025-02-13T16:17:07.011693188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:5d4a6e4c-fe65-4393-a77c-386e8b29b6a6,Namespace:default,Attempt:0,} returns sandbox id \"78699c0b10523d051e334a82a7085c993dd97a3b5c82fce8c62e988d68a43c3c\"" Feb 13 16:17:07.015744 containerd[1602]: time="2025-02-13T16:17:07.015700002Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 13 16:17:07.673078 kubelet[1944]: E0213 16:17:07.672978 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:17:07.855419 systemd-networkd[1224]: lxc27b2a62f0e24: Gained IPv6LL Feb 13 16:17:08.675764 kubelet[1944]: E0213 16:17:08.674846 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:17:09.547122 kubelet[1944]: E0213 16:17:09.547007 1944 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:17:09.677461 kubelet[1944]: E0213 16:17:09.675248 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:17:10.682528 kubelet[1944]: E0213 16:17:10.682363 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:17:11.423200 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3331258951.mount: Deactivated successfully. 
Feb 13 16:17:11.687940 kubelet[1944]: E0213 16:17:11.687741 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:17:12.699816 kubelet[1944]: E0213 16:17:12.699713 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:17:13.701035 kubelet[1944]: E0213 16:17:13.700883 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:17:14.701362 kubelet[1944]: E0213 16:17:14.701312 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:17:15.704302 kubelet[1944]: E0213 16:17:15.703887 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:17:16.504906 containerd[1602]: time="2025-02-13T16:17:16.504283551Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:17:16.509325 containerd[1602]: time="2025-02-13T16:17:16.509065821Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Feb 13 16:17:16.517227 containerd[1602]: time="2025-02-13T16:17:16.516403547Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:17:16.520561 containerd[1602]: time="2025-02-13T16:17:16.520161295Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:17:16.527236 containerd[1602]: time="2025-02-13T16:17:16.524500653Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 9.508492899s" Feb 13 16:17:16.527236 containerd[1602]: time="2025-02-13T16:17:16.524567060Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Feb 13 16:17:16.528150 containerd[1602]: time="2025-02-13T16:17:16.528049641Z" level=info msg="CreateContainer within sandbox \"78699c0b10523d051e334a82a7085c993dd97a3b5c82fce8c62e988d68a43c3c\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 13 16:17:16.578664 containerd[1602]: time="2025-02-13T16:17:16.578507531Z" level=info msg="CreateContainer within sandbox \"78699c0b10523d051e334a82a7085c993dd97a3b5c82fce8c62e988d68a43c3c\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"52c383637473f83a82c8056e032a06a70c162b011109708efde093b3ecf84a9d\"" Feb 13 16:17:16.584033 containerd[1602]: time="2025-02-13T16:17:16.581234353Z" level=info msg="StartContainer for \"52c383637473f83a82c8056e032a06a70c162b011109708efde093b3ecf84a9d\"" Feb 13 16:17:16.673494 systemd[1]: 
run-containerd-runc-k8s.io-52c383637473f83a82c8056e032a06a70c162b011109708efde093b3ecf84a9d-runc.6Ab4Kb.mount: Deactivated successfully. Feb 13 16:17:16.715119 kubelet[1944]: E0213 16:17:16.715055 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:17:16.781783 containerd[1602]: time="2025-02-13T16:17:16.780503098Z" level=info msg="StartContainer for \"52c383637473f83a82c8056e032a06a70c162b011109708efde093b3ecf84a9d\" returns successfully" Feb 13 16:17:17.279776 kubelet[1944]: I0213 16:17:17.279483 1944 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.768710027 podStartE2EDuration="11.279421206s" podCreationTimestamp="2025-02-13 16:17:06 +0000 UTC" firstStartedPulling="2025-02-13 16:17:07.014395348 +0000 UTC m=+58.200658325" lastFinishedPulling="2025-02-13 16:17:16.525106527 +0000 UTC m=+67.711369504" observedRunningTime="2025-02-13 16:17:17.276480633 +0000 UTC m=+68.462743637" watchObservedRunningTime="2025-02-13 16:17:17.279421206 +0000 UTC m=+68.465684210" Feb 13 16:17:17.719383 kubelet[1944]: E0213 16:17:17.719176 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:17:18.719692 kubelet[1944]: E0213 16:17:18.719605 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:17:19.720780 kubelet[1944]: E0213 16:17:19.720613 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:17:20.721794 kubelet[1944]: E0213 16:17:20.721717 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:17:21.722472 kubelet[1944]: E0213 16:17:21.722349 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:17:22.722799 kubelet[1944]: E0213 16:17:22.722688 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:17:23.723286 kubelet[1944]: E0213 16:17:23.723184 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:17:24.723508 kubelet[1944]: E0213 16:17:24.723412 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:17:25.724389 kubelet[1944]: E0213 16:17:25.724293 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:17:26.725461 kubelet[1944]: E0213 16:17:26.725379 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:17:26.774030 kubelet[1944]: I0213 16:17:26.772802 1944 topology_manager.go:215] "Topology Admit Handler" podUID="32456ca8-4d08-49db-8af5-46c237a98d67" podNamespace="default" podName="test-pod-1" Feb 13 16:17:26.883350 kubelet[1944]: I0213 16:17:26.883276 1944 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbdg8\" (UniqueName: \"kubernetes.io/projected/32456ca8-4d08-49db-8af5-46c237a98d67-kube-api-access-tbdg8\") pod \"test-pod-1\" (UID: \"32456ca8-4d08-49db-8af5-46c237a98d67\") " pod="default/test-pod-1" Feb 13 16:17:26.883768 
kubelet[1944]: I0213 16:17:26.883732 1944 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ccf2276a-7b16-41aa-a6e3-8cbf02251c24\" (UniqueName: \"kubernetes.io/nfs/32456ca8-4d08-49db-8af5-46c237a98d67-pvc-ccf2276a-7b16-41aa-a6e3-8cbf02251c24\") pod \"test-pod-1\" (UID: \"32456ca8-4d08-49db-8af5-46c237a98d67\") " pod="default/test-pod-1" Feb 13 16:17:27.049535 kernel: FS-Cache: Loaded Feb 13 16:17:27.110486 systemd[1]: Started sshd@7-137.184.191.138:22-194.0.234.37:60064.service - OpenSSH per-connection server daemon (194.0.234.37:60064). Feb 13 16:17:27.165217 kernel: RPC: Registered named UNIX socket transport module. Feb 13 16:17:27.165373 kernel: RPC: Registered udp transport module. Feb 13 16:17:27.165411 kernel: RPC: Registered tcp transport module. Feb 13 16:17:27.165441 kernel: RPC: Registered tcp-with-tls transport module. Feb 13 16:17:27.165468 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Feb 13 16:17:27.611620 kernel: NFS: Registering the id_resolver key type Feb 13 16:17:27.611785 kernel: Key type id_resolver registered Feb 13 16:17:27.611820 kernel: Key type id_legacy registered Feb 13 16:17:27.670257 nfsidmap[3334]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '2.1-9-8a8a313a66' Feb 13 16:17:27.679605 nfsidmap[3335]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '2.1-9-8a8a313a66' Feb 13 16:17:27.726538 kubelet[1944]: E0213 16:17:27.726467 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:17:27.979227 containerd[1602]: time="2025-02-13T16:17:27.979017418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:32456ca8-4d08-49db-8af5-46c237a98d67,Namespace:default,Attempt:0,}" Feb 13 16:17:28.046402 systemd-networkd[1224]: lxc495a874e139f: Link UP Feb 13 16:17:28.058959 kernel: eth0: renamed from tmp17a7e Feb 13 16:17:28.081358 systemd-networkd[1224]: lxc495a874e139f: Gained carrier Feb 13 16:17:28.420889 containerd[1602]: time="2025-02-13T16:17:28.420039304Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:17:28.420889 containerd[1602]: time="2025-02-13T16:17:28.420132155Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:17:28.420889 containerd[1602]: time="2025-02-13T16:17:28.420159558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:17:28.420889 containerd[1602]: time="2025-02-13T16:17:28.420290021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:17:28.554120 containerd[1602]: time="2025-02-13T16:17:28.553887067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:32456ca8-4d08-49db-8af5-46c237a98d67,Namespace:default,Attempt:0,} returns sandbox id \"17a7e188af58c84d6acaf059527f40361e215260fd61bc98e7cf701614b68b74\"" Feb 13 16:17:28.557290 containerd[1602]: time="2025-02-13T16:17:28.556866010Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 16:17:28.632640 sshd[3322]: Connection closed by authenticating user root 194.0.234.37 port 60064 [preauth] Feb 13 16:17:28.636657 systemd[1]: sshd@7-137.184.191.138:22-194.0.234.37:60064.service: Deactivated successfully. Feb 13 16:17:28.727868 kubelet[1944]: E0213 16:17:28.727648 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:17:29.026159 containerd[1602]: time="2025-02-13T16:17:29.025965563Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:17:29.038209 containerd[1602]: time="2025-02-13T16:17:29.038076374Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Feb 13 16:17:29.046017 containerd[1602]: time="2025-02-13T16:17:29.045945138Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"73054371\" in 488.949197ms" Feb 13 16:17:29.047604 containerd[1602]: time="2025-02-13T16:17:29.047159476Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\"" Feb 13 16:17:29.068995 containerd[1602]: time="2025-02-13T16:17:29.067459915Z" level=info msg="CreateContainer within sandbox \"17a7e188af58c84d6acaf059527f40361e215260fd61bc98e7cf701614b68b74\" for container &ContainerMetadata{Name:test,Attempt:0,}" Feb 13 16:17:29.098591 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2788157270.mount: Deactivated successfully. Feb 13 16:17:29.103057 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1920934374.mount: Deactivated successfully. 
Feb 13 16:17:29.132023 containerd[1602]: time="2025-02-13T16:17:29.131358029Z" level=info msg="CreateContainer within sandbox \"17a7e188af58c84d6acaf059527f40361e215260fd61bc98e7cf701614b68b74\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"2b6ee89e70eca84ebd6da5d55671942fb3260d9bd5fe46336ac4192b6062bb34\"" Feb 13 16:17:29.133401 containerd[1602]: time="2025-02-13T16:17:29.133144906Z" level=info msg="StartContainer for \"2b6ee89e70eca84ebd6da5d55671942fb3260d9bd5fe46336ac4192b6062bb34\"" Feb 13 16:17:29.247995 containerd[1602]: time="2025-02-13T16:17:29.247343798Z" level=info msg="StartContainer for \"2b6ee89e70eca84ebd6da5d55671942fb3260d9bd5fe46336ac4192b6062bb34\" returns successfully" Feb 13 16:17:29.346276 systemd-networkd[1224]: lxc495a874e139f: Gained IPv6LL Feb 13 16:17:29.547907 kubelet[1944]: E0213 16:17:29.547695 1944 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:17:29.731629 kubelet[1944]: E0213 16:17:29.731346 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:17:30.732710 kubelet[1944]: E0213 16:17:30.732343 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:17:31.733555 kubelet[1944]: E0213 16:17:31.733037 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:17:32.737961 kubelet[1944]: E0213 16:17:32.737870 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:17:33.738852 kubelet[1944]: E0213 16:17:33.738767 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:17:34.740751 kubelet[1944]: E0213 16:17:34.740588 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:17:35.745088 kubelet[1944]: E0213 16:17:35.745007 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:17:36.746110 kubelet[1944]: E0213 16:17:36.746017 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:17:37.747725 kubelet[1944]: E0213 16:17:37.747600 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:17:37.874683 kubelet[1944]: I0213 16:17:37.874514 1944 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=31.375311859 podStartE2EDuration="31.87444748s" podCreationTimestamp="2025-02-13 16:17:06 +0000 UTC" firstStartedPulling="2025-02-13 16:17:28.556040687 +0000 UTC m=+79.742303671" lastFinishedPulling="2025-02-13 16:17:29.055176304 +0000 UTC m=+80.241439292" observedRunningTime="2025-02-13 16:17:29.329088994 +0000 UTC m=+80.515352004" watchObservedRunningTime="2025-02-13 16:17:37.87444748 +0000 UTC m=+89.060710479" Feb 13 16:17:37.957464 containerd[1602]: time="2025-02-13T16:17:37.957393424Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 
16:17:37.980744 containerd[1602]: time="2025-02-13T16:17:37.980353429Z" level=info msg="StopContainer for \"e97082b0858c9a695ce2e831acdd098ddcb6957ca4e81e6c30fc458db6c33c7c\" with timeout 2 (s)" Feb 13 16:17:37.981804 containerd[1602]: time="2025-02-13T16:17:37.981349645Z" level=info msg="Stop container \"e97082b0858c9a695ce2e831acdd098ddcb6957ca4e81e6c30fc458db6c33c7c\" with signal terminated" Feb 13 16:17:38.000577 systemd-networkd[1224]: lxc_health: Link DOWN Feb 13 16:17:38.000593 systemd-networkd[1224]: lxc_health: Lost carrier Feb 13 16:17:38.130593 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e97082b0858c9a695ce2e831acdd098ddcb6957ca4e81e6c30fc458db6c33c7c-rootfs.mount: Deactivated successfully. Feb 13 16:17:38.183015 containerd[1602]: time="2025-02-13T16:17:38.182756852Z" level=info msg="shim disconnected" id=e97082b0858c9a695ce2e831acdd098ddcb6957ca4e81e6c30fc458db6c33c7c namespace=k8s.io Feb 13 16:17:38.183015 containerd[1602]: time="2025-02-13T16:17:38.182845834Z" level=warning msg="cleaning up after shim disconnected" id=e97082b0858c9a695ce2e831acdd098ddcb6957ca4e81e6c30fc458db6c33c7c namespace=k8s.io Feb 13 16:17:38.183015 containerd[1602]: time="2025-02-13T16:17:38.182860201Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:17:38.217962 containerd[1602]: time="2025-02-13T16:17:38.217708906Z" level=info msg="StopContainer for \"e97082b0858c9a695ce2e831acdd098ddcb6957ca4e81e6c30fc458db6c33c7c\" returns successfully" Feb 13 16:17:38.219037 containerd[1602]: time="2025-02-13T16:17:38.218674257Z" level=info msg="StopPodSandbox for \"7c30ecd68de7ba3bd95cf3c3b73571ce84996e2b05ccabee9fcb353b667fc9f2\"" Feb 13 16:17:38.219037 containerd[1602]: time="2025-02-13T16:17:38.218749444Z" level=info msg="Container to stop \"e97082b0858c9a695ce2e831acdd098ddcb6957ca4e81e6c30fc458db6c33c7c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 16:17:38.219037 containerd[1602]: time="2025-02-13T16:17:38.218807748Z" level=info msg="Container to stop \"77f141df09f2f798f6f0512b590e0edf66dfb129449a565d439e25cd6fbea4a1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 16:17:38.219037 containerd[1602]: time="2025-02-13T16:17:38.218824619Z" level=info msg="Container to stop \"c6dfe5d0ee9c90dc1ec066c388f99d7910b03d9c3dfad89807a4bfef9c299ab1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 16:17:38.219037 containerd[1602]: time="2025-02-13T16:17:38.218840430Z" level=info msg="Container to stop \"f67cf79b7ff78659d98c30ff1b04a12cf50c941102a2ad02194b1bc334cc27f5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 16:17:38.219037 containerd[1602]: time="2025-02-13T16:17:38.218856118Z" level=info msg="Container to stop \"89f1734194c40dd1bbede860ce2e9e7b96e2ea33de6dabbdfb15e6ddb1fc4be1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 16:17:38.227623 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7c30ecd68de7ba3bd95cf3c3b73571ce84996e2b05ccabee9fcb353b667fc9f2-shm.mount: Deactivated successfully. Feb 13 16:17:38.293985 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c30ecd68de7ba3bd95cf3c3b73571ce84996e2b05ccabee9fcb353b667fc9f2-rootfs.mount: Deactivated successfully. 
Feb 13 16:17:38.335022 containerd[1602]: time="2025-02-13T16:17:38.326618231Z" level=info msg="shim disconnected" id=7c30ecd68de7ba3bd95cf3c3b73571ce84996e2b05ccabee9fcb353b667fc9f2 namespace=k8s.io Feb 13 16:17:38.335022 containerd[1602]: time="2025-02-13T16:17:38.326715008Z" level=warning msg="cleaning up after shim disconnected" id=7c30ecd68de7ba3bd95cf3c3b73571ce84996e2b05ccabee9fcb353b667fc9f2 namespace=k8s.io Feb 13 16:17:38.335022 containerd[1602]: time="2025-02-13T16:17:38.326728964Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:17:38.416421 containerd[1602]: time="2025-02-13T16:17:38.416329063Z" level=info msg="TearDown network for sandbox \"7c30ecd68de7ba3bd95cf3c3b73571ce84996e2b05ccabee9fcb353b667fc9f2\" successfully" Feb 13 16:17:38.416421 containerd[1602]: time="2025-02-13T16:17:38.416401753Z" level=info msg="StopPodSandbox for \"7c30ecd68de7ba3bd95cf3c3b73571ce84996e2b05ccabee9fcb353b667fc9f2\" returns successfully" Feb 13 16:17:38.544393 kubelet[1944]: I0213 16:17:38.532773 1944 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1e64bf96-dde5-4fa1-91f9-c0463e99a98a-cilium-config-path\") pod \"1e64bf96-dde5-4fa1-91f9-c0463e99a98a\" (UID: \"1e64bf96-dde5-4fa1-91f9-c0463e99a98a\") " Feb 13 16:17:38.544393 kubelet[1944]: I0213 16:17:38.532839 1944 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e64bf96-dde5-4fa1-91f9-c0463e99a98a-xtables-lock\") pod \"1e64bf96-dde5-4fa1-91f9-c0463e99a98a\" (UID: \"1e64bf96-dde5-4fa1-91f9-c0463e99a98a\") " Feb 13 16:17:38.544393 kubelet[1944]: I0213 16:17:38.532868 1944 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1e64bf96-dde5-4fa1-91f9-c0463e99a98a-bpf-maps\") pod \"1e64bf96-dde5-4fa1-91f9-c0463e99a98a\" (UID: \"1e64bf96-dde5-4fa1-91f9-c0463e99a98a\") " Feb 13 16:17:38.544393 kubelet[1944]: I0213 16:17:38.532897 1944 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1e64bf96-dde5-4fa1-91f9-c0463e99a98a-etc-cni-netd\") pod \"1e64bf96-dde5-4fa1-91f9-c0463e99a98a\" (UID: \"1e64bf96-dde5-4fa1-91f9-c0463e99a98a\") " Feb 13 16:17:38.544393 kubelet[1944]: I0213 16:17:38.532954 1944 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1e64bf96-dde5-4fa1-91f9-c0463e99a98a-cilium-cgroup\") pod \"1e64bf96-dde5-4fa1-91f9-c0463e99a98a\" (UID: \"1e64bf96-dde5-4fa1-91f9-c0463e99a98a\") " Feb 13 16:17:38.544393 kubelet[1944]: I0213 16:17:38.532987 1944 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rcgw\" (UniqueName: \"kubernetes.io/projected/1e64bf96-dde5-4fa1-91f9-c0463e99a98a-kube-api-access-4rcgw\") pod \"1e64bf96-dde5-4fa1-91f9-c0463e99a98a\" (UID: \"1e64bf96-dde5-4fa1-91f9-c0463e99a98a\") " Feb 13 16:17:38.544852 kubelet[1944]: I0213 16:17:38.533015 1944 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1e64bf96-dde5-4fa1-91f9-c0463e99a98a-lib-modules\") pod \"1e64bf96-dde5-4fa1-91f9-c0463e99a98a\" (UID: \"1e64bf96-dde5-4fa1-91f9-c0463e99a98a\") " Feb 13 16:17:38.544852 kubelet[1944]: I0213 16:17:38.533040 1944 reconciler_common.go:172] "operationExecutor.UnmountVolume started for 
volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1e64bf96-dde5-4fa1-91f9-c0463e99a98a-cni-path\") pod \"1e64bf96-dde5-4fa1-91f9-c0463e99a98a\" (UID: \"1e64bf96-dde5-4fa1-91f9-c0463e99a98a\") " Feb 13 16:17:38.544852 kubelet[1944]: I0213 16:17:38.533067 1944 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1e64bf96-dde5-4fa1-91f9-c0463e99a98a-hubble-tls\") pod \"1e64bf96-dde5-4fa1-91f9-c0463e99a98a\" (UID: \"1e64bf96-dde5-4fa1-91f9-c0463e99a98a\") " Feb 13 16:17:38.544852 kubelet[1944]: I0213 16:17:38.533102 1944 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1e64bf96-dde5-4fa1-91f9-c0463e99a98a-clustermesh-secrets\") pod \"1e64bf96-dde5-4fa1-91f9-c0463e99a98a\" (UID: \"1e64bf96-dde5-4fa1-91f9-c0463e99a98a\") " Feb 13 16:17:38.544852 kubelet[1944]: I0213 16:17:38.533128 1944 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1e64bf96-dde5-4fa1-91f9-c0463e99a98a-host-proc-sys-net\") pod \"1e64bf96-dde5-4fa1-91f9-c0463e99a98a\" (UID: \"1e64bf96-dde5-4fa1-91f9-c0463e99a98a\") " Feb 13 16:17:38.544852 kubelet[1944]: I0213 16:17:38.533154 1944 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1e64bf96-dde5-4fa1-91f9-c0463e99a98a-cilium-run\") pod \"1e64bf96-dde5-4fa1-91f9-c0463e99a98a\" (UID: \"1e64bf96-dde5-4fa1-91f9-c0463e99a98a\") " Feb 13 16:17:38.545181 kubelet[1944]: I0213 16:17:38.533179 1944 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1e64bf96-dde5-4fa1-91f9-c0463e99a98a-host-proc-sys-kernel\") pod \"1e64bf96-dde5-4fa1-91f9-c0463e99a98a\" (UID: \"1e64bf96-dde5-4fa1-91f9-c0463e99a98a\") " Feb 13 16:17:38.545181 kubelet[1944]: I0213 16:17:38.533205 1944 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1e64bf96-dde5-4fa1-91f9-c0463e99a98a-hostproc\") pod \"1e64bf96-dde5-4fa1-91f9-c0463e99a98a\" (UID: \"1e64bf96-dde5-4fa1-91f9-c0463e99a98a\") " Feb 13 16:17:38.545181 kubelet[1944]: I0213 16:17:38.533313 1944 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e64bf96-dde5-4fa1-91f9-c0463e99a98a-hostproc" (OuterVolumeSpecName: "hostproc") pod "1e64bf96-dde5-4fa1-91f9-c0463e99a98a" (UID: "1e64bf96-dde5-4fa1-91f9-c0463e99a98a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 16:17:38.546778 kubelet[1944]: I0213 16:17:38.545608 1944 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e64bf96-dde5-4fa1-91f9-c0463e99a98a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1e64bf96-dde5-4fa1-91f9-c0463e99a98a" (UID: "1e64bf96-dde5-4fa1-91f9-c0463e99a98a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 16:17:38.546778 kubelet[1944]: I0213 16:17:38.545686 1944 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e64bf96-dde5-4fa1-91f9-c0463e99a98a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1e64bf96-dde5-4fa1-91f9-c0463e99a98a" (UID: "1e64bf96-dde5-4fa1-91f9-c0463e99a98a"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 16:17:38.546778 kubelet[1944]: I0213 16:17:38.545734 1944 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e64bf96-dde5-4fa1-91f9-c0463e99a98a-cni-path" (OuterVolumeSpecName: "cni-path") pod "1e64bf96-dde5-4fa1-91f9-c0463e99a98a" (UID: "1e64bf96-dde5-4fa1-91f9-c0463e99a98a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 16:17:38.546778 kubelet[1944]: I0213 16:17:38.545752 1944 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e64bf96-dde5-4fa1-91f9-c0463e99a98a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1e64bf96-dde5-4fa1-91f9-c0463e99a98a" (UID: "1e64bf96-dde5-4fa1-91f9-c0463e99a98a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 16:17:38.546778 kubelet[1944]: I0213 16:17:38.545780 1944 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e64bf96-dde5-4fa1-91f9-c0463e99a98a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "1e64bf96-dde5-4fa1-91f9-c0463e99a98a" (UID: "1e64bf96-dde5-4fa1-91f9-c0463e99a98a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 16:17:38.549207 kubelet[1944]: I0213 16:17:38.545805 1944 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e64bf96-dde5-4fa1-91f9-c0463e99a98a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "1e64bf96-dde5-4fa1-91f9-c0463e99a98a" (UID: "1e64bf96-dde5-4fa1-91f9-c0463e99a98a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 16:17:38.549207 kubelet[1944]: I0213 16:17:38.545830 1944 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e64bf96-dde5-4fa1-91f9-c0463e99a98a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "1e64bf96-dde5-4fa1-91f9-c0463e99a98a" (UID: "1e64bf96-dde5-4fa1-91f9-c0463e99a98a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 16:17:38.549207 kubelet[1944]: I0213 16:17:38.548893 1944 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e64bf96-dde5-4fa1-91f9-c0463e99a98a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "1e64bf96-dde5-4fa1-91f9-c0463e99a98a" (UID: "1e64bf96-dde5-4fa1-91f9-c0463e99a98a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 16:17:38.549478 kubelet[1944]: I0213 16:17:38.548990 1944 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e64bf96-dde5-4fa1-91f9-c0463e99a98a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "1e64bf96-dde5-4fa1-91f9-c0463e99a98a" (UID: "1e64bf96-dde5-4fa1-91f9-c0463e99a98a"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 16:17:38.549478 kubelet[1944]: I0213 16:17:38.549153 1944 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e64bf96-dde5-4fa1-91f9-c0463e99a98a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "1e64bf96-dde5-4fa1-91f9-c0463e99a98a" (UID: "1e64bf96-dde5-4fa1-91f9-c0463e99a98a"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 16:17:38.556991 kubelet[1944]: I0213 16:17:38.555155 1944 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e64bf96-dde5-4fa1-91f9-c0463e99a98a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "1e64bf96-dde5-4fa1-91f9-c0463e99a98a" (UID: "1e64bf96-dde5-4fa1-91f9-c0463e99a98a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 16:17:38.556991 kubelet[1944]: I0213 16:17:38.555289 1944 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e64bf96-dde5-4fa1-91f9-c0463e99a98a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "1e64bf96-dde5-4fa1-91f9-c0463e99a98a" (UID: "1e64bf96-dde5-4fa1-91f9-c0463e99a98a"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 16:17:38.564865 kubelet[1944]: I0213 16:17:38.559266 1944 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e64bf96-dde5-4fa1-91f9-c0463e99a98a-kube-api-access-4rcgw" (OuterVolumeSpecName: "kube-api-access-4rcgw") pod "1e64bf96-dde5-4fa1-91f9-c0463e99a98a" (UID: "1e64bf96-dde5-4fa1-91f9-c0463e99a98a"). InnerVolumeSpecName "kube-api-access-4rcgw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 16:17:38.559967 systemd[1]: var-lib-kubelet-pods-1e64bf96\x2ddde5\x2d4fa1\x2d91f9\x2dc0463e99a98a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 13 16:17:38.636013 kubelet[1944]: I0213 16:17:38.635548 1944 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1e64bf96-dde5-4fa1-91f9-c0463e99a98a-hostproc\") on node \"137.184.191.138\" DevicePath \"\"" Feb 13 16:17:38.636013 kubelet[1944]: I0213 16:17:38.635607 1944 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1e64bf96-dde5-4fa1-91f9-c0463e99a98a-cilium-run\") on node \"137.184.191.138\" DevicePath \"\"" Feb 13 16:17:38.636013 kubelet[1944]: I0213 16:17:38.635632 1944 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1e64bf96-dde5-4fa1-91f9-c0463e99a98a-host-proc-sys-kernel\") on node \"137.184.191.138\" DevicePath \"\"" Feb 13 16:17:38.636013 kubelet[1944]: I0213 16:17:38.635658 1944 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1e64bf96-dde5-4fa1-91f9-c0463e99a98a-cilium-config-path\") on node \"137.184.191.138\" DevicePath \"\"" Feb 13 16:17:38.636013 kubelet[1944]: I0213 16:17:38.635678 1944 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e64bf96-dde5-4fa1-91f9-c0463e99a98a-xtables-lock\") on node \"137.184.191.138\" DevicePath \"\"" Feb 13 16:17:38.636013 kubelet[1944]: I0213 16:17:38.635697 1944 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1e64bf96-dde5-4fa1-91f9-c0463e99a98a-bpf-maps\") on node \"137.184.191.138\" DevicePath \"\"" Feb 13 16:17:38.636013 kubelet[1944]: I0213 16:17:38.635716 1944 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-4rcgw\" (UniqueName: \"kubernetes.io/projected/1e64bf96-dde5-4fa1-91f9-c0463e99a98a-kube-api-access-4rcgw\") on node \"137.184.191.138\" DevicePath \"\"" Feb 13 16:17:38.636013 kubelet[1944]: 
I0213 16:17:38.635733 1944 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1e64bf96-dde5-4fa1-91f9-c0463e99a98a-etc-cni-netd\") on node \"137.184.191.138\" DevicePath \"\"" Feb 13 16:17:38.637198 kubelet[1944]: I0213 16:17:38.635749 1944 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1e64bf96-dde5-4fa1-91f9-c0463e99a98a-cilium-cgroup\") on node \"137.184.191.138\" DevicePath \"\"" Feb 13 16:17:38.637198 kubelet[1944]: I0213 16:17:38.635767 1944 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1e64bf96-dde5-4fa1-91f9-c0463e99a98a-clustermesh-secrets\") on node \"137.184.191.138\" DevicePath \"\"" Feb 13 16:17:38.637198 kubelet[1944]: I0213 16:17:38.635786 1944 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1e64bf96-dde5-4fa1-91f9-c0463e99a98a-host-proc-sys-net\") on node \"137.184.191.138\" DevicePath \"\"" Feb 13 16:17:38.637198 kubelet[1944]: I0213 16:17:38.635802 1944 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1e64bf96-dde5-4fa1-91f9-c0463e99a98a-lib-modules\") on node \"137.184.191.138\" DevicePath \"\"" Feb 13 16:17:38.637198 kubelet[1944]: I0213 16:17:38.635817 1944 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1e64bf96-dde5-4fa1-91f9-c0463e99a98a-cni-path\") on node \"137.184.191.138\" DevicePath \"\"" Feb 13 16:17:38.637198 kubelet[1944]: I0213 16:17:38.635832 1944 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1e64bf96-dde5-4fa1-91f9-c0463e99a98a-hubble-tls\") on node \"137.184.191.138\" DevicePath \"\"" Feb 13 16:17:38.748373 kubelet[1944]: E0213 16:17:38.748235 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:17:38.930650 systemd[1]: var-lib-kubelet-pods-1e64bf96\x2ddde5\x2d4fa1\x2d91f9\x2dc0463e99a98a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4rcgw.mount: Deactivated successfully. Feb 13 16:17:38.930975 systemd[1]: var-lib-kubelet-pods-1e64bf96\x2ddde5\x2d4fa1\x2d91f9\x2dc0463e99a98a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Feb 13 16:17:39.404021 kubelet[1944]: I0213 16:17:39.402138 1944 scope.go:117] "RemoveContainer" containerID="e97082b0858c9a695ce2e831acdd098ddcb6957ca4e81e6c30fc458db6c33c7c" Feb 13 16:17:39.410427 containerd[1602]: time="2025-02-13T16:17:39.410010870Z" level=info msg="RemoveContainer for \"e97082b0858c9a695ce2e831acdd098ddcb6957ca4e81e6c30fc458db6c33c7c\"" Feb 13 16:17:39.416738 containerd[1602]: time="2025-02-13T16:17:39.416438791Z" level=info msg="RemoveContainer for \"e97082b0858c9a695ce2e831acdd098ddcb6957ca4e81e6c30fc458db6c33c7c\" returns successfully" Feb 13 16:17:39.417396 kubelet[1944]: I0213 16:17:39.417220 1944 scope.go:117] "RemoveContainer" containerID="f67cf79b7ff78659d98c30ff1b04a12cf50c941102a2ad02194b1bc334cc27f5" Feb 13 16:17:39.420964 containerd[1602]: time="2025-02-13T16:17:39.420452691Z" level=info msg="RemoveContainer for \"f67cf79b7ff78659d98c30ff1b04a12cf50c941102a2ad02194b1bc334cc27f5\"" Feb 13 16:17:39.432445 containerd[1602]: time="2025-02-13T16:17:39.429900935Z" level=info msg="RemoveContainer for \"f67cf79b7ff78659d98c30ff1b04a12cf50c941102a2ad02194b1bc334cc27f5\" returns successfully" Feb 13 16:17:39.440072 kubelet[1944]: I0213 16:17:39.432758 1944 scope.go:117] "RemoveContainer" containerID="c6dfe5d0ee9c90dc1ec066c388f99d7910b03d9c3dfad89807a4bfef9c299ab1" Feb 13 16:17:39.446346 containerd[1602]: time="2025-02-13T16:17:39.444265105Z" level=info msg="RemoveContainer for \"c6dfe5d0ee9c90dc1ec066c388f99d7910b03d9c3dfad89807a4bfef9c299ab1\"" Feb 13 16:17:39.462077 containerd[1602]: time="2025-02-13T16:17:39.461939869Z" level=info msg="RemoveContainer for \"c6dfe5d0ee9c90dc1ec066c388f99d7910b03d9c3dfad89807a4bfef9c299ab1\" returns successfully" Feb 13 16:17:39.462629 kubelet[1944]: I0213 16:17:39.462468 1944 scope.go:117] "RemoveContainer" containerID="89f1734194c40dd1bbede860ce2e9e7b96e2ea33de6dabbdfb15e6ddb1fc4be1" Feb 13 16:17:39.470502 containerd[1602]: time="2025-02-13T16:17:39.469780064Z" level=info msg="RemoveContainer for \"89f1734194c40dd1bbede860ce2e9e7b96e2ea33de6dabbdfb15e6ddb1fc4be1\"" Feb 13 16:17:39.489435 containerd[1602]: time="2025-02-13T16:17:39.489114366Z" level=info msg="RemoveContainer for \"89f1734194c40dd1bbede860ce2e9e7b96e2ea33de6dabbdfb15e6ddb1fc4be1\" returns successfully" Feb 13 16:17:39.490747 kubelet[1944]: I0213 16:17:39.490140 1944 scope.go:117] "RemoveContainer" containerID="77f141df09f2f798f6f0512b590e0edf66dfb129449a565d439e25cd6fbea4a1" Feb 13 16:17:39.495001 containerd[1602]: time="2025-02-13T16:17:39.493510062Z" level=info msg="RemoveContainer for \"77f141df09f2f798f6f0512b590e0edf66dfb129449a565d439e25cd6fbea4a1\"" Feb 13 16:17:39.504673 containerd[1602]: time="2025-02-13T16:17:39.504414498Z" level=info msg="RemoveContainer for \"77f141df09f2f798f6f0512b590e0edf66dfb129449a565d439e25cd6fbea4a1\" returns successfully" Feb 13 16:17:39.748896 kubelet[1944]: E0213 16:17:39.748641 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:17:39.773047 kubelet[1944]: E0213 16:17:39.772967 1944 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 16:17:39.775556 kubelet[1944]: I0213 16:17:39.774523 1944 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="1e64bf96-dde5-4fa1-91f9-c0463e99a98a" path="/var/lib/kubelet/pods/1e64bf96-dde5-4fa1-91f9-c0463e99a98a/volumes" Feb 13 16:17:40.749241 kubelet[1944]: E0213 
16:17:40.749186 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:17:41.447786 kubelet[1944]: I0213 16:17:41.443373 1944 topology_manager.go:215] "Topology Admit Handler" podUID="95312267-bf03-4ddc-afe9-ef6ebcb1938b" podNamespace="kube-system" podName="cilium-operator-5cc964979-qwhsr" Feb 13 16:17:41.447786 kubelet[1944]: E0213 16:17:41.443479 1944 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1e64bf96-dde5-4fa1-91f9-c0463e99a98a" containerName="mount-cgroup" Feb 13 16:17:41.447786 kubelet[1944]: E0213 16:17:41.443501 1944 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1e64bf96-dde5-4fa1-91f9-c0463e99a98a" containerName="apply-sysctl-overwrites" Feb 13 16:17:41.447786 kubelet[1944]: E0213 16:17:41.443515 1944 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1e64bf96-dde5-4fa1-91f9-c0463e99a98a" containerName="cilium-agent" Feb 13 16:17:41.447786 kubelet[1944]: E0213 16:17:41.443526 1944 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1e64bf96-dde5-4fa1-91f9-c0463e99a98a" containerName="mount-bpf-fs" Feb 13 16:17:41.447786 kubelet[1944]: E0213 16:17:41.443539 1944 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1e64bf96-dde5-4fa1-91f9-c0463e99a98a" containerName="clean-cilium-state" Feb 13 16:17:41.447786 kubelet[1944]: I0213 16:17:41.443578 1944 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e64bf96-dde5-4fa1-91f9-c0463e99a98a" containerName="cilium-agent" Feb 13 16:17:41.503698 kubelet[1944]: I0213 16:17:41.500651 1944 topology_manager.go:215] "Topology Admit Handler" podUID="3de210ef-3fdf-401e-89cf-9efb7f71cb77" podNamespace="kube-system" podName="cilium-jg9xh" Feb 13 16:17:41.533960 kubelet[1944]: W0213 16:17:41.531322 1944 reflector.go:539] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:137.184.191.138" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '137.184.191.138' and this object Feb 13 16:17:41.533960 kubelet[1944]: E0213 16:17:41.531393 1944 reflector.go:147] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:137.184.191.138" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '137.184.191.138' and this object Feb 13 16:17:41.566784 kubelet[1944]: I0213 16:17:41.566721 1944 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3de210ef-3fdf-401e-89cf-9efb7f71cb77-cilium-config-path\") pod \"cilium-jg9xh\" (UID: \"3de210ef-3fdf-401e-89cf-9efb7f71cb77\") " pod="kube-system/cilium-jg9xh" Feb 13 16:17:41.567301 kubelet[1944]: I0213 16:17:41.567264 1944 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3de210ef-3fdf-401e-89cf-9efb7f71cb77-hubble-tls\") pod \"cilium-jg9xh\" (UID: \"3de210ef-3fdf-401e-89cf-9efb7f71cb77\") " pod="kube-system/cilium-jg9xh" Feb 13 16:17:41.570342 kubelet[1944]: I0213 16:17:41.567605 1944 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/3de210ef-3fdf-401e-89cf-9efb7f71cb77-host-proc-sys-net\") pod \"cilium-jg9xh\" (UID: \"3de210ef-3fdf-401e-89cf-9efb7f71cb77\") " pod="kube-system/cilium-jg9xh" Feb 13 16:17:41.570342 kubelet[1944]: I0213 16:17:41.567662 1944 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3de210ef-3fdf-401e-89cf-9efb7f71cb77-cilium-run\") pod \"cilium-jg9xh\" (UID: \"3de210ef-3fdf-401e-89cf-9efb7f71cb77\") " pod="kube-system/cilium-jg9xh" Feb 13 16:17:41.570342 kubelet[1944]: I0213 16:17:41.567700 1944 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3de210ef-3fdf-401e-89cf-9efb7f71cb77-bpf-maps\") pod \"cilium-jg9xh\" (UID: \"3de210ef-3fdf-401e-89cf-9efb7f71cb77\") " pod="kube-system/cilium-jg9xh" Feb 13 16:17:41.570342 kubelet[1944]: I0213 16:17:41.567723 1944 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3de210ef-3fdf-401e-89cf-9efb7f71cb77-cni-path\") pod \"cilium-jg9xh\" (UID: \"3de210ef-3fdf-401e-89cf-9efb7f71cb77\") " pod="kube-system/cilium-jg9xh" Feb 13 16:17:41.570342 kubelet[1944]: I0213 16:17:41.567745 1944 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3de210ef-3fdf-401e-89cf-9efb7f71cb77-host-proc-sys-kernel\") pod \"cilium-jg9xh\" (UID: \"3de210ef-3fdf-401e-89cf-9efb7f71cb77\") " pod="kube-system/cilium-jg9xh" Feb 13 16:17:41.570342 kubelet[1944]: I0213 16:17:41.567778 1944 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3de210ef-3fdf-401e-89cf-9efb7f71cb77-cilium-cgroup\") pod \"cilium-jg9xh\" (UID: \"3de210ef-3fdf-401e-89cf-9efb7f71cb77\") " pod="kube-system/cilium-jg9xh" Feb 13 16:17:41.570954 kubelet[1944]: I0213 16:17:41.567799 1944 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3de210ef-3fdf-401e-89cf-9efb7f71cb77-hostproc\") pod \"cilium-jg9xh\" (UID: \"3de210ef-3fdf-401e-89cf-9efb7f71cb77\") " pod="kube-system/cilium-jg9xh" Feb 13 16:17:41.570954 kubelet[1944]: I0213 16:17:41.567827 1944 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/95312267-bf03-4ddc-afe9-ef6ebcb1938b-cilium-config-path\") pod \"cilium-operator-5cc964979-qwhsr\" (UID: \"95312267-bf03-4ddc-afe9-ef6ebcb1938b\") " pod="kube-system/cilium-operator-5cc964979-qwhsr" Feb 13 16:17:41.570954 kubelet[1944]: I0213 16:17:41.567846 1944 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrch5\" (UniqueName: \"kubernetes.io/projected/95312267-bf03-4ddc-afe9-ef6ebcb1938b-kube-api-access-mrch5\") pod \"cilium-operator-5cc964979-qwhsr\" (UID: \"95312267-bf03-4ddc-afe9-ef6ebcb1938b\") " pod="kube-system/cilium-operator-5cc964979-qwhsr" Feb 13 16:17:41.570954 kubelet[1944]: I0213 16:17:41.567864 1944 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3de210ef-3fdf-401e-89cf-9efb7f71cb77-lib-modules\") pod \"cilium-jg9xh\" (UID: 
\"3de210ef-3fdf-401e-89cf-9efb7f71cb77\") " pod="kube-system/cilium-jg9xh" Feb 13 16:17:41.570954 kubelet[1944]: I0213 16:17:41.567891 1944 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nl64m\" (UniqueName: \"kubernetes.io/projected/3de210ef-3fdf-401e-89cf-9efb7f71cb77-kube-api-access-nl64m\") pod \"cilium-jg9xh\" (UID: \"3de210ef-3fdf-401e-89cf-9efb7f71cb77\") " pod="kube-system/cilium-jg9xh" Feb 13 16:17:41.571171 kubelet[1944]: I0213 16:17:41.567949 1944 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3de210ef-3fdf-401e-89cf-9efb7f71cb77-xtables-lock\") pod \"cilium-jg9xh\" (UID: \"3de210ef-3fdf-401e-89cf-9efb7f71cb77\") " pod="kube-system/cilium-jg9xh" Feb 13 16:17:41.571171 kubelet[1944]: I0213 16:17:41.567974 1944 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3de210ef-3fdf-401e-89cf-9efb7f71cb77-cilium-ipsec-secrets\") pod \"cilium-jg9xh\" (UID: \"3de210ef-3fdf-401e-89cf-9efb7f71cb77\") " pod="kube-system/cilium-jg9xh" Feb 13 16:17:41.571171 kubelet[1944]: I0213 16:17:41.567999 1944 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3de210ef-3fdf-401e-89cf-9efb7f71cb77-etc-cni-netd\") pod \"cilium-jg9xh\" (UID: \"3de210ef-3fdf-401e-89cf-9efb7f71cb77\") " pod="kube-system/cilium-jg9xh" Feb 13 16:17:41.571171 kubelet[1944]: I0213 16:17:41.568030 1944 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3de210ef-3fdf-401e-89cf-9efb7f71cb77-clustermesh-secrets\") pod \"cilium-jg9xh\" (UID: \"3de210ef-3fdf-401e-89cf-9efb7f71cb77\") " pod="kube-system/cilium-jg9xh" Feb 13 16:17:41.749784 kubelet[1944]: E0213 16:17:41.747713 1944 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 16:17:41.750388 containerd[1602]: time="2025-02-13T16:17:41.749077232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-qwhsr,Uid:95312267-bf03-4ddc-afe9-ef6ebcb1938b,Namespace:kube-system,Attempt:0,}" Feb 13 16:17:41.756303 kubelet[1944]: E0213 16:17:41.753884 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:17:41.796271 kubelet[1944]: I0213 16:17:41.794890 1944 setters.go:568] "Node became not ready" node="137.184.191.138" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T16:17:41Z","lastTransitionTime":"2025-02-13T16:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Feb 13 16:17:41.816352 containerd[1602]: time="2025-02-13T16:17:41.815822561Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:17:41.816352 containerd[1602]: time="2025-02-13T16:17:41.815941113Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:17:41.816352 containerd[1602]: time="2025-02-13T16:17:41.815968861Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:17:41.816352 containerd[1602]: time="2025-02-13T16:17:41.816120389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:17:41.928238 containerd[1602]: time="2025-02-13T16:17:41.928154978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-qwhsr,Uid:95312267-bf03-4ddc-afe9-ef6ebcb1938b,Namespace:kube-system,Attempt:0,} returns sandbox id \"a5e75bdc8e23718114d62dcc77907fcfa2f8cd4e2bdec8020c1221a81a5fa4e5\"" Feb 13 16:17:41.929740 kubelet[1944]: E0213 16:17:41.929711 1944 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 16:17:41.931788 containerd[1602]: time="2025-02-13T16:17:41.931666811Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 16:17:42.671886 kubelet[1944]: E0213 16:17:42.671684 1944 projected.go:269] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Feb 13 16:17:42.671886 kubelet[1944]: E0213 16:17:42.671775 1944 projected.go:200] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-jg9xh: failed to sync secret cache: timed out waiting for the condition Feb 13 16:17:42.672843 kubelet[1944]: E0213 16:17:42.672321 1944 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3de210ef-3fdf-401e-89cf-9efb7f71cb77-hubble-tls podName:3de210ef-3fdf-401e-89cf-9efb7f71cb77 nodeName:}" failed. No retries permitted until 2025-02-13 16:17:43.172263618 +0000 UTC m=+94.358526621 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/3de210ef-3fdf-401e-89cf-9efb7f71cb77-hubble-tls") pod "cilium-jg9xh" (UID: "3de210ef-3fdf-401e-89cf-9efb7f71cb77") : failed to sync secret cache: timed out waiting for the condition Feb 13 16:17:42.756308 kubelet[1944]: E0213 16:17:42.756216 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:17:43.307971 kubelet[1944]: E0213 16:17:43.307415 1944 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 16:17:43.309233 containerd[1602]: time="2025-02-13T16:17:43.309169804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jg9xh,Uid:3de210ef-3fdf-401e-89cf-9efb7f71cb77,Namespace:kube-system,Attempt:0,}" Feb 13 16:17:43.392236 containerd[1602]: time="2025-02-13T16:17:43.391684261Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:17:43.392236 containerd[1602]: time="2025-02-13T16:17:43.391833152Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:17:43.392236 containerd[1602]: time="2025-02-13T16:17:43.391859781Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:17:43.395646 containerd[1602]: time="2025-02-13T16:17:43.393655700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:17:43.539739 containerd[1602]: time="2025-02-13T16:17:43.539459077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jg9xh,Uid:3de210ef-3fdf-401e-89cf-9efb7f71cb77,Namespace:kube-system,Attempt:0,} returns sandbox id \"eebbdcedac36c4309e0dd0f69a3941d6a0c1f4ac3f0ab7d205508fbe9468282f\"" Feb 13 16:17:43.547083 kubelet[1944]: E0213 16:17:43.546065 1944 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 16:17:43.550905 containerd[1602]: time="2025-02-13T16:17:43.550855612Z" level=info msg="CreateContainer within sandbox \"eebbdcedac36c4309e0dd0f69a3941d6a0c1f4ac3f0ab7d205508fbe9468282f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 16:17:43.590179 containerd[1602]: time="2025-02-13T16:17:43.589745760Z" level=info msg="CreateContainer within sandbox \"eebbdcedac36c4309e0dd0f69a3941d6a0c1f4ac3f0ab7d205508fbe9468282f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"cb571b95076aa78e2d2539fc5ce113f79bd009910fb0f52cd58dfa21181b34b6\"" Feb 13 16:17:43.592596 containerd[1602]: time="2025-02-13T16:17:43.592556417Z" level=info msg="StartContainer for \"cb571b95076aa78e2d2539fc5ce113f79bd009910fb0f52cd58dfa21181b34b6\"" Feb 13 16:17:43.696316 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1377347248.mount: Deactivated successfully. Feb 13 16:17:43.756879 kubelet[1944]: E0213 16:17:43.756807 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:17:43.825161 containerd[1602]: time="2025-02-13T16:17:43.825075672Z" level=info msg="StartContainer for \"cb571b95076aa78e2d2539fc5ce113f79bd009910fb0f52cd58dfa21181b34b6\" returns successfully" Feb 13 16:17:43.919299 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb571b95076aa78e2d2539fc5ce113f79bd009910fb0f52cd58dfa21181b34b6-rootfs.mount: Deactivated successfully. 
Feb 13 16:17:43.993523 containerd[1602]: time="2025-02-13T16:17:43.993426708Z" level=info msg="shim disconnected" id=cb571b95076aa78e2d2539fc5ce113f79bd009910fb0f52cd58dfa21181b34b6 namespace=k8s.io Feb 13 16:17:43.993523 containerd[1602]: time="2025-02-13T16:17:43.993512519Z" level=warning msg="cleaning up after shim disconnected" id=cb571b95076aa78e2d2539fc5ce113f79bd009910fb0f52cd58dfa21181b34b6 namespace=k8s.io Feb 13 16:17:43.993523 containerd[1602]: time="2025-02-13T16:17:43.993526822Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:17:44.445956 kubelet[1944]: E0213 16:17:44.445821 1944 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 16:17:44.451641 containerd[1602]: time="2025-02-13T16:17:44.451434167Z" level=info msg="CreateContainer within sandbox \"eebbdcedac36c4309e0dd0f69a3941d6a0c1f4ac3f0ab7d205508fbe9468282f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 16:17:44.487782 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount959633655.mount: Deactivated successfully. Feb 13 16:17:44.498939 containerd[1602]: time="2025-02-13T16:17:44.498783374Z" level=info msg="CreateContainer within sandbox \"eebbdcedac36c4309e0dd0f69a3941d6a0c1f4ac3f0ab7d205508fbe9468282f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"07e5bde1c68726cb7fdd838e26b2690a24e967260fcb1e32c6bde81fc7bf0b99\"" Feb 13 16:17:44.500158 containerd[1602]: time="2025-02-13T16:17:44.500071651Z" level=info msg="StartContainer for \"07e5bde1c68726cb7fdd838e26b2690a24e967260fcb1e32c6bde81fc7bf0b99\"" Feb 13 16:17:44.673854 containerd[1602]: time="2025-02-13T16:17:44.672883892Z" level=info msg="StartContainer for \"07e5bde1c68726cb7fdd838e26b2690a24e967260fcb1e32c6bde81fc7bf0b99\" returns successfully" Feb 13 16:17:44.751417 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-07e5bde1c68726cb7fdd838e26b2690a24e967260fcb1e32c6bde81fc7bf0b99-rootfs.mount: Deactivated successfully. 
Feb 13 16:17:44.770175 kubelet[1944]: E0213 16:17:44.760863 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:17:44.776515 kubelet[1944]: E0213 16:17:44.776434 1944 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 16:17:44.829378 containerd[1602]: time="2025-02-13T16:17:44.829299430Z" level=info msg="shim disconnected" id=07e5bde1c68726cb7fdd838e26b2690a24e967260fcb1e32c6bde81fc7bf0b99 namespace=k8s.io Feb 13 16:17:44.829666 containerd[1602]: time="2025-02-13T16:17:44.829645189Z" level=warning msg="cleaning up after shim disconnected" id=07e5bde1c68726cb7fdd838e26b2690a24e967260fcb1e32c6bde81fc7bf0b99 namespace=k8s.io Feb 13 16:17:44.829717 containerd[1602]: time="2025-02-13T16:17:44.829707533Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:17:45.365154 containerd[1602]: time="2025-02-13T16:17:45.365030408Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:17:45.367760 containerd[1602]: time="2025-02-13T16:17:45.367247739Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Feb 13 16:17:45.373321 containerd[1602]: time="2025-02-13T16:17:45.368352121Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:17:45.381884 containerd[1602]: time="2025-02-13T16:17:45.381809886Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.450027021s" Feb 13 16:17:45.382322 containerd[1602]: time="2025-02-13T16:17:45.382282402Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 13 16:17:45.395525 containerd[1602]: time="2025-02-13T16:17:45.395447513Z" level=info msg="CreateContainer within sandbox \"a5e75bdc8e23718114d62dcc77907fcfa2f8cd4e2bdec8020c1221a81a5fa4e5\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 16:17:45.422205 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2137818093.mount: Deactivated successfully. 
Feb 13 16:17:45.424864 containerd[1602]: time="2025-02-13T16:17:45.424779083Z" level=info msg="CreateContainer within sandbox \"a5e75bdc8e23718114d62dcc77907fcfa2f8cd4e2bdec8020c1221a81a5fa4e5\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"68aa00ee2ac301e811614024880ae5df1c75825102d9b3bf0da38099077f41b3\"" Feb 13 16:17:45.426985 containerd[1602]: time="2025-02-13T16:17:45.426803340Z" level=info msg="StartContainer for \"68aa00ee2ac301e811614024880ae5df1c75825102d9b3bf0da38099077f41b3\"" Feb 13 16:17:45.473945 kubelet[1944]: E0213 16:17:45.473313 1944 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 16:17:45.499653 containerd[1602]: time="2025-02-13T16:17:45.499360493Z" level=info msg="CreateContainer within sandbox \"eebbdcedac36c4309e0dd0f69a3941d6a0c1f4ac3f0ab7d205508fbe9468282f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 16:17:45.556455 containerd[1602]: time="2025-02-13T16:17:45.556204721Z" level=info msg="CreateContainer within sandbox \"eebbdcedac36c4309e0dd0f69a3941d6a0c1f4ac3f0ab7d205508fbe9468282f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9bcf8e357c78a51a002180e4500ed934f52eaa89f73b30e3538609ff1afe936f\"" Feb 13 16:17:45.567750 containerd[1602]: time="2025-02-13T16:17:45.567674528Z" level=info msg="StartContainer for \"9bcf8e357c78a51a002180e4500ed934f52eaa89f73b30e3538609ff1afe936f\"" Feb 13 16:17:45.596424 containerd[1602]: time="2025-02-13T16:17:45.591676483Z" level=info msg="StartContainer for \"68aa00ee2ac301e811614024880ae5df1c75825102d9b3bf0da38099077f41b3\" returns successfully" Feb 13 16:17:45.760956 containerd[1602]: time="2025-02-13T16:17:45.760493101Z" level=info msg="StartContainer for \"9bcf8e357c78a51a002180e4500ed934f52eaa89f73b30e3538609ff1afe936f\" returns successfully" Feb 13 16:17:45.775847 kubelet[1944]: E0213 16:17:45.775451 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:17:45.891437 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9bcf8e357c78a51a002180e4500ed934f52eaa89f73b30e3538609ff1afe936f-rootfs.mount: Deactivated successfully. 
Feb 13 16:17:45.900052 containerd[1602]: time="2025-02-13T16:17:45.897115156Z" level=info msg="shim disconnected" id=9bcf8e357c78a51a002180e4500ed934f52eaa89f73b30e3538609ff1afe936f namespace=k8s.io Feb 13 16:17:45.900052 containerd[1602]: time="2025-02-13T16:17:45.897210148Z" level=warning msg="cleaning up after shim disconnected" id=9bcf8e357c78a51a002180e4500ed934f52eaa89f73b30e3538609ff1afe936f namespace=k8s.io Feb 13 16:17:45.900052 containerd[1602]: time="2025-02-13T16:17:45.897245855Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:17:46.487109 kubelet[1944]: E0213 16:17:46.482793 1944 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 16:17:46.488985 containerd[1602]: time="2025-02-13T16:17:46.488134312Z" level=info msg="CreateContainer within sandbox \"eebbdcedac36c4309e0dd0f69a3941d6a0c1f4ac3f0ab7d205508fbe9468282f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 16:17:46.489393 kubelet[1944]: E0213 16:17:46.489345 1944 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 16:17:46.536743 containerd[1602]: time="2025-02-13T16:17:46.536664135Z" level=info msg="CreateContainer within sandbox \"eebbdcedac36c4309e0dd0f69a3941d6a0c1f4ac3f0ab7d205508fbe9468282f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"13f57b1c9d61b3f57617afc5c82dd22fabc1d11a5c0624a89d8b05bad750aa5d\"" Feb 13 16:17:46.546222 containerd[1602]: time="2025-02-13T16:17:46.543686289Z" level=info msg="StartContainer for \"13f57b1c9d61b3f57617afc5c82dd22fabc1d11a5c0624a89d8b05bad750aa5d\"" Feb 13 16:17:46.577295 kubelet[1944]: I0213 16:17:46.577124 1944 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-qwhsr" podStartSLOduration=2.116264632 podStartE2EDuration="5.577040128s" podCreationTimestamp="2025-02-13 16:17:41 +0000 UTC" firstStartedPulling="2025-02-13 16:17:41.931085071 +0000 UTC m=+93.117348050" lastFinishedPulling="2025-02-13 16:17:45.391860556 +0000 UTC m=+96.578123546" observedRunningTime="2025-02-13 16:17:46.576782644 +0000 UTC m=+97.763045648" watchObservedRunningTime="2025-02-13 16:17:46.577040128 +0000 UTC m=+97.763303134" Feb 13 16:17:46.703049 containerd[1602]: time="2025-02-13T16:17:46.702978948Z" level=info msg="StartContainer for \"13f57b1c9d61b3f57617afc5c82dd22fabc1d11a5c0624a89d8b05bad750aa5d\" returns successfully" Feb 13 16:17:46.773721 kubelet[1944]: E0213 16:17:46.773012 1944 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 16:17:46.778411 kubelet[1944]: E0213 16:17:46.776386 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:17:46.789324 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-13f57b1c9d61b3f57617afc5c82dd22fabc1d11a5c0624a89d8b05bad750aa5d-rootfs.mount: Deactivated successfully. 
Feb 13 16:17:46.803960 containerd[1602]: time="2025-02-13T16:17:46.800743506Z" level=info msg="shim disconnected" id=13f57b1c9d61b3f57617afc5c82dd22fabc1d11a5c0624a89d8b05bad750aa5d namespace=k8s.io
Feb 13 16:17:46.803960 containerd[1602]: time="2025-02-13T16:17:46.800832702Z" level=warning msg="cleaning up after shim disconnected" id=13f57b1c9d61b3f57617afc5c82dd22fabc1d11a5c0624a89d8b05bad750aa5d namespace=k8s.io
Feb 13 16:17:46.803960 containerd[1602]: time="2025-02-13T16:17:46.800851110Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 16:17:47.509772 kubelet[1944]: E0213 16:17:47.509728 1944 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 13 16:17:47.510630 kubelet[1944]: E0213 16:17:47.510603 1944 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 13 16:17:47.514636 containerd[1602]: time="2025-02-13T16:17:47.514556122Z" level=info msg="CreateContainer within sandbox \"eebbdcedac36c4309e0dd0f69a3941d6a0c1f4ac3f0ab7d205508fbe9468282f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 16:17:47.554333 containerd[1602]: time="2025-02-13T16:17:47.554131524Z" level=info msg="CreateContainer within sandbox \"eebbdcedac36c4309e0dd0f69a3941d6a0c1f4ac3f0ab7d205508fbe9468282f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ad2dec287dda283f6081aa22a60006fd724b091e213a893068360d1ceb8e67ac\""
Feb 13 16:17:47.557146 containerd[1602]: time="2025-02-13T16:17:47.555207630Z" level=info msg="StartContainer for \"ad2dec287dda283f6081aa22a60006fd724b091e213a893068360d1ceb8e67ac\""
Feb 13 16:17:47.682874 containerd[1602]: time="2025-02-13T16:17:47.682794713Z" level=info msg="StartContainer for \"ad2dec287dda283f6081aa22a60006fd724b091e213a893068360d1ceb8e67ac\" returns successfully"
Feb 13 16:17:47.783072 kubelet[1944]: E0213 16:17:47.779645 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 16:17:48.526600 kubelet[1944]: E0213 16:17:48.524879 1944 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 13 16:17:48.590139 kubelet[1944]: I0213 16:17:48.589047 1944 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-jg9xh" podStartSLOduration=7.588989333 podStartE2EDuration="7.588989333s" podCreationTimestamp="2025-02-13 16:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 16:17:48.582817277 +0000 UTC m=+99.769080272" watchObservedRunningTime="2025-02-13 16:17:48.588989333 +0000 UTC m=+99.775252334"
Feb 13 16:17:48.592570 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb 13 16:17:48.781019 kubelet[1944]: E0213 16:17:48.780735 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 16:17:49.529297 kubelet[1944]: E0213 16:17:49.529251 1944 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 13 16:17:49.547385 kubelet[1944]: E0213 16:17:49.547270 1944 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 16:17:49.782532 kubelet[1944]: E0213 16:17:49.781361 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 16:17:50.782462 kubelet[1944]: E0213 16:17:50.782365 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 16:17:51.787086 kubelet[1944]: E0213 16:17:51.783671 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 16:17:52.784900 kubelet[1944]: E0213 16:17:52.784828 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 16:17:53.457430 systemd-networkd[1224]: lxc_health: Link UP
Feb 13 16:17:53.461881 systemd-networkd[1224]: lxc_health: Gained carrier
Feb 13 16:17:53.787747 kubelet[1944]: E0213 16:17:53.785939 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 16:17:53.947656 systemd[1]: run-containerd-runc-k8s.io-ad2dec287dda283f6081aa22a60006fd724b091e213a893068360d1ceb8e67ac-runc.tsEhzq.mount: Deactivated successfully.
Feb 13 16:17:54.786580 kubelet[1944]: E0213 16:17:54.786507 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 16:17:55.311196 kubelet[1944]: E0213 16:17:55.311137 1944 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 13 16:17:55.460660 systemd-networkd[1224]: lxc_health: Gained IPv6LL
Feb 13 16:17:55.569896 kubelet[1944]: E0213 16:17:55.566573 1944 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 13 16:17:55.807782 kubelet[1944]: E0213 16:17:55.807549 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 16:17:56.575134 kubelet[1944]: E0213 16:17:56.575096 1944 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 13 16:17:56.808714 kubelet[1944]: E0213 16:17:56.808640 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 16:17:57.820193 kubelet[1944]: E0213 16:17:57.809825 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 16:17:58.820362 kubelet[1944]: E0213 16:17:58.819364 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 16:17:59.820465 kubelet[1944]: E0213 16:17:59.820410 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 16:18:00.822748 kubelet[1944]: E0213 16:18:00.822658 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 16:18:01.824987 kubelet[1944]: E0213 16:18:01.823504 1944 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"