Nov 13 08:27:05.057529 kernel: Linux version 6.6.60-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Nov 12 21:10:03 -00 2024
Nov 13 08:27:05.057576 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=714367a70d0d672ed3d7ccc2de5247f52d37046778a42409fc8a40b0511373b1
Nov 13 08:27:05.057592 kernel: BIOS-provided physical RAM map:
Nov 13 08:27:05.057601 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 13 08:27:05.057608 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 13 08:27:05.057616 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 13 08:27:05.057625 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffd7fff] usable
Nov 13 08:27:05.057632 kernel: BIOS-e820: [mem 0x000000007ffd8000-0x000000007fffffff] reserved
Nov 13 08:27:05.057640 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 13 08:27:05.057651 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 13 08:27:05.057659 kernel: NX (Execute Disable) protection: active
Nov 13 08:27:05.057667 kernel: APIC: Static calls initialized
Nov 13 08:27:05.057674 kernel: SMBIOS 2.8 present.
Nov 13 08:27:05.057682 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Nov 13 08:27:05.057691 kernel: Hypervisor detected: KVM
Nov 13 08:27:05.057702 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 13 08:27:05.057710 kernel: kvm-clock: using sched offset of 4094180599 cycles
Nov 13 08:27:05.057718 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 13 08:27:05.057726 kernel: tsc: Detected 1999.999 MHz processor
Nov 13 08:27:05.057734 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 13 08:27:05.057742 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 13 08:27:05.057749 kernel: last_pfn = 0x7ffd8 max_arch_pfn = 0x400000000
Nov 13 08:27:05.057756 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 13 08:27:05.057764 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 13 08:27:05.057775 kernel: ACPI: Early table checksum verification disabled
Nov 13 08:27:05.057782 kernel: ACPI: RSDP 0x00000000000F5A50 000014 (v00 BOCHS )
Nov 13 08:27:05.057790 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 13 08:27:05.057797 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 13 08:27:05.057804 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 13 08:27:05.057811 kernel: ACPI: FACS 0x000000007FFE0000 000040
Nov 13 08:27:05.057818 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 13 08:27:05.057825 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 13 08:27:05.057832 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 13 08:27:05.057843 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 13 08:27:05.057850 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Nov 13 08:27:05.057857 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Nov 13 08:27:05.057865 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Nov 13 08:27:05.057875 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Nov 13 08:27:05.057887 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Nov 13 08:27:05.057898 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Nov 13 08:27:05.057920 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Nov 13 08:27:05.057931 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Nov 13 08:27:05.057941 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Nov 13 08:27:05.057952 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Nov 13 08:27:05.057963 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Nov 13 08:27:05.057974 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffd7fff] -> [mem 0x00000000-0x7ffd7fff]
Nov 13 08:27:05.057985 kernel: NODE_DATA(0) allocated [mem 0x7ffd2000-0x7ffd7fff]
Nov 13 08:27:05.058002 kernel: Zone ranges:
Nov 13 08:27:05.058009 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 13 08:27:05.058017 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffd7fff]
Nov 13 08:27:05.058024 kernel: Normal empty
Nov 13 08:27:05.058031 kernel: Movable zone start for each node
Nov 13 08:27:05.058039 kernel: Early memory node ranges
Nov 13 08:27:05.058046 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 13 08:27:05.058053 kernel: node 0: [mem 0x0000000000100000-0x000000007ffd7fff]
Nov 13 08:27:05.058061 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffd7fff]
Nov 13 08:27:05.058072 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 13 08:27:05.058080 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 13 08:27:05.058088 kernel: On node 0, zone DMA32: 40 pages in unavailable ranges
Nov 13 08:27:05.058095 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 13 08:27:05.058102 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 13 08:27:05.058110 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 13 08:27:05.058117 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 13 08:27:05.058125 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 13 08:27:05.058132 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 13 08:27:05.058143 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 13 08:27:05.058150 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 13 08:27:05.058158 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 13 08:27:05.058165 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 13 08:27:05.058176 kernel: TSC deadline timer available
Nov 13 08:27:05.058190 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Nov 13 08:27:05.058200 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 13 08:27:05.058211 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Nov 13 08:27:05.058222 kernel: Booting paravirtualized kernel on KVM
Nov 13 08:27:05.058239 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 13 08:27:05.058250 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Nov 13 08:27:05.058261 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Nov 13 08:27:05.058273 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Nov 13 08:27:05.058284 kernel: pcpu-alloc: [0] 0 1
Nov 13 08:27:05.058294 kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 13 08:27:05.058304 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=714367a70d0d672ed3d7ccc2de5247f52d37046778a42409fc8a40b0511373b1
Nov 13 08:27:05.058312 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Nov 13 08:27:05.058325 kernel: random: crng init done
Nov 13 08:27:05.058333 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 13 08:27:05.058340 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 13 08:27:05.058347 kernel: Fallback order for Node 0: 0
Nov 13 08:27:05.058355 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515800
Nov 13 08:27:05.058362 kernel: Policy zone: DMA32
Nov 13 08:27:05.058369 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 13 08:27:05.058377 kernel: Memory: 1971192K/2096600K available (12288K kernel code, 2305K rwdata, 22736K rodata, 42968K init, 2220K bss, 125148K reserved, 0K cma-reserved)
Nov 13 08:27:05.058385 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 13 08:27:05.060192 kernel: Kernel/User page tables isolation: enabled
Nov 13 08:27:05.060210 kernel: ftrace: allocating 37801 entries in 148 pages
Nov 13 08:27:05.060224 kernel: ftrace: allocated 148 pages with 3 groups
Nov 13 08:27:05.060237 kernel: Dynamic Preempt: voluntary
Nov 13 08:27:05.060250 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 13 08:27:05.060264 kernel: rcu: RCU event tracing is enabled.
Nov 13 08:27:05.060278 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 13 08:27:05.060291 kernel: Trampoline variant of Tasks RCU enabled.
Nov 13 08:27:05.060305 kernel: Rude variant of Tasks RCU enabled.
Nov 13 08:27:05.060324 kernel: Tracing variant of Tasks RCU enabled.
Nov 13 08:27:05.060336 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 13 08:27:05.060349 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 13 08:27:05.060361 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Nov 13 08:27:05.060373 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 13 08:27:05.060386 kernel: Console: colour VGA+ 80x25
Nov 13 08:27:05.060427 kernel: printk: console [tty0] enabled
Nov 13 08:27:05.060436 kernel: printk: console [ttyS0] enabled
Nov 13 08:27:05.060449 kernel: ACPI: Core revision 20230628
Nov 13 08:27:05.060468 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 13 08:27:05.060481 kernel: APIC: Switch to symmetric I/O mode setup
Nov 13 08:27:05.060494 kernel: x2apic enabled
Nov 13 08:27:05.060507 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 13 08:27:05.060520 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 13 08:27:05.060532 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85afc727, max_idle_ns: 881590685098 ns
Nov 13 08:27:05.060545 kernel: Calibrating delay loop (skipped) preset value.. 3999.99 BogoMIPS (lpj=1999999)
Nov 13 08:27:05.060557 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Nov 13 08:27:05.060570 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Nov 13 08:27:05.060603 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 13 08:27:05.060613 kernel: Spectre V2 : Mitigation: Retpolines
Nov 13 08:27:05.060621 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Nov 13 08:27:05.060635 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Nov 13 08:27:05.060643 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Nov 13 08:27:05.060651 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 13 08:27:05.060660 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 13 08:27:05.060668 kernel: MDS: Mitigation: Clear CPU buffers
Nov 13 08:27:05.060677 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 13 08:27:05.060691 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 13 08:27:05.060699 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 13 08:27:05.060707 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 13 08:27:05.060716 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 13 08:27:05.060724 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Nov 13 08:27:05.060733 kernel: Freeing SMP alternatives memory: 32K
Nov 13 08:27:05.060741 kernel: pid_max: default: 32768 minimum: 301
Nov 13 08:27:05.060749 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 13 08:27:05.060762 kernel: landlock: Up and running.
Nov 13 08:27:05.060771 kernel: SELinux: Initializing.
Nov 13 08:27:05.060779 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 13 08:27:05.060788 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 13 08:27:05.060797 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Nov 13 08:27:05.060812 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 13 08:27:05.060828 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 13 08:27:05.060844 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 13 08:27:05.060864 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Nov 13 08:27:05.060877 kernel: signal: max sigframe size: 1776
Nov 13 08:27:05.060890 kernel: rcu: Hierarchical SRCU implementation.
Nov 13 08:27:05.060905 kernel: rcu: Max phase no-delay instances is 400.
Nov 13 08:27:05.060918 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Nov 13 08:27:05.060932 kernel: smp: Bringing up secondary CPUs ...
Nov 13 08:27:05.060945 kernel: smpboot: x86: Booting SMP configuration:
Nov 13 08:27:05.060959 kernel: .... node #0, CPUs: #1
Nov 13 08:27:05.060971 kernel: smp: Brought up 1 node, 2 CPUs
Nov 13 08:27:05.060984 kernel: smpboot: Max logical packages: 1
Nov 13 08:27:05.060993 kernel: smpboot: Total of 2 processors activated (7999.99 BogoMIPS)
Nov 13 08:27:05.061001 kernel: devtmpfs: initialized
Nov 13 08:27:05.061010 kernel: x86/mm: Memory block size: 128MB
Nov 13 08:27:05.061018 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 13 08:27:05.061026 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 13 08:27:05.061035 kernel: pinctrl core: initialized pinctrl subsystem
Nov 13 08:27:05.061046 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 13 08:27:05.061060 kernel: audit: initializing netlink subsys (disabled)
Nov 13 08:27:05.061080 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 13 08:27:05.061095 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 13 08:27:05.061109 kernel: audit: type=2000 audit(1731486423.585:1): state=initialized audit_enabled=0 res=1
Nov 13 08:27:05.061124 kernel: cpuidle: using governor menu
Nov 13 08:27:05.061139 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 13 08:27:05.061154 kernel: dca service started, version 1.12.1
Nov 13 08:27:05.061168 kernel: PCI: Using configuration type 1 for base access
Nov 13 08:27:05.061180 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 13 08:27:05.061188 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 13 08:27:05.061202 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 13 08:27:05.061211 kernel: ACPI: Added _OSI(Module Device)
Nov 13 08:27:05.061220 kernel: ACPI: Added _OSI(Processor Device)
Nov 13 08:27:05.061228 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 13 08:27:05.061236 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 13 08:27:05.061245 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 13 08:27:05.061253 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 13 08:27:05.061261 kernel: ACPI: Interpreter enabled
Nov 13 08:27:05.061270 kernel: ACPI: PM: (supports S0 S5)
Nov 13 08:27:05.061278 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 13 08:27:05.061290 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 13 08:27:05.061299 kernel: PCI: Using E820 reservations for host bridge windows
Nov 13 08:27:05.061307 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 13 08:27:05.061316 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 13 08:27:05.061649 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Nov 13 08:27:05.061800 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Nov 13 08:27:05.061924 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Nov 13 08:27:05.061944 kernel: acpiphp: Slot [3] registered
Nov 13 08:27:05.061952 kernel: acpiphp: Slot [4] registered
Nov 13 08:27:05.061961 kernel: acpiphp: Slot [5] registered
Nov 13 08:27:05.061970 kernel: acpiphp: Slot [6] registered
Nov 13 08:27:05.061979 kernel: acpiphp: Slot [7] registered
Nov 13 08:27:05.061987 kernel: acpiphp: Slot [8] registered
Nov 13 08:27:05.061995 kernel: acpiphp: Slot [9] registered
Nov 13 08:27:05.062004 kernel: acpiphp: Slot [10] registered
Nov 13 08:27:05.062013 kernel: acpiphp: Slot [11] registered
Nov 13 08:27:05.062025 kernel: acpiphp: Slot [12] registered
Nov 13 08:27:05.062034 kernel: acpiphp: Slot [13] registered
Nov 13 08:27:05.062046 kernel: acpiphp: Slot [14] registered
Nov 13 08:27:05.062059 kernel: acpiphp: Slot [15] registered
Nov 13 08:27:05.062070 kernel: acpiphp: Slot [16] registered
Nov 13 08:27:05.062082 kernel: acpiphp: Slot [17] registered
Nov 13 08:27:05.062094 kernel: acpiphp: Slot [18] registered
Nov 13 08:27:05.062106 kernel: acpiphp: Slot [19] registered
Nov 13 08:27:05.062118 kernel: acpiphp: Slot [20] registered
Nov 13 08:27:05.062138 kernel: acpiphp: Slot [21] registered
Nov 13 08:27:05.062151 kernel: acpiphp: Slot [22] registered
Nov 13 08:27:05.062164 kernel: acpiphp: Slot [23] registered
Nov 13 08:27:05.062176 kernel: acpiphp: Slot [24] registered
Nov 13 08:27:05.062188 kernel: acpiphp: Slot [25] registered
Nov 13 08:27:05.062201 kernel: acpiphp: Slot [26] registered
Nov 13 08:27:05.062212 kernel: acpiphp: Slot [27] registered
Nov 13 08:27:05.062225 kernel: acpiphp: Slot [28] registered
Nov 13 08:27:05.062236 kernel: acpiphp: Slot [29] registered
Nov 13 08:27:05.062249 kernel: acpiphp: Slot [30] registered
Nov 13 08:27:05.062267 kernel: acpiphp: Slot [31] registered
Nov 13 08:27:05.062281 kernel: PCI host bridge to bus 0000:00
Nov 13 08:27:05.062525 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 13 08:27:05.062630 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 13 08:27:05.062791 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 13 08:27:05.062927 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Nov 13 08:27:05.063040 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Nov 13 08:27:05.063160 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 13 08:27:05.063298 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Nov 13 08:27:05.064158 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Nov 13 08:27:05.064352 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Nov 13 08:27:05.064527 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Nov 13 08:27:05.064647 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Nov 13 08:27:05.064761 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Nov 13 08:27:05.064860 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Nov 13 08:27:05.064997 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Nov 13 08:27:05.065130 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Nov 13 08:27:05.065231 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Nov 13 08:27:05.065350 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Nov 13 08:27:05.065502 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Nov 13 08:27:05.065647 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Nov 13 08:27:05.065806 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Nov 13 08:27:05.065917 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Nov 13 08:27:05.066015 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Nov 13 08:27:05.066115 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Nov 13 08:27:05.066216 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Nov 13 08:27:05.066316 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 13 08:27:05.066745 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Nov 13 08:27:05.066873 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Nov 13 08:27:05.067010 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Nov 13 08:27:05.067124 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Nov 13 08:27:05.067290 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Nov 13 08:27:05.067447 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Nov 13 08:27:05.067603 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Nov 13 08:27:05.067719 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Nov 13 08:27:05.067845 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Nov 13 08:27:05.067963 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Nov 13 08:27:05.068089 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Nov 13 08:27:05.068240 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Nov 13 08:27:05.068423 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Nov 13 08:27:05.068551 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Nov 13 08:27:05.068673 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Nov 13 08:27:05.068796 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Nov 13 08:27:05.068907 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Nov 13 08:27:05.069005 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Nov 13 08:27:05.069102 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Nov 13 08:27:05.069197 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Nov 13 08:27:05.069303 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Nov 13 08:27:05.069428 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Nov 13 08:27:05.069546 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Nov 13 08:27:05.069558 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 13 08:27:05.069567 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 13 08:27:05.069576 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 13 08:27:05.069584 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 13 08:27:05.069598 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 13 08:27:05.069606 kernel: iommu: Default domain type: Translated
Nov 13 08:27:05.069615 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 13 08:27:05.069623 kernel: PCI: Using ACPI for IRQ routing
Nov 13 08:27:05.069631 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 13 08:27:05.069641 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 13 08:27:05.069649 kernel: e820: reserve RAM buffer [mem 0x7ffd8000-0x7fffffff]
Nov 13 08:27:05.069752 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Nov 13 08:27:05.069889 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Nov 13 08:27:05.070001 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 13 08:27:05.070012 kernel: vgaarb: loaded
Nov 13 08:27:05.070021 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 13 08:27:05.070030 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 13 08:27:05.070039 kernel: clocksource: Switched to clocksource kvm-clock
Nov 13 08:27:05.070048 kernel: VFS: Disk quotas dquot_6.6.0
Nov 13 08:27:05.070056 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 13 08:27:05.070065 kernel: pnp: PnP ACPI init
Nov 13 08:27:05.070073 kernel: pnp: PnP ACPI: found 4 devices
Nov 13 08:27:05.070086 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 13 08:27:05.070095 kernel: NET: Registered PF_INET protocol family
Nov 13 08:27:05.070103 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 13 08:27:05.070112 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Nov 13 08:27:05.070121 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 13 08:27:05.070130 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 13 08:27:05.070138 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Nov 13 08:27:05.070147 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Nov 13 08:27:05.070158 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 13 08:27:05.070166 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 13 08:27:05.070175 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 13 08:27:05.070183 kernel: NET: Registered PF_XDP protocol family
Nov 13 08:27:05.070285 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 13 08:27:05.070426 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 13 08:27:05.070557 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 13 08:27:05.070705 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Nov 13 08:27:05.070836 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Nov 13 08:27:05.071014 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Nov 13 08:27:05.071193 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 13 08:27:05.071216 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 13 08:27:05.072985 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 44814 usecs
Nov 13 08:27:05.073057 kernel: PCI: CLS 0 bytes, default 64
Nov 13 08:27:05.073072 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Nov 13 08:27:05.073086 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85afc727, max_idle_ns: 881590685098 ns
Nov 13 08:27:05.073102 kernel: Initialise system trusted keyrings
Nov 13 08:27:05.073123 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Nov 13 08:27:05.073132 kernel: Key type asymmetric registered
Nov 13 08:27:05.073140 kernel: Asymmetric key parser 'x509' registered
Nov 13 08:27:05.073149 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Nov 13 08:27:05.073158 kernel: io scheduler mq-deadline registered
Nov 13 08:27:05.073166 kernel: io scheduler kyber registered
Nov 13 08:27:05.073175 kernel: io scheduler bfq registered
Nov 13 08:27:05.073184 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 13 08:27:05.073193 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Nov 13 08:27:05.073205 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 13 08:27:05.073213 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 13 08:27:05.073222 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 13 08:27:05.073230 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 13 08:27:05.073239 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 13 08:27:05.073248 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 13 08:27:05.073257 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 13 08:27:05.073523 kernel: rtc_cmos 00:03: RTC can wake from S4
Nov 13 08:27:05.073539 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Nov 13 08:27:05.073637 kernel: rtc_cmos 00:03: registered as rtc0
Nov 13 08:27:05.073758 kernel: rtc_cmos 00:03: setting system clock to 2024-11-13T08:27:04 UTC (1731486424)
Nov 13 08:27:05.073854 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Nov 13 08:27:05.073865 kernel: intel_pstate: CPU model not supported
Nov 13 08:27:05.073874 kernel: NET: Registered PF_INET6 protocol family
Nov 13 08:27:05.073882 kernel: Segment Routing with IPv6
Nov 13 08:27:05.073891 kernel: In-situ OAM (IOAM) with IPv6
Nov 13 08:27:05.073899 kernel: NET: Registered PF_PACKET protocol family
Nov 13 08:27:05.073913 kernel: Key type dns_resolver registered
Nov 13 08:27:05.073921 kernel: IPI shorthand broadcast: enabled
Nov 13 08:27:05.073930 kernel: sched_clock: Marking stable (1425007784, 165409506)->(1644232742, -53815452)
Nov 13 08:27:05.073938 kernel: registered taskstats version 1
Nov 13 08:27:05.073946 kernel: Loading compiled-in X.509 certificates
Nov 13 08:27:05.073955 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.60-flatcar: d04cb2ddbd5c3ca82936c51f5645ef0dcbdcd3b4'
Nov 13 08:27:05.073963 kernel: Key type .fscrypt registered
Nov 13 08:27:05.073972 kernel: Key type fscrypt-provisioning registered
Nov 13 08:27:05.073980 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 13 08:27:05.073993 kernel: ima: Allocated hash algorithm: sha1
Nov 13 08:27:05.074001 kernel: ima: No architecture policies found
Nov 13 08:27:05.074009 kernel: clk: Disabling unused clocks
Nov 13 08:27:05.074018 kernel: Freeing unused kernel image (initmem) memory: 42968K
Nov 13 08:27:05.074026 kernel: Write protecting the kernel read-only data: 36864k
Nov 13 08:27:05.074056 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K
Nov 13 08:27:05.074067 kernel: Run /init as init process
Nov 13 08:27:05.074075 kernel: with arguments:
Nov 13 08:27:05.074085 kernel: /init
Nov 13 08:27:05.074096 kernel: with environment:
Nov 13 08:27:05.074104 kernel: HOME=/
Nov 13 08:27:05.074112 kernel: TERM=linux
Nov 13 08:27:05.074121 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Nov 13 08:27:05.074134 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 13 08:27:05.074147 systemd[1]: Detected virtualization kvm.
Nov 13 08:27:05.074157 systemd[1]: Detected architecture x86-64.
Nov 13 08:27:05.074169 systemd[1]: Running in initrd.
Nov 13 08:27:05.074178 systemd[1]: No hostname configured, using default hostname.
Nov 13 08:27:05.074187 systemd[1]: Hostname set to .
Nov 13 08:27:05.074196 systemd[1]: Initializing machine ID from VM UUID.
Nov 13 08:27:05.074205 systemd[1]: Queued start job for default target initrd.target.
Nov 13 08:27:05.074214 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 13 08:27:05.074223 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 13 08:27:05.074234 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 13 08:27:05.074246 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 13 08:27:05.074255 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 13 08:27:05.074265 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 13 08:27:05.074275 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 13 08:27:05.074285 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 13 08:27:05.074294 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 13 08:27:05.074303 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 13 08:27:05.074316 systemd[1]: Reached target paths.target - Path Units.
Nov 13 08:27:05.074325 systemd[1]: Reached target slices.target - Slice Units.
Nov 13 08:27:05.074334 systemd[1]: Reached target swap.target - Swaps.
Nov 13 08:27:05.074347 systemd[1]: Reached target timers.target - Timer Units.
Nov 13 08:27:05.074358 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 13 08:27:05.074372 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 13 08:27:05.074412 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 13 08:27:05.074426 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 13 08:27:05.074439 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 13 08:27:05.074451 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 13 08:27:05.074464 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 13 08:27:05.074477 systemd[1]: Reached target sockets.target - Socket Units.
Nov 13 08:27:05.074489 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 13 08:27:05.074502 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 13 08:27:05.074525 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 13 08:27:05.074538 systemd[1]: Starting systemd-fsck-usr.service...
Nov 13 08:27:05.074552 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 13 08:27:05.074566 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 13 08:27:05.074580 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 13 08:27:05.074641 systemd-journald[184]: Collecting audit messages is disabled.
Nov 13 08:27:05.074670 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 13 08:27:05.074705 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 13 08:27:05.074723 systemd-journald[184]: Journal started
Nov 13 08:27:05.074756 systemd-journald[184]: Runtime Journal (/run/log/journal/112dad47252846fd964aa739715fa3a6) is 4.9M, max 39.3M, 34.4M free.
Nov 13 08:27:05.084446 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 13 08:27:05.086236 systemd[1]: Finished systemd-fsck-usr.service.
Nov 13 08:27:05.096148 systemd-modules-load[185]: Inserted module 'overlay'
Nov 13 08:27:05.155942 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 13 08:27:05.155979 kernel: Bridge firewalling registered
Nov 13 08:27:05.108827 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 13 08:27:05.140688 systemd-modules-load[185]: Inserted module 'br_netfilter'
Nov 13 08:27:05.158772 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 13 08:27:05.161485 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 13 08:27:05.169016 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 13 08:27:05.178841 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 13 08:27:05.186912 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 13 08:27:05.189111 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 13 08:27:05.191197 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 13 08:27:05.201749 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 13 08:27:05.214979 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 13 08:27:05.227719 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 13 08:27:05.230218 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 13 08:27:05.232133 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 13 08:27:05.238726 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 13 08:27:05.281431 dracut-cmdline[219]: dracut-dracut-053
Nov 13 08:27:05.287274 dracut-cmdline[219]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=714367a70d0d672ed3d7ccc2de5247f52d37046778a42409fc8a40b0511373b1
Nov 13 08:27:05.289196 systemd-resolved[216]: Positive Trust Anchors:
Nov 13 08:27:05.289217 systemd-resolved[216]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 13 08:27:05.289277 systemd-resolved[216]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 13 08:27:05.295233 systemd-resolved[216]: Defaulting to hostname 'linux'.
Nov 13 08:27:05.298906 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 13 08:27:05.300616 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 13 08:27:05.432455 kernel: SCSI subsystem initialized
Nov 13 08:27:05.448474 kernel: Loading iSCSI transport class v2.0-870.
Nov 13 08:27:05.466470 kernel: iscsi: registered transport (tcp)
Nov 13 08:27:05.495543 kernel: iscsi: registered transport (qla4xxx)
Nov 13 08:27:05.495676 kernel: QLogic iSCSI HBA Driver
Nov 13 08:27:05.566957 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 13 08:27:05.573768 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 13 08:27:05.628743 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 13 08:27:05.628890 kernel: device-mapper: uevent: version 1.0.3
Nov 13 08:27:05.630787 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 13 08:27:05.687564 kernel: raid6: avx2x4 gen() 22555 MB/s
Nov 13 08:27:05.705492 kernel: raid6: avx2x2 gen() 20485 MB/s
Nov 13 08:27:05.722941 kernel: raid6: avx2x1 gen() 11863 MB/s
Nov 13 08:27:05.723064 kernel: raid6: using algorithm avx2x4 gen() 22555 MB/s
Nov 13 08:27:05.742762 kernel: raid6: .... xor() 6379 MB/s, rmw enabled
Nov 13 08:27:05.742847 kernel: raid6: using avx2x2 recovery algorithm
Nov 13 08:27:05.770497 kernel: xor: automatically using best checksumming function avx
Nov 13 08:27:05.998458 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 13 08:27:06.017204 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 13 08:27:06.025796 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 13 08:27:06.059705 systemd-udevd[402]: Using default interface naming scheme 'v255'.
Nov 13 08:27:06.067748 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 13 08:27:06.076991 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 13 08:27:06.102257 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation
Nov 13 08:27:06.155705 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 13 08:27:06.169828 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 13 08:27:06.252694 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 13 08:27:06.264869 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 13 08:27:06.298583 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 13 08:27:06.309013 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 13 08:27:06.310762 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 13 08:27:06.313205 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 13 08:27:06.320676 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 13 08:27:06.358098 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 13 08:27:06.373454 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Nov 13 08:27:06.457140 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Nov 13 08:27:06.457383 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 13 08:27:06.457429 kernel: GPT:9289727 != 125829119
Nov 13 08:27:06.457450 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 13 08:27:06.457485 kernel: GPT:9289727 != 125829119
Nov 13 08:27:06.457504 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 13 08:27:06.457525 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 13 08:27:06.457545 kernel: scsi host0: Virtio SCSI HBA
Nov 13 08:27:06.457770 kernel: cryptd: max_cpu_qlen set to 1000
Nov 13 08:27:06.457792 kernel: ACPI: bus type USB registered
Nov 13 08:27:06.457810 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Nov 13 08:27:06.554899 kernel: usbcore: registered new interface driver usbfs
Nov 13 08:27:06.554954 kernel: usbcore: registered new interface driver hub
Nov 13 08:27:06.554972 kernel: usbcore: registered new device driver usb
Nov 13 08:27:06.554990 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 13 08:27:06.555007 kernel: AES CTR mode by8 optimization enabled
Nov 13 08:27:06.555024 kernel: virtio_blk virtio5: [vdb] 964 512-byte logical blocks (494 kB/482 KiB)
Nov 13 08:27:06.555269 kernel: libata version 3.00 loaded.
Nov 13 08:27:06.555289 kernel: ata_piix 0000:00:01.1: version 2.13
Nov 13 08:27:06.571645 kernel: scsi host1: ata_piix
Nov 13 08:27:06.571952 kernel: scsi host2: ata_piix
Nov 13 08:27:06.572132 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Nov 13 08:27:06.572153 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Nov 13 08:27:06.534065 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 13 08:27:06.534288 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 13 08:27:06.537038 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 13 08:27:06.537922 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 13 08:27:06.538542 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 13 08:27:06.540638 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 13 08:27:06.553646 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 13 08:27:06.621758 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (452)
Nov 13 08:27:06.621709 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 13 08:27:06.678142 kernel: BTRFS: device fsid d498af32-b44b-4318-a942-3a646ccb9d0a devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (459)
Nov 13 08:27:06.679257 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 13 08:27:06.703223 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 13 08:27:06.710279 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Nov 13 08:27:06.711342 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Nov 13 08:27:06.720996 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 13 08:27:06.730707 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 13 08:27:06.741702 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 13 08:27:06.755682 disk-uuid[538]: Primary Header is updated.
Nov 13 08:27:06.755682 disk-uuid[538]: Secondary Entries is updated.
Nov 13 08:27:06.755682 disk-uuid[538]: Secondary Header is updated.
Nov 13 08:27:06.770450 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Nov 13 08:27:06.784035 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 13 08:27:06.784054 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Nov 13 08:27:06.784219 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Nov 13 08:27:06.785605 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Nov 13 08:27:06.785842 kernel: hub 1-0:1.0: USB hub found
Nov 13 08:27:06.785998 kernel: hub 1-0:1.0: 2 ports detected
Nov 13 08:27:06.786119 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 13 08:27:06.789886 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 13 08:27:07.789503 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 13 08:27:07.791959 disk-uuid[539]: The operation has completed successfully.
Nov 13 08:27:07.853739 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 13 08:27:07.853959 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 13 08:27:07.867790 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 13 08:27:07.885899 sh[558]: Success
Nov 13 08:27:07.907445 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Nov 13 08:27:07.988716 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 13 08:27:08.012611 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 13 08:27:08.015052 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 13 08:27:08.054671 kernel: BTRFS info (device dm-0): first mount of filesystem d498af32-b44b-4318-a942-3a646ccb9d0a
Nov 13 08:27:08.054781 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 13 08:27:08.054794 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 13 08:27:08.054806 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 13 08:27:08.056863 kernel: BTRFS info (device dm-0): using free space tree
Nov 13 08:27:08.067780 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 13 08:27:08.069680 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 13 08:27:08.080677 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 13 08:27:08.085691 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 13 08:27:08.105084 kernel: BTRFS info (device vda6): first mount of filesystem 97a326f3-1974-446c-b178-9e746095347a
Nov 13 08:27:08.105185 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 13 08:27:08.105207 kernel: BTRFS info (device vda6): using free space tree
Nov 13 08:27:08.113450 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 13 08:27:08.135096 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 13 08:27:08.136210 kernel: BTRFS info (device vda6): last unmount of filesystem 97a326f3-1974-446c-b178-9e746095347a
Nov 13 08:27:08.147011 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 13 08:27:08.156904 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 13 08:27:08.266763 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 13 08:27:08.277819 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 13 08:27:08.310866 systemd-networkd[743]: lo: Link UP
Nov 13 08:27:08.310881 systemd-networkd[743]: lo: Gained carrier
Nov 13 08:27:08.315526 systemd-networkd[743]: Enumeration completed
Nov 13 08:27:08.316015 systemd-networkd[743]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Nov 13 08:27:08.316019 systemd-networkd[743]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Nov 13 08:27:08.316557 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 13 08:27:08.317772 systemd[1]: Reached target network.target - Network.
Nov 13 08:27:08.320150 systemd-networkd[743]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 13 08:27:08.320155 systemd-networkd[743]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 13 08:27:08.322733 systemd-networkd[743]: eth0: Link UP
Nov 13 08:27:08.322740 systemd-networkd[743]: eth0: Gained carrier
Nov 13 08:27:08.322756 systemd-networkd[743]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Nov 13 08:27:08.327837 systemd-networkd[743]: eth1: Link UP
Nov 13 08:27:08.327843 systemd-networkd[743]: eth1: Gained carrier
Nov 13 08:27:08.327861 systemd-networkd[743]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 13 08:27:08.342536 systemd-networkd[743]: eth0: DHCPv4 address 209.38.128.242/19, gateway 209.38.128.1 acquired from 169.254.169.253
Nov 13 08:27:08.349551 systemd-networkd[743]: eth1: DHCPv4 address 10.124.0.14/20 acquired from 169.254.169.253
Nov 13 08:27:08.360357 ignition[651]: Ignition 2.20.0
Nov 13 08:27:08.360375 ignition[651]: Stage: fetch-offline
Nov 13 08:27:08.360463 ignition[651]: no configs at "/usr/lib/ignition/base.d"
Nov 13 08:27:08.360478 ignition[651]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 13 08:27:08.363361 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 13 08:27:08.360704 ignition[651]: parsed url from cmdline: ""
Nov 13 08:27:08.360716 ignition[651]: no config URL provided
Nov 13 08:27:08.360725 ignition[651]: reading system config file "/usr/lib/ignition/user.ign"
Nov 13 08:27:08.360737 ignition[651]: no config at "/usr/lib/ignition/user.ign"
Nov 13 08:27:08.360746 ignition[651]: failed to fetch config: resource requires networking
Nov 13 08:27:08.361036 ignition[651]: Ignition finished successfully
Nov 13 08:27:08.379474 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Nov 13 08:27:08.397159 ignition[753]: Ignition 2.20.0
Nov 13 08:27:08.397174 ignition[753]: Stage: fetch
Nov 13 08:27:08.397463 ignition[753]: no configs at "/usr/lib/ignition/base.d"
Nov 13 08:27:08.397477 ignition[753]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 13 08:27:08.397589 ignition[753]: parsed url from cmdline: ""
Nov 13 08:27:08.397593 ignition[753]: no config URL provided
Nov 13 08:27:08.397599 ignition[753]: reading system config file "/usr/lib/ignition/user.ign"
Nov 13 08:27:08.397607 ignition[753]: no config at "/usr/lib/ignition/user.ign"
Nov 13 08:27:08.397632 ignition[753]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Nov 13 08:27:08.412948 ignition[753]: GET result: OK
Nov 13 08:27:08.413997 ignition[753]: parsing config with SHA512: 90e5dec0146b201cf7938de9b452f23179780bfe51e84f039ba0a527a50d2191d84a71a48ea8cfd1a6794e26c52dbf651cd6068856c449169c3a87e5fa8a2284
Nov 13 08:27:08.420098 unknown[753]: fetched base config from "system"
Nov 13 08:27:08.420133 unknown[753]: fetched base config from "system"
Nov 13 08:27:08.420812 ignition[753]: fetch: fetch complete
Nov 13 08:27:08.420154 unknown[753]: fetched user config from "digitalocean"
Nov 13 08:27:08.420822 ignition[753]: fetch: fetch passed
Nov 13 08:27:08.423805 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Nov 13 08:27:08.420904 ignition[753]: Ignition finished successfully
Nov 13 08:27:08.431775 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 13 08:27:08.469452 ignition[759]: Ignition 2.20.0
Nov 13 08:27:08.469476 ignition[759]: Stage: kargs
Nov 13 08:27:08.469787 ignition[759]: no configs at "/usr/lib/ignition/base.d"
Nov 13 08:27:08.469805 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 13 08:27:08.475665 ignition[759]: kargs: kargs passed
Nov 13 08:27:08.476595 ignition[759]: Ignition finished successfully
Nov 13 08:27:08.479600 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 13 08:27:08.483814 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 13 08:27:08.506175 ignition[765]: Ignition 2.20.0
Nov 13 08:27:08.506192 ignition[765]: Stage: disks
Nov 13 08:27:08.506495 ignition[765]: no configs at "/usr/lib/ignition/base.d"
Nov 13 08:27:08.506510 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 13 08:27:08.509715 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 13 08:27:08.507774 ignition[765]: disks: disks passed
Nov 13 08:27:08.507885 ignition[765]: Ignition finished successfully
Nov 13 08:27:08.515531 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 13 08:27:08.517029 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 13 08:27:08.518840 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 13 08:27:08.520493 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 13 08:27:08.522189 systemd[1]: Reached target basic.target - Basic System.
Nov 13 08:27:08.530806 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 13 08:27:08.553209 systemd-fsck[773]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Nov 13 08:27:08.558276 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 13 08:27:08.565450 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 13 08:27:08.682449 kernel: EXT4-fs (vda9): mounted filesystem 62325592-ead9-4e81-b706-99baa0cf9fff r/w with ordered data mode. Quota mode: none.
Nov 13 08:27:08.682237 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 13 08:27:08.683753 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 13 08:27:08.689582 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 13 08:27:08.693493 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 13 08:27:08.697792 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service...
Nov 13 08:27:08.707624 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Nov 13 08:27:08.711312 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (781)
Nov 13 08:27:08.711710 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 13 08:27:08.728646 kernel: BTRFS info (device vda6): first mount of filesystem 97a326f3-1974-446c-b178-9e746095347a
Nov 13 08:27:08.728690 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 13 08:27:08.728712 kernel: BTRFS info (device vda6): using free space tree
Nov 13 08:27:08.711773 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 13 08:27:08.721308 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 13 08:27:08.739165 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 13 08:27:08.747447 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 13 08:27:08.755127 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 13 08:27:08.819781 coreos-metadata[783]: Nov 13 08:27:08.819 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Nov 13 08:27:08.837428 coreos-metadata[783]: Nov 13 08:27:08.835 INFO Fetch successful
Nov 13 08:27:08.844317 coreos-metadata[784]: Nov 13 08:27:08.843 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Nov 13 08:27:08.848884 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully.
Nov 13 08:27:08.849107 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service.
Nov 13 08:27:08.853684 initrd-setup-root[813]: cut: /sysroot/etc/passwd: No such file or directory
Nov 13 08:27:08.858970 coreos-metadata[784]: Nov 13 08:27:08.858 INFO Fetch successful
Nov 13 08:27:08.862499 initrd-setup-root[820]: cut: /sysroot/etc/group: No such file or directory
Nov 13 08:27:08.867937 coreos-metadata[784]: Nov 13 08:27:08.867 INFO wrote hostname ci-4152.0.0-d-03c8fd271e to /sysroot/etc/hostname
Nov 13 08:27:08.871701 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 13 08:27:08.876147 initrd-setup-root[828]: cut: /sysroot/etc/shadow: No such file or directory
Nov 13 08:27:08.883415 initrd-setup-root[835]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 13 08:27:09.025229 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 13 08:27:09.031607 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 13 08:27:09.038321 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 13 08:27:09.050055 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 13 08:27:09.051740 kernel: BTRFS info (device vda6): last unmount of filesystem 97a326f3-1974-446c-b178-9e746095347a
Nov 13 08:27:09.085542 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 13 08:27:09.095771 ignition[902]: INFO : Ignition 2.20.0
Nov 13 08:27:09.097575 ignition[902]: INFO : Stage: mount
Nov 13 08:27:09.097575 ignition[902]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 13 08:27:09.097575 ignition[902]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 13 08:27:09.101831 ignition[902]: INFO : mount: mount passed
Nov 13 08:27:09.101831 ignition[902]: INFO : Ignition finished successfully
Nov 13 08:27:09.100913 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 13 08:27:09.120817 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 13 08:27:09.143784 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 13 08:27:09.157116 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (914)
Nov 13 08:27:09.157202 kernel: BTRFS info (device vda6): first mount of filesystem 97a326f3-1974-446c-b178-9e746095347a
Nov 13 08:27:09.158101 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 13 08:27:09.159764 kernel: BTRFS info (device vda6): using free space tree
Nov 13 08:27:09.166438 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 13 08:27:09.168224 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 13 08:27:09.200419 ignition[931]: INFO : Ignition 2.20.0
Nov 13 08:27:09.200419 ignition[931]: INFO : Stage: files
Nov 13 08:27:09.201941 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 13 08:27:09.201941 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 13 08:27:09.203814 ignition[931]: DEBUG : files: compiled without relabeling support, skipping
Nov 13 08:27:09.203814 ignition[931]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 13 08:27:09.203814 ignition[931]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 13 08:27:09.208489 ignition[931]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 13 08:27:09.209389 ignition[931]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 13 08:27:09.209389 ignition[931]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 13 08:27:09.209232 unknown[931]: wrote ssh authorized keys file for user: core
Nov 13 08:27:09.211963 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Nov 13 08:27:09.211963 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Nov 13 08:27:09.259099 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 13 08:27:09.337954 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Nov 13 08:27:09.337954 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Nov 13 08:27:09.337954 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Nov 13 08:27:09.679994 systemd-networkd[743]: eth0: Gained IPv6LL
Nov 13 08:27:09.820771 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Nov 13 08:27:09.905972 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Nov 13 08:27:09.905972 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Nov 13 08:27:09.905972 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Nov 13 08:27:09.905972 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 13 08:27:09.913327 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 13 08:27:09.913327 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 13 08:27:09.913327 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 13 08:27:09.913327 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 13 08:27:09.913327 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 13 08:27:09.913327 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 13 08:27:09.926728 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 13 08:27:09.926728 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Nov 13 08:27:09.926728 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Nov 13 08:27:09.926728 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Nov 13 08:27:09.926728 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Nov 13 08:27:10.168181 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Nov 13 08:27:10.255826 systemd-networkd[743]: eth1: Gained IPv6LL
Nov 13 08:27:10.560532 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Nov 13 08:27:10.560532 ignition[931]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Nov 13 08:27:10.565101 ignition[931]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 13 08:27:10.565101 ignition[931]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 13 08:27:10.565101 ignition[931]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Nov 13 08:27:10.565101 ignition[931]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Nov 13 08:27:10.565101 ignition[931]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Nov 13 08:27:10.565101 ignition[931]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 13 08:27:10.565101 ignition[931]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 13 08:27:10.565101 ignition[931]: INFO : files: files passed
Nov 13 08:27:10.565101 ignition[931]: INFO : Ignition finished successfully
Nov 13 08:27:10.566261 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 13 08:27:10.576869 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 13 08:27:10.587737 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 13 08:27:10.593575 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 13 08:27:10.593727 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 13 08:27:10.611443 initrd-setup-root-after-ignition[960]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 13 08:27:10.611443 initrd-setup-root-after-ignition[960]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 13 08:27:10.614718 initrd-setup-root-after-ignition[964]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 13 08:27:10.614577 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 13 08:27:10.616850 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 13 08:27:10.623815 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 13 08:27:10.689800 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 13 08:27:10.690032 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 13 08:27:10.693050 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 13 08:27:10.693970 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 13 08:27:10.694856 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 13 08:27:10.702009 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 13 08:27:10.727750 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 13 08:27:10.735009 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 13 08:27:10.756710 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 13 08:27:10.757867 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 13 08:27:10.759722 systemd[1]: Stopped target timers.target - Timer Units.
Nov 13 08:27:10.760905 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 13 08:27:10.761136 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 13 08:27:10.762744 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 13 08:27:10.763576 systemd[1]: Stopped target basic.target - Basic System.
Nov 13 08:27:10.765301 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 13 08:27:10.766911 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 13 08:27:10.768371 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 13 08:27:10.769898 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 13 08:27:10.771682 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 13 08:27:10.773386 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 13 08:27:10.774958 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 13 08:27:10.776521 systemd[1]: Stopped target swap.target - Swaps.
Nov 13 08:27:10.777703 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 13 08:27:10.777991 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 13 08:27:10.779730 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 13 08:27:10.781327 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 13 08:27:10.782772 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 13 08:27:10.782984 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 13 08:27:10.784431 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 13 08:27:10.784697 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 13 08:27:10.786347 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 13 08:27:10.786691 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 13 08:27:10.788568 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 13 08:27:10.788846 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 13 08:27:10.790083 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Nov 13 08:27:10.790219 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 13 08:27:10.806730 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 13 08:27:10.810614 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 13 08:27:10.811381 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 13 08:27:10.811619 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 13 08:27:10.814739 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 13 08:27:10.814924 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 13 08:27:10.828981 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 13 08:27:10.830260 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 13 08:27:10.841802 ignition[984]: INFO : Ignition 2.20.0
Nov 13 08:27:10.843274 ignition[984]: INFO : Stage: umount
Nov 13 08:27:10.844660 ignition[984]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 13 08:27:10.846305 ignition[984]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 13 08:27:10.855347 ignition[984]: INFO : umount: umount passed
Nov 13 08:27:10.855347 ignition[984]: INFO : Ignition finished successfully
Nov 13 08:27:10.857177 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 13 08:27:10.884256 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 13 08:27:10.884533 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 13 08:27:10.992509 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 13 08:27:10.992698 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 13 08:27:10.994318 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 13 08:27:10.994446 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 13 08:27:11.073654 systemd[1]: ignition-fetch.service: Deactivated successfully.
Nov 13 08:27:11.073755 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Nov 13 08:27:11.077079 systemd[1]: Stopped target network.target - Network.
Nov 13 08:27:11.078434 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 13 08:27:11.078555 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 13 08:27:11.082467 systemd[1]: Stopped target paths.target - Path Units.
Nov 13 08:27:11.083226 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 13 08:27:11.083850 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 13 08:27:11.084996 systemd[1]: Stopped target slices.target - Slice Units.
Nov 13 08:27:11.086576 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 13 08:27:11.088215 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 13 08:27:11.088353 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 13 08:27:11.089953 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 13 08:27:11.090037 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 13 08:27:11.091656 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 13 08:27:11.091767 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 13 08:27:11.093273 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 13 08:27:11.093374 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 13 08:27:11.094948 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 13 08:27:11.096815 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 13 08:27:11.098698 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 13 08:27:11.098841 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 13 08:27:11.100554 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 13 08:27:11.100706 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 13 08:27:11.100770 systemd-networkd[743]: eth0: DHCPv6 lease lost
Nov 13 08:27:11.105059 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 13 08:27:11.105193 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 13 08:27:11.105507 systemd-networkd[743]: eth1: DHCPv6 lease lost
Nov 13 08:27:11.110078 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 13 08:27:11.110863 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 13 08:27:11.113797 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 13 08:27:11.113920 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 13 08:27:11.120742 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 13 08:27:11.123873 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 13 08:27:11.123986 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 13 08:27:11.124848 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 13 08:27:11.124916 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 13 08:27:11.126223 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 13 08:27:11.126371 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 13 08:27:11.127739 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 13 08:27:11.127829 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 13 08:27:11.132190 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 13 08:27:11.149041 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 13 08:27:11.149311 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 13 08:27:11.153773 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 13 08:27:11.153875 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 13 08:27:11.154909 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 13 08:27:11.154982 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 13 08:27:11.156081 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 13 08:27:11.156178 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 13 08:27:11.158094 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 13 08:27:11.158157 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 13 08:27:11.159769 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 13 08:27:11.159841 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 13 08:27:11.168726 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 13 08:27:11.171052 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 13 08:27:11.171179 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 13 08:27:11.173101 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Nov 13 08:27:11.173238 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 13 08:27:11.174416 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 13 08:27:11.174523 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 13 08:27:11.177596 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 13 08:27:11.177680 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 13 08:27:11.180201 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 13 08:27:11.180364 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 13 08:27:11.182439 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 13 08:27:11.182595 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 13 08:27:11.185835 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 13 08:27:11.193931 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 13 08:27:11.211118 systemd[1]: Switching root.
Nov 13 08:27:11.342946 systemd-journald[184]: Journal stopped
Nov 13 08:27:12.983970 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
Nov 13 08:27:12.984169 kernel: SELinux: policy capability network_peer_controls=1
Nov 13 08:27:12.984197 kernel: SELinux: policy capability open_perms=1
Nov 13 08:27:12.984222 kernel: SELinux: policy capability extended_socket_class=1
Nov 13 08:27:12.984241 kernel: SELinux: policy capability always_check_network=0
Nov 13 08:27:12.984260 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 13 08:27:12.984281 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 13 08:27:12.984305 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 13 08:27:12.984323 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 13 08:27:12.984346 kernel: audit: type=1403 audit(1731486431.538:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 13 08:27:12.984368 systemd[1]: Successfully loaded SELinux policy in 52.251ms.
Nov 13 08:27:12.988503 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.092ms.
Nov 13 08:27:12.988569 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 13 08:27:12.988593 systemd[1]: Detected virtualization kvm.
Nov 13 08:27:12.988615 systemd[1]: Detected architecture x86-64.
Nov 13 08:27:12.988636 systemd[1]: Detected first boot.
Nov 13 08:27:12.988659 systemd[1]: Hostname set to .
Nov 13 08:27:12.988680 systemd[1]: Initializing machine ID from VM UUID.
Nov 13 08:27:12.988848 zram_generator::config[1027]: No configuration found.
Nov 13 08:27:12.988879 systemd[1]: Populated /etc with preset unit settings.
Nov 13 08:27:12.988898 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 13 08:27:12.988916 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 13 08:27:12.988937 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 13 08:27:12.988958 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 13 08:27:12.988987 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 13 08:27:12.989009 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 13 08:27:12.989038 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 13 08:27:12.989057 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 13 08:27:12.989074 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 13 08:27:12.989092 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 13 08:27:12.989110 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 13 08:27:12.989127 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 13 08:27:12.989146 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 13 08:27:12.989162 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 13 08:27:12.989178 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 13 08:27:12.989201 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 13 08:27:12.989219 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 13 08:27:12.989238 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 13 08:27:12.989254 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 13 08:27:12.989271 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 13 08:27:12.989291 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 13 08:27:12.989314 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 13 08:27:12.989332 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 13 08:27:12.989358 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 13 08:27:12.989375 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 13 08:27:12.991790 systemd[1]: Reached target slices.target - Slice Units.
Nov 13 08:27:12.991845 systemd[1]: Reached target swap.target - Swaps.
Nov 13 08:27:12.991867 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 13 08:27:12.991889 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 13 08:27:12.991911 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 13 08:27:12.991944 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 13 08:27:12.991965 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 13 08:27:12.991986 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 13 08:27:12.992007 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 13 08:27:12.992024 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 13 08:27:12.992047 systemd[1]: Mounting media.mount - External Media Directory...
Nov 13 08:27:12.992070 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 13 08:27:12.992091 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 13 08:27:12.992135 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 13 08:27:12.992160 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 13 08:27:12.992186 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 13 08:27:12.992207 systemd[1]: Reached target machines.target - Containers.
Nov 13 08:27:12.992230 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 13 08:27:12.992252 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 13 08:27:12.992274 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 13 08:27:12.992294 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 13 08:27:12.992317 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 13 08:27:12.992342 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 13 08:27:12.992363 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 13 08:27:12.992383 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 13 08:27:12.995461 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 13 08:27:12.995493 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 13 08:27:12.995514 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 13 08:27:12.995534 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 13 08:27:12.995561 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 13 08:27:12.995600 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 13 08:27:12.995618 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 13 08:27:12.995635 kernel: fuse: init (API version 7.39)
Nov 13 08:27:12.995654 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 13 08:27:12.995674 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 13 08:27:12.995694 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 13 08:27:12.995714 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 13 08:27:12.995735 systemd[1]: verity-setup.service: Deactivated successfully.
Nov 13 08:27:12.995755 systemd[1]: Stopped verity-setup.service.
Nov 13 08:27:12.995776 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 13 08:27:12.995801 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 13 08:27:12.995822 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 13 08:27:12.995843 systemd[1]: Mounted media.mount - External Media Directory.
Nov 13 08:27:12.995863 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 13 08:27:12.995882 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 13 08:27:12.995902 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 13 08:27:12.995927 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 13 08:27:12.995951 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 13 08:27:12.995971 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 13 08:27:12.995990 kernel: loop: module loaded
Nov 13 08:27:12.996012 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 13 08:27:12.996033 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 13 08:27:12.996052 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 13 08:27:12.996072 kernel: ACPI: bus type drm_connector registered
Nov 13 08:27:12.996092 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 13 08:27:12.996114 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 13 08:27:12.996135 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 13 08:27:12.996154 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 13 08:27:12.996177 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 13 08:27:12.996195 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 13 08:27:12.996213 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 13 08:27:12.996233 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 13 08:27:12.996362 systemd-journald[1103]: Collecting audit messages is disabled.
Nov 13 08:27:12.996599 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 13 08:27:12.996626 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 13 08:27:12.996646 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 13 08:27:12.996672 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 13 08:27:12.996693 systemd-journald[1103]: Journal started
Nov 13 08:27:12.996733 systemd-journald[1103]: Runtime Journal (/run/log/journal/112dad47252846fd964aa739715fa3a6) is 4.9M, max 39.3M, 34.4M free.
Nov 13 08:27:13.001304 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 13 08:27:12.478152 systemd[1]: Queued start job for default target multi-user.target.
Nov 13 08:27:12.502567 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Nov 13 08:27:12.503186 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 13 08:27:13.018360 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 13 08:27:13.022409 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 13 08:27:13.024438 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 13 08:27:13.029433 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Nov 13 08:27:13.049488 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 13 08:27:13.063285 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 13 08:27:13.063455 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 13 08:27:13.079707 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 13 08:27:13.085448 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 13 08:27:13.098166 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 13 08:27:13.098292 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 13 08:27:13.113571 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 13 08:27:13.127438 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 13 08:27:13.152187 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 13 08:27:13.158375 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 13 08:27:13.163561 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 13 08:27:13.170838 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 13 08:27:13.172779 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 13 08:27:13.175561 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 13 08:27:13.177462 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 13 08:27:13.220427 kernel: loop0: detected capacity change from 0 to 140992
Nov 13 08:27:13.240105 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 13 08:27:13.261842 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 13 08:27:13.275869 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Nov 13 08:27:13.302077 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 13 08:27:13.300711 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Nov 13 08:27:13.304634 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 13 08:27:13.317985 systemd-tmpfiles[1131]: ACLs are not supported, ignoring.
Nov 13 08:27:13.318005 systemd-tmpfiles[1131]: ACLs are not supported, ignoring.
Nov 13 08:27:13.322720 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 13 08:27:13.323742 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Nov 13 08:27:13.340765 kernel: loop1: detected capacity change from 0 to 210664
Nov 13 08:27:13.336842 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 13 08:27:13.346981 systemd-journald[1103]: Time spent on flushing to /var/log/journal/112dad47252846fd964aa739715fa3a6 is 102.911ms for 1006 entries.
Nov 13 08:27:13.346981 systemd-journald[1103]: System Journal (/var/log/journal/112dad47252846fd964aa739715fa3a6) is 8.0M, max 195.6M, 187.6M free.
Nov 13 08:27:13.477681 systemd-journald[1103]: Received client request to flush runtime journal.
Nov 13 08:27:13.477771 kernel: loop2: detected capacity change from 0 to 8
Nov 13 08:27:13.348766 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 13 08:27:13.380671 udevadm[1161]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Nov 13 08:27:13.461140 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 13 08:27:13.472794 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 13 08:27:13.479900 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 13 08:27:13.519789 kernel: loop3: detected capacity change from 0 to 138184
Nov 13 08:27:13.557022 systemd-tmpfiles[1168]: ACLs are not supported, ignoring.
Nov 13 08:27:13.557053 systemd-tmpfiles[1168]: ACLs are not supported, ignoring.
Nov 13 08:27:13.583132 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 13 08:27:13.597612 kernel: loop4: detected capacity change from 0 to 140992
Nov 13 08:27:13.628684 kernel: loop5: detected capacity change from 0 to 210664
Nov 13 08:27:13.651733 kernel: loop6: detected capacity change from 0 to 8
Nov 13 08:27:13.660541 kernel: loop7: detected capacity change from 0 to 138184
Nov 13 08:27:13.685487 (sd-merge)[1176]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Nov 13 08:27:13.686352 (sd-merge)[1176]: Merged extensions into '/usr'.
Nov 13 08:27:13.697586 systemd[1]: Reloading requested from client PID 1130 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 13 08:27:13.698483 systemd[1]: Reloading...
Nov 13 08:27:13.860430 zram_generator::config[1198]: No configuration found.
Nov 13 08:27:14.075957 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 13 08:27:14.078195 ldconfig[1126]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 13 08:27:14.134380 systemd[1]: Reloading finished in 435 ms.
Nov 13 08:27:14.162031 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 13 08:27:14.164733 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 13 08:27:14.176767 systemd[1]: Starting ensure-sysext.service...
Nov 13 08:27:14.191748 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 13 08:27:14.218709 systemd[1]: Reloading requested from client PID 1245 ('systemctl') (unit ensure-sysext.service)...
Nov 13 08:27:14.218744 systemd[1]: Reloading...
Nov 13 08:27:14.249118 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 13 08:27:14.252168 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 13 08:27:14.255486 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 13 08:27:14.256158 systemd-tmpfiles[1246]: ACLs are not supported, ignoring.
Nov 13 08:27:14.256292 systemd-tmpfiles[1246]: ACLs are not supported, ignoring.
Nov 13 08:27:14.260697 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot.
Nov 13 08:27:14.260858 systemd-tmpfiles[1246]: Skipping /boot
Nov 13 08:27:14.280108 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot.
Nov 13 08:27:14.280479 systemd-tmpfiles[1246]: Skipping /boot
Nov 13 08:27:14.375472 zram_generator::config[1273]: No configuration found.
Nov 13 08:27:14.573079 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 13 08:27:14.665122 systemd[1]: Reloading finished in 445 ms.
Nov 13 08:27:14.683891 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 13 08:27:14.690376 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 13 08:27:14.709793 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 13 08:27:14.718957 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 13 08:27:14.724565 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 13 08:27:14.739540 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 13 08:27:14.742236 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 13 08:27:14.749853 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 13 08:27:14.756668 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 13 08:27:14.757514 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 13 08:27:14.769767 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 13 08:27:14.777919 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 13 08:27:14.782033 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 13 08:27:14.782867 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 13 08:27:14.783061 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 13 08:27:14.792590 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 13 08:27:14.801818 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 13 08:27:14.802070 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 13 08:27:14.802322 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 13 08:27:14.802456 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 13 08:27:14.817686 systemd[1]: Finished ensure-sysext.service.
Nov 13 08:27:14.821101 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 13 08:27:14.822202 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 13 08:27:14.831838 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 13 08:27:14.832924 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 13 08:27:14.843953 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 13 08:27:14.847028 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 13 08:27:14.847847 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 13 08:27:14.849244 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 13 08:27:14.849538 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 13 08:27:14.861309 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 13 08:27:14.866092 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 13 08:27:14.866553 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 13 08:27:14.873737 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 13 08:27:14.881705 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 13 08:27:14.896043 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 13 08:27:14.896246 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 13 08:27:14.897865 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 13 08:27:14.910474 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 13 08:27:14.911524 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 13 08:27:14.919753 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 13 08:27:14.922385 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 13 08:27:14.931730 systemd-udevd[1326]: Using default interface naming scheme 'v255'.
Nov 13 08:27:14.935356 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 13 08:27:14.943486 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 13 08:27:14.944802 augenrules[1359]: No rules
Nov 13 08:27:14.946947 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 13 08:27:14.947210 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 13 08:27:14.985489 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 13 08:27:14.996993 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 13 08:27:15.055341 systemd-resolved[1321]: Positive Trust Anchors:
Nov 13 08:27:15.055361 systemd-resolved[1321]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 13 08:27:15.055653 systemd-resolved[1321]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 13 08:27:15.065264 systemd-resolved[1321]: Using system hostname 'ci-4152.0.0-d-03c8fd271e'.
Nov 13 08:27:15.068177 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 13 08:27:15.069025 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 13 08:27:15.108121 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 13 08:27:15.110362 systemd[1]: Reached target time-set.target - System Time Set.
Nov 13 08:27:15.150575 systemd-networkd[1375]: lo: Link UP
Nov 13 08:27:15.150602 systemd-networkd[1375]: lo: Gained carrier
Nov 13 08:27:15.151544 systemd-networkd[1375]: Enumeration completed
Nov 13 08:27:15.151671 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 13 08:27:15.152377 systemd[1]: Reached target network.target - Network.
Nov 13 08:27:15.161672 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 13 08:27:15.168459 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1376)
Nov 13 08:27:15.179062 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Nov 13 08:27:15.207951 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1376)
Nov 13 08:27:15.215429 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Nov 13 08:27:15.216520 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 13 08:27:15.216734 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 13 08:27:15.227486 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 13 08:27:15.230689 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 13 08:27:15.242323 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 13 08:27:15.267807 kernel: ISO 9660 Extensions: RRIP_1991A
Nov 13 08:27:15.267970 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 13 08:27:15.268022 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 13 08:27:15.268039 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 13 08:27:15.275737 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Nov 13 08:27:15.290076 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 13 08:27:15.291243 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 13 08:27:15.294139 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 13 08:27:15.295544 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 13 08:27:15.303109 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 13 08:27:15.318434 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1372)
Nov 13 08:27:15.318515 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 13 08:27:15.318829 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 13 08:27:15.320175 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 13 08:27:15.349434 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Nov 13 08:27:15.358080 systemd-networkd[1375]: eth0: Configuring with /run/systemd/network/10-86:60:fe:71:79:54.network.
Nov 13 08:27:15.360144 systemd-networkd[1375]: eth0: Link UP
Nov 13 08:27:15.360157 systemd-networkd[1375]: eth0: Gained carrier
Nov 13 08:27:15.368537 systemd-timesyncd[1342]: Network configuration changed, trying to establish connection.
Nov 13 08:27:15.375319 systemd-networkd[1375]: eth1: Configuring with /run/systemd/network/10-ea:0e:02:d8:33:f0.network.
Nov 13 08:27:15.375905 systemd-timesyncd[1342]: Network configuration changed, trying to establish connection.
Nov 13 08:27:15.376154 systemd-networkd[1375]: eth1: Link UP
Nov 13 08:27:15.376166 systemd-networkd[1375]: eth1: Gained carrier
Nov 13 08:27:15.381983 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Nov 13 08:27:15.394687 kernel: ACPI: button: Power Button [PWRF]
Nov 13 08:27:15.381892 systemd-timesyncd[1342]: Network configuration changed, trying to establish connection.
Nov 13 08:27:15.383563 systemd-timesyncd[1342]: Network configuration changed, trying to establish connection.
Nov 13 08:27:15.430476 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Nov 13 08:27:15.489432 kernel: mousedev: PS/2 mouse device common for all mice
Nov 13 08:27:15.493630 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 13 08:27:15.510939 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 13 08:27:15.515774 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 13 08:27:15.570035 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 13 08:27:15.584456 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Nov 13 08:27:15.587649 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Nov 13 08:27:15.600825 kernel: Console: switching to colour dummy device 80x25
Nov 13 08:27:15.600976 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Nov 13 08:27:15.601010 kernel: [drm] features: -context_init
Nov 13 08:27:15.634844 kernel: [drm] number of scanouts: 1
Nov 13 08:27:15.634988 kernel: [drm] number of cap sets: 0
Nov 13 08:27:15.640219 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 13 08:27:15.640860 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 13 08:27:15.661480 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Nov 13 08:27:15.664122 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 13 08:27:15.688279 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Nov 13 08:27:15.688585 kernel: Console: switching to colour frame buffer device 128x48
Nov 13 08:27:15.696429 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Nov 13 08:27:15.715187 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 13 08:27:15.715853 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 13 08:27:15.730363 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 13 08:27:15.782100 kernel: EDAC MC: Ver: 3.0.0
Nov 13 08:27:15.802298 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 13 08:27:15.811931 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Nov 13 08:27:15.825883 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Nov 13 08:27:15.847452 lvm[1436]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 13 08:27:15.888326 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Nov 13 08:27:15.889928 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 13 08:27:15.890053 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 13 08:27:15.890244 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 13 08:27:15.890368 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 13 08:27:15.890828 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 13 08:27:15.891058 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 13 08:27:15.891183 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 13 08:27:15.891269 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 13 08:27:15.891308 systemd[1]: Reached target paths.target - Path Units.
Nov 13 08:27:15.891364 systemd[1]: Reached target timers.target - Timer Units.
Nov 13 08:27:15.894708 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 13 08:27:15.898492 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 13 08:27:15.912230 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 13 08:27:15.919843 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Nov 13 08:27:15.923245 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 13 08:27:15.925484 systemd[1]: Reached target sockets.target - Socket Units.
Nov 13 08:27:15.927363 systemd[1]: Reached target basic.target - Basic System.
Nov 13 08:27:15.927951 lvm[1440]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 13 08:27:15.930042 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 13 08:27:15.930118 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 13 08:27:15.936680 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 13 08:27:15.947747 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Nov 13 08:27:15.959744 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 13 08:27:15.970810 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 13 08:27:15.980021 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 13 08:27:15.981847 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 13 08:27:15.990933 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 13 08:27:15.998708 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 13 08:27:16.003130 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 13 08:27:16.021553 coreos-metadata[1442]: Nov 13 08:27:16.011 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Nov 13 08:27:16.016745 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 13 08:27:16.042434 coreos-metadata[1442]: Nov 13 08:27:16.028 INFO Fetch successful
Nov 13 08:27:16.030494 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 13 08:27:16.032530 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 13 08:27:16.033106 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 13 08:27:16.035825 systemd[1]: Starting update-engine.service - Update Engine...
Nov 13 08:27:16.047606 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 13 08:27:16.053231 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Nov 13 08:27:16.076428 jq[1446]: false
Nov 13 08:27:16.074033 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 13 08:27:16.074335 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 13 08:27:16.090021 update_engine[1453]: I20241113 08:27:16.078483 1453 main.cc:92] Flatcar Update Engine starting
Nov 13 08:27:16.089777 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 13 08:27:16.084973 dbus-daemon[1443]: [system] SELinux support is enabled
Nov 13 08:27:16.092698 update_engine[1453]: I20241113 08:27:16.091423 1453 update_check_scheduler.cc:74] Next update check in 5m50s
Nov 13 08:27:16.098355 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 13 08:27:16.131990 jq[1454]: true
Nov 13 08:27:16.098734 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 13 08:27:16.155805 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Nov 13 08:27:16.159322 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 13 08:27:16.159360 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 13 08:27:16.159974 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 13 08:27:16.160061 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
Nov 13 08:27:16.160078 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 13 08:27:16.173447 extend-filesystems[1447]: Found loop4
Nov 13 08:27:16.173447 extend-filesystems[1447]: Found loop5
Nov 13 08:27:16.173447 extend-filesystems[1447]: Found loop6
Nov 13 08:27:16.173447 extend-filesystems[1447]: Found loop7
Nov 13 08:27:16.173447 extend-filesystems[1447]: Found vda
Nov 13 08:27:16.173447 extend-filesystems[1447]: Found vda1
Nov 13 08:27:16.173447 extend-filesystems[1447]: Found vda2
Nov 13 08:27:16.173447 extend-filesystems[1447]: Found vda3
Nov 13 08:27:16.173447 extend-filesystems[1447]: Found usr
Nov 13 08:27:16.173447 extend-filesystems[1447]: Found vda4
Nov 13 08:27:16.173447 extend-filesystems[1447]: Found vda6
Nov 13 08:27:16.173447 extend-filesystems[1447]: Found vda7
Nov 13 08:27:16.173447 extend-filesystems[1447]: Found vda9
Nov 13 08:27:16.173447 extend-filesystems[1447]: Checking size of /dev/vda9
Nov 13 08:27:16.196545 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Nov 13 08:27:16.259471 tar[1470]: linux-amd64/helm
Nov 13 08:27:16.213880 systemd[1]: motdgen.service: Deactivated successfully.
Nov 13 08:27:16.214212 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 13 08:27:16.215629 (ntainerd)[1479]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Nov 13 08:27:16.268693 jq[1473]: true
Nov 13 08:27:16.227813 systemd[1]: Started update-engine.service - Update Engine.
Nov 13 08:27:16.235899 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Nov 13 08:27:16.248752 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 13 08:27:16.304301 extend-filesystems[1447]: Resized partition /dev/vda9
Nov 13 08:27:16.307244 extend-filesystems[1491]: resize2fs 1.47.1 (20-May-2024)
Nov 13 08:27:16.324228 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Nov 13 08:27:16.413515 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1384)
Nov 13 08:27:16.511703 systemd-logind[1452]: New seat seat0.
Nov 13 08:27:16.521449 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Nov 13 08:27:16.577583 systemd-logind[1452]: Watching system buttons on /dev/input/event1 (Power Button)
Nov 13 08:27:16.577610 systemd-logind[1452]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Nov 13 08:27:16.579009 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 13 08:27:16.586025 extend-filesystems[1491]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Nov 13 08:27:16.586025 extend-filesystems[1491]: old_desc_blocks = 1, new_desc_blocks = 8
Nov 13 08:27:16.586025 extend-filesystems[1491]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Nov 13 08:27:16.603830 extend-filesystems[1447]: Resized filesystem in /dev/vda9
Nov 13 08:27:16.603830 extend-filesystems[1447]: Found vdb
Nov 13 08:27:16.592843 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 13 08:27:16.593548 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 13 08:27:16.609920 locksmithd[1485]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 13 08:27:16.616546 bash[1506]: Updated "/home/core/.ssh/authorized_keys"
Nov 13 08:27:16.616569 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 13 08:27:16.635028 systemd[1]: Starting sshkeys.service...
Nov 13 08:27:16.657565 systemd-networkd[1375]: eth1: Gained IPv6LL
Nov 13 08:27:16.659046 systemd-timesyncd[1342]: Network configuration changed, trying to establish connection.
Nov 13 08:27:16.665533 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Nov 13 08:27:16.675128 systemd[1]: Reached target network-online.target - Network is Online.
Nov 13 08:27:16.690727 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 13 08:27:16.702090 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Nov 13 08:27:16.746778 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Nov 13 08:27:16.759088 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Nov 13 08:27:16.833484 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Nov 13 08:27:16.874218 coreos-metadata[1524]: Nov 13 08:27:16.871 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Nov 13 08:27:16.887530 coreos-metadata[1524]: Nov 13 08:27:16.886 INFO Fetch successful
Nov 13 08:27:16.913148 unknown[1524]: wrote ssh authorized keys file for user: core
Nov 13 08:27:16.943695 sshd_keygen[1466]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Nov 13 08:27:16.975505 containerd[1479]: time="2024-11-13T08:27:16.975064620Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Nov 13 08:27:16.994001 update-ssh-keys[1533]: Updated "/home/core/.ssh/authorized_keys"
Nov 13 08:27:16.994678 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Nov 13 08:27:17.000510 systemd[1]: Finished sshkeys.service.
Nov 13 08:27:17.070502 containerd[1479]: time="2024-11-13T08:27:17.070432683Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Nov 13 08:27:17.077986 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Nov 13 08:27:17.080227 containerd[1479]: time="2024-11-13T08:27:17.080160210Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.60-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Nov 13 08:27:17.080464 containerd[1479]: time="2024-11-13T08:27:17.080442593Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Nov 13 08:27:17.080528 containerd[1479]: time="2024-11-13T08:27:17.080516014Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Nov 13 08:27:17.080799 containerd[1479]: time="2024-11-13T08:27:17.080759010Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Nov 13 08:27:17.084428 containerd[1479]: time="2024-11-13T08:27:17.082527235Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Nov 13 08:27:17.084428 containerd[1479]: time="2024-11-13T08:27:17.082772570Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Nov 13 08:27:17.084428 containerd[1479]: time="2024-11-13T08:27:17.082797704Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Nov 13 08:27:17.084428 containerd[1479]: time="2024-11-13T08:27:17.083086453Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 13 08:27:17.084428 containerd[1479]: time="2024-11-13T08:27:17.083110719Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Nov 13 08:27:17.084428 containerd[1479]: time="2024-11-13T08:27:17.083130864Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Nov 13 08:27:17.084428 containerd[1479]: time="2024-11-13T08:27:17.083146772Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Nov 13 08:27:17.084428 containerd[1479]: time="2024-11-13T08:27:17.083291335Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Nov 13 08:27:17.084428 containerd[1479]: time="2024-11-13T08:27:17.083936696Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Nov 13 08:27:17.084428 containerd[1479]: time="2024-11-13T08:27:17.084229241Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 13 08:27:17.084428 containerd[1479]: time="2024-11-13T08:27:17.084255815Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Nov 13 08:27:17.088466 containerd[1479]: time="2024-11-13T08:27:17.087882481Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Nov 13 08:27:17.088466 containerd[1479]: time="2024-11-13T08:27:17.088139517Z" level=info msg="metadata content store policy set" policy=shared
Nov 13 08:27:17.093412 systemd[1]: Starting issuegen.service - Generate /run/issue...
Nov 13 08:27:17.100930 systemd[1]: Started sshd@0-209.38.128.242:22-139.178.89.65:34158.service - OpenSSH per-connection server daemon (139.178.89.65:34158).
Nov 13 08:27:17.110858 containerd[1479]: time="2024-11-13T08:27:17.109781292Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Nov 13 08:27:17.110858 containerd[1479]: time="2024-11-13T08:27:17.109925437Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Nov 13 08:27:17.110858 containerd[1479]: time="2024-11-13T08:27:17.109954093Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Nov 13 08:27:17.110858 containerd[1479]: time="2024-11-13T08:27:17.109974474Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Nov 13 08:27:17.110858 containerd[1479]: time="2024-11-13T08:27:17.109994280Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Nov 13 08:27:17.110858 containerd[1479]: time="2024-11-13T08:27:17.110244899Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Nov 13 08:27:17.121262 containerd[1479]: time="2024-11-13T08:27:17.120982544Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Nov 13 08:27:17.121859 systemd[1]: issuegen.service: Deactivated successfully.
Nov 13 08:27:17.122125 systemd[1]: Finished issuegen.service - Generate /run/issue.
Nov 13 08:27:17.130088 containerd[1479]: time="2024-11-13T08:27:17.129009395Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Nov 13 08:27:17.130088 containerd[1479]: time="2024-11-13T08:27:17.129069594Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Nov 13 08:27:17.130088 containerd[1479]: time="2024-11-13T08:27:17.129098458Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Nov 13 08:27:17.130088 containerd[1479]: time="2024-11-13T08:27:17.129121683Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Nov 13 08:27:17.130088 containerd[1479]: time="2024-11-13T08:27:17.129143460Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Nov 13 08:27:17.130088 containerd[1479]: time="2024-11-13T08:27:17.129162115Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Nov 13 08:27:17.130088 containerd[1479]: time="2024-11-13T08:27:17.129185439Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Nov 13 08:27:17.130088 containerd[1479]: time="2024-11-13T08:27:17.129212466Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Nov 13 08:27:17.130088 containerd[1479]: time="2024-11-13T08:27:17.129228284Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Nov 13 08:27:17.130088 containerd[1479]: time="2024-11-13T08:27:17.129245565Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Nov 13 08:27:17.130088 containerd[1479]: time="2024-11-13T08:27:17.129262860Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Nov 13 08:27:17.130088 containerd[1479]: time="2024-11-13T08:27:17.129288926Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Nov 13 08:27:17.130088 containerd[1479]: time="2024-11-13T08:27:17.129309734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Nov 13 08:27:17.130088 containerd[1479]: time="2024-11-13T08:27:17.129328286Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Nov 13 08:27:17.129888 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Nov 13 08:27:17.130614 containerd[1479]: time="2024-11-13T08:27:17.129349208Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Nov 13 08:27:17.130614 containerd[1479]: time="2024-11-13T08:27:17.129376052Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Nov 13 08:27:17.142810 containerd[1479]: time="2024-11-13T08:27:17.141556368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Nov 13 08:27:17.142810 containerd[1479]: time="2024-11-13T08:27:17.141611050Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Nov 13 08:27:17.142810 containerd[1479]: time="2024-11-13T08:27:17.141628965Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Nov 13 08:27:17.142810 containerd[1479]: time="2024-11-13T08:27:17.141643448Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Nov 13 08:27:17.142810 containerd[1479]: time="2024-11-13T08:27:17.141662770Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Nov 13 08:27:17.142810 containerd[1479]: time="2024-11-13T08:27:17.141676609Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Nov 13 08:27:17.142810 containerd[1479]: time="2024-11-13T08:27:17.141691058Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Nov 13 08:27:17.142810 containerd[1479]: time="2024-11-13T08:27:17.141704159Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Nov 13 08:27:17.142810 containerd[1479]: time="2024-11-13T08:27:17.141718558Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Nov 13 08:27:17.142810 containerd[1479]: time="2024-11-13T08:27:17.141749879Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Nov 13 08:27:17.142810 containerd[1479]: time="2024-11-13T08:27:17.141765398Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Nov 13 08:27:17.142810 containerd[1479]: time="2024-11-13T08:27:17.141776766Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Nov 13 08:27:17.142810 containerd[1479]: time="2024-11-13T08:27:17.141832481Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Nov 13 08:27:17.142810 containerd[1479]: time="2024-11-13T08:27:17.141856683Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Nov 13 08:27:17.143382 containerd[1479]: time="2024-11-13T08:27:17.141869578Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..."
type=io.containerd.internal.v1 Nov 13 08:27:17.143382 containerd[1479]: time="2024-11-13T08:27:17.141882052Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 13 08:27:17.143382 containerd[1479]: time="2024-11-13T08:27:17.141894529Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 13 08:27:17.143382 containerd[1479]: time="2024-11-13T08:27:17.141909871Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 13 08:27:17.143382 containerd[1479]: time="2024-11-13T08:27:17.141922237Z" level=info msg="NRI interface is disabled by configuration." Nov 13 08:27:17.143382 containerd[1479]: time="2024-11-13T08:27:17.141969916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Nov 13 08:27:17.143541 containerd[1479]: time="2024-11-13T08:27:17.142463732Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 
Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 13 08:27:17.143541 containerd[1479]: time="2024-11-13T08:27:17.142527479Z" level=info msg="Connect containerd service" Nov 13 08:27:17.143541 containerd[1479]: time="2024-11-13T08:27:17.142603289Z" level=info msg="using legacy CRI server" Nov 13 08:27:17.143541 containerd[1479]: time="2024-11-13T08:27:17.142616316Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 13 08:27:17.143541 containerd[1479]: 
time="2024-11-13T08:27:17.142756253Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 13 08:27:17.152461 containerd[1479]: time="2024-11-13T08:27:17.150972454Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 13 08:27:17.152461 containerd[1479]: time="2024-11-13T08:27:17.151221923Z" level=info msg="Start subscribing containerd event" Nov 13 08:27:17.152461 containerd[1479]: time="2024-11-13T08:27:17.151301973Z" level=info msg="Start recovering state" Nov 13 08:27:17.152461 containerd[1479]: time="2024-11-13T08:27:17.151416160Z" level=info msg="Start event monitor" Nov 13 08:27:17.152461 containerd[1479]: time="2024-11-13T08:27:17.151444115Z" level=info msg="Start snapshots syncer" Nov 13 08:27:17.152461 containerd[1479]: time="2024-11-13T08:27:17.151456339Z" level=info msg="Start cni network conf syncer for default" Nov 13 08:27:17.152461 containerd[1479]: time="2024-11-13T08:27:17.151464625Z" level=info msg="Start streaming server" Nov 13 08:27:17.152461 containerd[1479]: time="2024-11-13T08:27:17.151669280Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 13 08:27:17.152461 containerd[1479]: time="2024-11-13T08:27:17.151753659Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 13 08:27:17.152461 containerd[1479]: time="2024-11-13T08:27:17.152190405Z" level=info msg="containerd successfully booted in 0.185166s" Nov 13 08:27:17.154601 systemd[1]: Started containerd.service - containerd container runtime. Nov 13 08:27:17.169559 systemd-networkd[1375]: eth0: Gained IPv6LL Nov 13 08:27:17.170043 systemd-timesyncd[1342]: Network configuration changed, trying to establish connection. 
Nov 13 08:27:17.230005 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 13 08:27:17.244731 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 13 08:27:17.255647 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 13 08:27:17.259044 systemd[1]: Reached target getty.target - Login Prompts. Nov 13 08:27:17.341359 sshd[1549]: Accepted publickey for core from 139.178.89.65 port 34158 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM Nov 13 08:27:17.344064 sshd-session[1549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 08:27:17.368242 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 13 08:27:17.379332 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 13 08:27:17.393890 systemd-logind[1452]: New session 1 of user core. Nov 13 08:27:17.442536 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 13 08:27:17.459059 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 13 08:27:17.485366 (systemd)[1562]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 13 08:27:17.670229 systemd[1562]: Queued start job for default target default.target. Nov 13 08:27:17.676137 systemd[1562]: Created slice app.slice - User Application Slice. Nov 13 08:27:17.676195 systemd[1562]: Reached target paths.target - Paths. Nov 13 08:27:17.676211 systemd[1562]: Reached target timers.target - Timers. Nov 13 08:27:17.680687 systemd[1562]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 13 08:27:17.706912 systemd[1562]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 13 08:27:17.707112 systemd[1562]: Reached target sockets.target - Sockets. Nov 13 08:27:17.707138 systemd[1562]: Reached target basic.target - Basic System. Nov 13 08:27:17.707258 systemd[1562]: Reached target default.target - Main User Target. 
Nov 13 08:27:17.707309 systemd[1562]: Startup finished in 202ms. Nov 13 08:27:17.708035 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 13 08:27:17.719822 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 13 08:27:17.816210 systemd[1]: Started sshd@1-209.38.128.242:22-139.178.89.65:60306.service - OpenSSH per-connection server daemon (139.178.89.65:60306). Nov 13 08:27:17.918487 tar[1470]: linux-amd64/LICENSE Nov 13 08:27:17.918487 tar[1470]: linux-amd64/README.md Nov 13 08:27:17.923493 sshd[1573]: Accepted publickey for core from 139.178.89.65 port 60306 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM Nov 13 08:27:17.929227 sshd-session[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 08:27:17.940700 systemd-logind[1452]: New session 2 of user core. Nov 13 08:27:17.942638 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 13 08:27:17.950618 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 13 08:27:18.022964 sshd[1578]: Connection closed by 139.178.89.65 port 60306 Nov 13 08:27:18.024687 sshd-session[1573]: pam_unix(sshd:session): session closed for user core Nov 13 08:27:18.037877 systemd[1]: sshd@1-209.38.128.242:22-139.178.89.65:60306.service: Deactivated successfully. Nov 13 08:27:18.043112 systemd[1]: session-2.scope: Deactivated successfully. Nov 13 08:27:18.046701 systemd-logind[1452]: Session 2 logged out. Waiting for processes to exit. Nov 13 08:27:18.059857 systemd[1]: Started sshd@2-209.38.128.242:22-139.178.89.65:60312.service - OpenSSH per-connection server daemon (139.178.89.65:60312). Nov 13 08:27:18.069313 systemd-logind[1452]: Removed session 2. 
Nov 13 08:27:18.128749 sshd[1583]: Accepted publickey for core from 139.178.89.65 port 60312 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM Nov 13 08:27:18.130453 sshd-session[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 08:27:18.142501 systemd-logind[1452]: New session 3 of user core. Nov 13 08:27:18.146836 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 13 08:27:18.222207 sshd[1585]: Connection closed by 139.178.89.65 port 60312 Nov 13 08:27:18.224714 sshd-session[1583]: pam_unix(sshd:session): session closed for user core Nov 13 08:27:18.229921 systemd-logind[1452]: Session 3 logged out. Waiting for processes to exit. Nov 13 08:27:18.231104 systemd[1]: sshd@2-209.38.128.242:22-139.178.89.65:60312.service: Deactivated successfully. Nov 13 08:27:18.235462 systemd[1]: session-3.scope: Deactivated successfully. Nov 13 08:27:18.240619 systemd-logind[1452]: Removed session 3. Nov 13 08:27:18.527851 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 13 08:27:18.531808 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 13 08:27:18.536303 systemd[1]: Startup finished in 1.621s (kernel) + 6.758s (initrd) + 7.048s (userspace) = 15.428s. 
Nov 13 08:27:18.537706 (kubelet)[1594]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 13 08:27:19.438347 kubelet[1594]: E1113 08:27:19.438127 1594 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 13 08:27:19.441515 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 13 08:27:19.441730 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 13 08:27:19.442262 systemd[1]: kubelet.service: Consumed 1.624s CPU time. Nov 13 08:27:28.241918 systemd[1]: Started sshd@3-209.38.128.242:22-139.178.89.65:46040.service - OpenSSH per-connection server daemon (139.178.89.65:46040). Nov 13 08:27:28.306565 sshd[1607]: Accepted publickey for core from 139.178.89.65 port 46040 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM Nov 13 08:27:28.309162 sshd-session[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 08:27:28.316301 systemd-logind[1452]: New session 4 of user core. Nov 13 08:27:28.323934 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 13 08:27:28.388156 sshd[1609]: Connection closed by 139.178.89.65 port 46040 Nov 13 08:27:28.389219 sshd-session[1607]: pam_unix(sshd:session): session closed for user core Nov 13 08:27:28.408094 systemd[1]: sshd@3-209.38.128.242:22-139.178.89.65:46040.service: Deactivated successfully. Nov 13 08:27:28.411988 systemd[1]: session-4.scope: Deactivated successfully. Nov 13 08:27:28.413987 systemd-logind[1452]: Session 4 logged out. Waiting for processes to exit. 
Nov 13 08:27:28.421049 systemd[1]: Started sshd@4-209.38.128.242:22-139.178.89.65:46056.service - OpenSSH per-connection server daemon (139.178.89.65:46056). Nov 13 08:27:28.422887 systemd-logind[1452]: Removed session 4. Nov 13 08:27:28.494957 sshd[1614]: Accepted publickey for core from 139.178.89.65 port 46056 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM Nov 13 08:27:28.497376 sshd-session[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 08:27:28.505502 systemd-logind[1452]: New session 5 of user core. Nov 13 08:27:28.517882 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 13 08:27:28.586934 sshd[1616]: Connection closed by 139.178.89.65 port 46056 Nov 13 08:27:28.587892 sshd-session[1614]: pam_unix(sshd:session): session closed for user core Nov 13 08:27:28.607249 systemd[1]: sshd@4-209.38.128.242:22-139.178.89.65:46056.service: Deactivated successfully. Nov 13 08:27:28.610417 systemd[1]: session-5.scope: Deactivated successfully. Nov 13 08:27:28.615899 systemd-logind[1452]: Session 5 logged out. Waiting for processes to exit. Nov 13 08:27:28.631289 systemd[1]: Started sshd@5-209.38.128.242:22-139.178.89.65:46062.service - OpenSSH per-connection server daemon (139.178.89.65:46062). Nov 13 08:27:28.633804 systemd-logind[1452]: Removed session 5. Nov 13 08:27:28.695106 sshd[1621]: Accepted publickey for core from 139.178.89.65 port 46062 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM Nov 13 08:27:28.699794 sshd-session[1621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 08:27:28.708889 systemd-logind[1452]: New session 6 of user core. Nov 13 08:27:28.718927 systemd[1]: Started session-6.scope - Session 6 of User core. 
Nov 13 08:27:28.795556 sshd[1623]: Connection closed by 139.178.89.65 port 46062 Nov 13 08:27:28.796721 sshd-session[1621]: pam_unix(sshd:session): session closed for user core Nov 13 08:27:28.807560 systemd[1]: sshd@5-209.38.128.242:22-139.178.89.65:46062.service: Deactivated successfully. Nov 13 08:27:28.810688 systemd[1]: session-6.scope: Deactivated successfully. Nov 13 08:27:28.813498 systemd-logind[1452]: Session 6 logged out. Waiting for processes to exit. Nov 13 08:27:28.820916 systemd[1]: Started sshd@6-209.38.128.242:22-139.178.89.65:46076.service - OpenSSH per-connection server daemon (139.178.89.65:46076). Nov 13 08:27:28.822748 systemd-logind[1452]: Removed session 6. Nov 13 08:27:28.895926 sshd[1628]: Accepted publickey for core from 139.178.89.65 port 46076 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM Nov 13 08:27:28.898174 sshd-session[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 08:27:28.908564 systemd-logind[1452]: New session 7 of user core. Nov 13 08:27:28.914964 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 13 08:27:28.993983 sudo[1631]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 13 08:27:28.994611 sudo[1631]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 13 08:27:29.012127 sudo[1631]: pam_unix(sudo:session): session closed for user root Nov 13 08:27:29.018126 sshd[1630]: Connection closed by 139.178.89.65 port 46076 Nov 13 08:27:29.016997 sshd-session[1628]: pam_unix(sshd:session): session closed for user core Nov 13 08:27:29.029470 systemd[1]: sshd@6-209.38.128.242:22-139.178.89.65:46076.service: Deactivated successfully. Nov 13 08:27:29.033698 systemd[1]: session-7.scope: Deactivated successfully. Nov 13 08:27:29.037862 systemd-logind[1452]: Session 7 logged out. Waiting for processes to exit. 
Nov 13 08:27:29.045061 systemd[1]: Started sshd@7-209.38.128.242:22-139.178.89.65:46092.service - OpenSSH per-connection server daemon (139.178.89.65:46092). Nov 13 08:27:29.048804 systemd-logind[1452]: Removed session 7. Nov 13 08:27:29.108319 sshd[1636]: Accepted publickey for core from 139.178.89.65 port 46092 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM Nov 13 08:27:29.111120 sshd-session[1636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 08:27:29.120830 systemd-logind[1452]: New session 8 of user core. Nov 13 08:27:29.130931 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 13 08:27:29.198037 sudo[1640]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 13 08:27:29.198648 sudo[1640]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 13 08:27:29.205922 sudo[1640]: pam_unix(sudo:session): session closed for user root Nov 13 08:27:29.217057 sudo[1639]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 13 08:27:29.218195 sudo[1639]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 13 08:27:29.243163 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 13 08:27:29.293372 augenrules[1662]: No rules Nov 13 08:27:29.295350 systemd[1]: audit-rules.service: Deactivated successfully. Nov 13 08:27:29.295601 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 13 08:27:29.297851 sudo[1639]: pam_unix(sudo:session): session closed for user root Nov 13 08:27:29.301427 sshd[1638]: Connection closed by 139.178.89.65 port 46092 Nov 13 08:27:29.302272 sshd-session[1636]: pam_unix(sshd:session): session closed for user core Nov 13 08:27:29.312985 systemd[1]: sshd@7-209.38.128.242:22-139.178.89.65:46092.service: Deactivated successfully. 
Nov 13 08:27:29.315736 systemd[1]: session-8.scope: Deactivated successfully. Nov 13 08:27:29.318790 systemd-logind[1452]: Session 8 logged out. Waiting for processes to exit. Nov 13 08:27:29.329712 systemd[1]: Started sshd@8-209.38.128.242:22-139.178.89.65:46094.service - OpenSSH per-connection server daemon (139.178.89.65:46094). Nov 13 08:27:29.332722 systemd-logind[1452]: Removed session 8. Nov 13 08:27:29.389905 sshd[1670]: Accepted publickey for core from 139.178.89.65 port 46094 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM Nov 13 08:27:29.393006 sshd-session[1670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 08:27:29.401457 systemd-logind[1452]: New session 9 of user core. Nov 13 08:27:29.407885 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 13 08:27:29.471976 sudo[1673]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 13 08:27:29.472534 sudo[1673]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 13 08:27:29.473893 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 13 08:27:29.484951 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 13 08:27:29.845692 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 13 08:27:29.861079 (kubelet)[1689]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 13 08:27:29.999452 kubelet[1689]: E1113 08:27:29.999322 1689 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 13 08:27:30.006125 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 13 08:27:30.006382 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 13 08:27:30.316685 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 13 08:27:30.331459 (dockerd)[1708]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 13 08:27:30.932542 dockerd[1708]: time="2024-11-13T08:27:30.932178474Z" level=info msg="Starting up" Nov 13 08:27:31.223363 dockerd[1708]: time="2024-11-13T08:27:31.223056786Z" level=info msg="Loading containers: start." Nov 13 08:27:31.520474 kernel: Initializing XFRM netlink socket Nov 13 08:27:31.570587 systemd-timesyncd[1342]: Network configuration changed, trying to establish connection. Nov 13 08:27:32.181183 systemd-resolved[1321]: Clock change detected. Flushing caches. Nov 13 08:27:32.181648 systemd-timesyncd[1342]: Contacted time server 173.73.96.68:123 (2.flatcar.pool.ntp.org). Nov 13 08:27:32.181750 systemd-timesyncd[1342]: Initial clock synchronization to Wed 2024-11-13 08:27:32.181084 UTC. Nov 13 08:27:32.200082 systemd-networkd[1375]: docker0: Link UP Nov 13 08:27:32.256724 dockerd[1708]: time="2024-11-13T08:27:32.256384031Z" level=info msg="Loading containers: done." 
Nov 13 08:27:32.293891 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2629381844-merged.mount: Deactivated successfully. Nov 13 08:27:32.335816 dockerd[1708]: time="2024-11-13T08:27:32.335684912Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 13 08:27:32.336142 dockerd[1708]: time="2024-11-13T08:27:32.335838351Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Nov 13 08:27:32.336203 dockerd[1708]: time="2024-11-13T08:27:32.336149119Z" level=info msg="Daemon has completed initialization" Nov 13 08:27:32.445896 dockerd[1708]: time="2024-11-13T08:27:32.445697860Z" level=info msg="API listen on /run/docker.sock" Nov 13 08:27:32.446315 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 13 08:27:33.902081 containerd[1479]: time="2024-11-13T08:27:33.898456507Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.6\"" Nov 13 08:27:34.722830 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3053812092.mount: Deactivated successfully. 
Nov 13 08:27:37.230235 containerd[1479]: time="2024-11-13T08:27:37.230135642Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 08:27:37.234299 containerd[1479]: time="2024-11-13T08:27:37.234009217Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.6: active requests=0, bytes read=32676443" Nov 13 08:27:37.236761 containerd[1479]: time="2024-11-13T08:27:37.236642547Z" level=info msg="ImageCreate event name:\"sha256:a247bfa6152e770cd36ef6fe2a8831429eb43da1fd506c30b12af93f032ee849\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 08:27:37.244640 containerd[1479]: time="2024-11-13T08:27:37.244514038Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:3a820898379831ecff7cf4ce4954bb7a6505988eefcef146fd1ee2f56a01cdbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 08:27:37.246727 containerd[1479]: time="2024-11-13T08:27:37.245838152Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.6\" with image id \"sha256:a247bfa6152e770cd36ef6fe2a8831429eb43da1fd506c30b12af93f032ee849\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:3a820898379831ecff7cf4ce4954bb7a6505988eefcef146fd1ee2f56a01cdbb\", size \"32673243\" in 3.347319079s" Nov 13 08:27:37.246727 containerd[1479]: time="2024-11-13T08:27:37.245947890Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.6\" returns image reference \"sha256:a247bfa6152e770cd36ef6fe2a8831429eb43da1fd506c30b12af93f032ee849\"" Nov 13 08:27:37.287680 containerd[1479]: time="2024-11-13T08:27:37.287246070Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.6\"" Nov 13 08:27:40.210028 containerd[1479]: time="2024-11-13T08:27:40.209873638Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.6\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 08:27:40.212031 containerd[1479]: time="2024-11-13T08:27:40.211488619Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.6: active requests=0, bytes read=29605796" Nov 13 08:27:40.214518 containerd[1479]: time="2024-11-13T08:27:40.213693294Z" level=info msg="ImageCreate event name:\"sha256:382949f9bfdd9da8bf555d18adac4eb0dba8264b7e3b5963e6a26ef8d412477c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 08:27:40.221154 containerd[1479]: time="2024-11-13T08:27:40.221047896Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a412c3cdf35d39c8d37748b457a486faae7c5f2ee1d1ba2059c709bc5534686\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 08:27:40.224638 containerd[1479]: time="2024-11-13T08:27:40.224072405Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.6\" with image id \"sha256:382949f9bfdd9da8bf555d18adac4eb0dba8264b7e3b5963e6a26ef8d412477c\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a412c3cdf35d39c8d37748b457a486faae7c5f2ee1d1ba2059c709bc5534686\", size \"31051162\" in 2.936744483s" Nov 13 08:27:40.224638 containerd[1479]: time="2024-11-13T08:27:40.224158499Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.6\" returns image reference \"sha256:382949f9bfdd9da8bf555d18adac4eb0dba8264b7e3b5963e6a26ef8d412477c\"" Nov 13 08:27:40.277759 containerd[1479]: time="2024-11-13T08:27:40.277655743Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.6\"" Nov 13 08:27:40.280727 systemd-resolved[1321]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Nov 13 08:27:40.778230 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Nov 13 08:27:40.792379 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 13 08:27:41.022197 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 13 08:27:41.036755 (kubelet)[1983]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 13 08:27:41.113849 kubelet[1983]: E1113 08:27:41.113678 1983 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 13 08:27:41.118350 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 13 08:27:41.119124 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 13 08:27:42.298313 containerd[1479]: time="2024-11-13T08:27:42.298160022Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 08:27:42.300322 containerd[1479]: time="2024-11-13T08:27:42.300236220Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.6: active requests=0, bytes read=17784244" Nov 13 08:27:42.302965 containerd[1479]: time="2024-11-13T08:27:42.302096249Z" level=info msg="ImageCreate event name:\"sha256:ad5858afd532223324ff223396490f5fd8228323963b424ad7868407bd4ef1fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 08:27:42.307642 containerd[1479]: time="2024-11-13T08:27:42.307561751Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:948395c284d82c985f2dc0d99b5b51b3ca85eba97003babbc73834e0ab91fa59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 08:27:42.310050 containerd[1479]: time="2024-11-13T08:27:42.309965050Z" level=info msg="Pulled 
image \"registry.k8s.io/kube-scheduler:v1.30.6\" with image id \"sha256:ad5858afd532223324ff223396490f5fd8228323963b424ad7868407bd4ef1fb\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:948395c284d82c985f2dc0d99b5b51b3ca85eba97003babbc73834e0ab91fa59\", size \"19229628\" in 2.032236403s" Nov 13 08:27:42.310050 containerd[1479]: time="2024-11-13T08:27:42.310042205Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.6\" returns image reference \"sha256:ad5858afd532223324ff223396490f5fd8228323963b424ad7868407bd4ef1fb\"" Nov 13 08:27:42.363573 containerd[1479]: time="2024-11-13T08:27:42.363446769Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.6\"" Nov 13 08:27:43.353522 systemd-resolved[1321]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Nov 13 08:27:43.927145 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4230917049.mount: Deactivated successfully. Nov 13 08:27:44.716736 containerd[1479]: time="2024-11-13T08:27:44.716522456Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 08:27:44.718352 containerd[1479]: time="2024-11-13T08:27:44.718269415Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.6: active requests=0, bytes read=29054624" Nov 13 08:27:44.720019 containerd[1479]: time="2024-11-13T08:27:44.719951268Z" level=info msg="ImageCreate event name:\"sha256:2cce8902ed3ccdc78ecdb02734bd9ba32e2c7b44fc221663cf9ece2a179ff6a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 08:27:44.728080 containerd[1479]: time="2024-11-13T08:27:44.727799281Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:aaf790f611159ab21713affc2c5676f742c9b31db26dd2e61e46c4257dd11b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 08:27:44.728984 containerd[1479]: 
time="2024-11-13T08:27:44.728737389Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.6\" with image id \"sha256:2cce8902ed3ccdc78ecdb02734bd9ba32e2c7b44fc221663cf9ece2a179ff6a6\", repo tag \"registry.k8s.io/kube-proxy:v1.30.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:aaf790f611159ab21713affc2c5676f742c9b31db26dd2e61e46c4257dd11b76\", size \"29053643\" in 2.365167734s" Nov 13 08:27:44.728984 containerd[1479]: time="2024-11-13T08:27:44.728788976Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.6\" returns image reference \"sha256:2cce8902ed3ccdc78ecdb02734bd9ba32e2c7b44fc221663cf9ece2a179ff6a6\"" Nov 13 08:27:44.762753 containerd[1479]: time="2024-11-13T08:27:44.762363512Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Nov 13 08:27:45.390297 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3892851159.mount: Deactivated successfully. Nov 13 08:27:46.720761 containerd[1479]: time="2024-11-13T08:27:46.720628398Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 08:27:46.722583 containerd[1479]: time="2024-11-13T08:27:46.722493384Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Nov 13 08:27:46.724007 containerd[1479]: time="2024-11-13T08:27:46.723795277Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 08:27:46.731973 containerd[1479]: time="2024-11-13T08:27:46.730062645Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 08:27:46.732470 containerd[1479]: time="2024-11-13T08:27:46.732402784Z" level=info msg="Pulled image 
\"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.969977617s" Nov 13 08:27:46.732641 containerd[1479]: time="2024-11-13T08:27:46.732612418Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Nov 13 08:27:46.774809 containerd[1479]: time="2024-11-13T08:27:46.774731801Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Nov 13 08:27:46.777360 systemd-resolved[1321]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. Nov 13 08:27:47.395860 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount199369845.mount: Deactivated successfully. Nov 13 08:27:47.405966 containerd[1479]: time="2024-11-13T08:27:47.404991534Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 08:27:47.406401 containerd[1479]: time="2024-11-13T08:27:47.406347284Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Nov 13 08:27:47.407394 containerd[1479]: time="2024-11-13T08:27:47.407338972Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 08:27:47.411260 containerd[1479]: time="2024-11-13T08:27:47.411193785Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 08:27:47.412587 containerd[1479]: time="2024-11-13T08:27:47.412515899Z" level=info msg="Pulled 
image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 637.715542ms" Nov 13 08:27:47.412793 containerd[1479]: time="2024-11-13T08:27:47.412754154Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Nov 13 08:27:47.450644 containerd[1479]: time="2024-11-13T08:27:47.450576734Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Nov 13 08:27:48.089024 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1254102362.mount: Deactivated successfully. Nov 13 08:27:50.853575 containerd[1479]: time="2024-11-13T08:27:50.853414618Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 08:27:50.856451 containerd[1479]: time="2024-11-13T08:27:50.856096752Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Nov 13 08:27:50.857967 containerd[1479]: time="2024-11-13T08:27:50.857591436Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 08:27:50.863978 containerd[1479]: time="2024-11-13T08:27:50.863866350Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 08:27:50.868570 containerd[1479]: time="2024-11-13T08:27:50.867554272Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag 
\"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.416916584s" Nov 13 08:27:50.868570 containerd[1479]: time="2024-11-13T08:27:50.867717907Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Nov 13 08:27:51.134498 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 13 08:27:51.142387 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 13 08:27:51.428466 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 13 08:27:51.436712 (kubelet)[2139]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 13 08:27:51.561945 kubelet[2139]: E1113 08:27:51.561829 2139 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 13 08:27:51.566410 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 13 08:27:51.566688 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 13 08:27:55.447240 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 13 08:27:55.461159 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 13 08:27:55.490684 systemd[1]: Reloading requested from client PID 2194 ('systemctl') (unit session-9.scope)... Nov 13 08:27:55.490709 systemd[1]: Reloading... Nov 13 08:27:55.650479 zram_generator::config[2231]: No configuration found. 
Nov 13 08:27:55.896344 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 13 08:27:55.992019 systemd[1]: Reloading finished in 500 ms. Nov 13 08:27:56.058129 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 13 08:27:56.058222 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 13 08:27:56.058559 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 13 08:27:56.065449 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 13 08:27:56.289264 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 13 08:27:56.300582 (kubelet)[2289]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 13 08:27:56.368331 kubelet[2289]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 13 08:27:56.368331 kubelet[2289]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 13 08:27:56.368331 kubelet[2289]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 13 08:27:56.368849 kubelet[2289]: I1113 08:27:56.368426 2289 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 13 08:27:57.044383 kubelet[2289]: I1113 08:27:57.044303 2289 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Nov 13 08:27:57.044637 kubelet[2289]: I1113 08:27:57.044619 2289 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 13 08:27:57.045150 kubelet[2289]: I1113 08:27:57.045124 2289 server.go:927] "Client rotation is on, will bootstrap in background" Nov 13 08:27:57.069959 kubelet[2289]: I1113 08:27:57.069867 2289 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 13 08:27:57.070905 kubelet[2289]: E1113 08:27:57.070864 2289 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://209.38.128.242:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 209.38.128.242:6443: connect: connection refused Nov 13 08:27:57.091081 kubelet[2289]: I1113 08:27:57.091027 2289 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 13 08:27:57.092331 kubelet[2289]: I1113 08:27:57.091767 2289 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 13 08:27:57.092331 kubelet[2289]: I1113 08:27:57.091842 2289 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4152.0.0-d-03c8fd271e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Nov 13 08:27:57.093093 kubelet[2289]: I1113 08:27:57.093054 2289 topology_manager.go:138] "Creating topology manager with none policy" Nov 
13 08:27:57.093292 kubelet[2289]: I1113 08:27:57.093272 2289 container_manager_linux.go:301] "Creating device plugin manager" Nov 13 08:27:57.093856 kubelet[2289]: I1113 08:27:57.093591 2289 state_mem.go:36] "Initialized new in-memory state store" Nov 13 08:27:57.094702 kubelet[2289]: I1113 08:27:57.094642 2289 kubelet.go:400] "Attempting to sync node with API server" Nov 13 08:27:57.094702 kubelet[2289]: I1113 08:27:57.094680 2289 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 13 08:27:57.094885 kubelet[2289]: I1113 08:27:57.094734 2289 kubelet.go:312] "Adding apiserver pod source" Nov 13 08:27:57.094885 kubelet[2289]: I1113 08:27:57.094762 2289 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 13 08:27:57.099419 kubelet[2289]: W1113 08:27:57.099323 2289 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://209.38.128.242:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 209.38.128.242:6443: connect: connection refused Nov 13 08:27:57.099419 kubelet[2289]: E1113 08:27:57.099428 2289 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://209.38.128.242:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 209.38.128.242:6443: connect: connection refused Nov 13 08:27:57.099683 kubelet[2289]: I1113 08:27:57.099648 2289 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Nov 13 08:27:57.103255 kubelet[2289]: I1113 08:27:57.102263 2289 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 13 08:27:57.103255 kubelet[2289]: W1113 08:27:57.102406 2289 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Nov 13 08:27:57.103517 kubelet[2289]: I1113 08:27:57.103455 2289 server.go:1264] "Started kubelet" Nov 13 08:27:57.111379 kubelet[2289]: W1113 08:27:57.111280 2289 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://209.38.128.242:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.0.0-d-03c8fd271e&limit=500&resourceVersion=0": dial tcp 209.38.128.242:6443: connect: connection refused Nov 13 08:27:57.111661 kubelet[2289]: E1113 08:27:57.111636 2289 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://209.38.128.242:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.0.0-d-03c8fd271e&limit=500&resourceVersion=0": dial tcp 209.38.128.242:6443: connect: connection refused Nov 13 08:27:57.112026 kubelet[2289]: I1113 08:27:57.111971 2289 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Nov 13 08:27:57.112420 kubelet[2289]: I1113 08:27:57.112345 2289 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 13 08:27:57.113002 kubelet[2289]: I1113 08:27:57.112964 2289 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 13 08:27:57.113461 kubelet[2289]: E1113 08:27:57.113273 2289 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://209.38.128.242:6443/api/v1/namespaces/default/events\": dial tcp 209.38.128.242:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4152.0.0-d-03c8fd271e.180779c6f124456c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152.0.0-d-03c8fd271e,UID:ci-4152.0.0-d-03c8fd271e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152.0.0-d-03c8fd271e,},FirstTimestamp:2024-11-13 08:27:57.103416684 +0000 UTC 
m=+0.797726755,LastTimestamp:2024-11-13 08:27:57.103416684 +0000 UTC m=+0.797726755,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152.0.0-d-03c8fd271e,}" Nov 13 08:27:57.114527 kubelet[2289]: I1113 08:27:57.114449 2289 server.go:455] "Adding debug handlers to kubelet server" Nov 13 08:27:57.119509 kubelet[2289]: I1113 08:27:57.116713 2289 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 13 08:27:57.121807 kubelet[2289]: I1113 08:27:57.121749 2289 volume_manager.go:291] "Starting Kubelet Volume Manager" Nov 13 08:27:57.124519 kubelet[2289]: I1113 08:27:57.124489 2289 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Nov 13 08:27:57.124759 kubelet[2289]: I1113 08:27:57.124748 2289 reconciler.go:26] "Reconciler: start to sync state" Nov 13 08:27:57.125415 kubelet[2289]: W1113 08:27:57.125332 2289 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://209.38.128.242:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 209.38.128.242:6443: connect: connection refused Nov 13 08:27:57.125692 kubelet[2289]: E1113 08:27:57.125671 2289 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://209.38.128.242:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 209.38.128.242:6443: connect: connection refused Nov 13 08:27:57.127164 kubelet[2289]: E1113 08:27:57.127137 2289 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 13 08:27:57.127607 kubelet[2289]: E1113 08:27:57.127591 2289 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152.0.0-d-03c8fd271e\" not found" Nov 13 08:27:57.128213 kubelet[2289]: E1113 08:27:57.128138 2289 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://209.38.128.242:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.0.0-d-03c8fd271e?timeout=10s\": dial tcp 209.38.128.242:6443: connect: connection refused" interval="200ms" Nov 13 08:27:57.129171 kubelet[2289]: I1113 08:27:57.129150 2289 factory.go:221] Registration of the systemd container factory successfully Nov 13 08:27:57.129641 kubelet[2289]: I1113 08:27:57.129613 2289 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 13 08:27:57.131525 kubelet[2289]: I1113 08:27:57.131490 2289 factory.go:221] Registration of the containerd container factory successfully Nov 13 08:27:57.160726 kubelet[2289]: I1113 08:27:57.160651 2289 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 13 08:27:57.165037 kubelet[2289]: I1113 08:27:57.164973 2289 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 13 08:27:57.165037 kubelet[2289]: I1113 08:27:57.165038 2289 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 13 08:27:57.165273 kubelet[2289]: I1113 08:27:57.165069 2289 kubelet.go:2337] "Starting kubelet main sync loop" Nov 13 08:27:57.165273 kubelet[2289]: E1113 08:27:57.165149 2289 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 13 08:27:57.167956 kubelet[2289]: W1113 08:27:57.167131 2289 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://209.38.128.242:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 209.38.128.242:6443: connect: connection refused Nov 13 08:27:57.167956 kubelet[2289]: E1113 08:27:57.167813 2289 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://209.38.128.242:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 209.38.128.242:6443: connect: connection refused Nov 13 08:27:57.172050 kubelet[2289]: I1113 08:27:57.170520 2289 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 13 08:27:57.172050 kubelet[2289]: I1113 08:27:57.170539 2289 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 13 08:27:57.172050 kubelet[2289]: I1113 08:27:57.170562 2289 state_mem.go:36] "Initialized new in-memory state store" Nov 13 08:27:57.173951 kubelet[2289]: I1113 08:27:57.173896 2289 policy_none.go:49] "None policy: Start" Nov 13 08:27:57.175043 kubelet[2289]: I1113 08:27:57.174936 2289 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 13 08:27:57.175043 kubelet[2289]: I1113 08:27:57.175019 2289 state_mem.go:35] "Initializing new in-memory state store" Nov 13 08:27:57.191683 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Nov 13 08:27:57.205473 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 13 08:27:57.211341 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 13 08:27:57.221465 kubelet[2289]: I1113 08:27:57.221417 2289 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 13 08:27:57.221738 kubelet[2289]: I1113 08:27:57.221691 2289 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 13 08:27:57.221859 kubelet[2289]: I1113 08:27:57.221844 2289 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 13 08:27:57.226019 kubelet[2289]: E1113 08:27:57.225973 2289 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4152.0.0-d-03c8fd271e\" not found" Nov 13 08:27:57.229069 kubelet[2289]: I1113 08:27:57.229006 2289 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.0.0-d-03c8fd271e" Nov 13 08:27:57.229551 kubelet[2289]: E1113 08:27:57.229492 2289 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://209.38.128.242:6443/api/v1/nodes\": dial tcp 209.38.128.242:6443: connect: connection refused" node="ci-4152.0.0-d-03c8fd271e" Nov 13 08:27:57.266189 kubelet[2289]: I1113 08:27:57.266076 2289 topology_manager.go:215] "Topology Admit Handler" podUID="61611bb57d653ac3fccafe3fd856e728" podNamespace="kube-system" podName="kube-apiserver-ci-4152.0.0-d-03c8fd271e" Nov 13 08:27:57.268408 kubelet[2289]: I1113 08:27:57.267783 2289 topology_manager.go:215] "Topology Admit Handler" podUID="c1dcd674b00bce064adc934e1d963fc2" podNamespace="kube-system" podName="kube-controller-manager-ci-4152.0.0-d-03c8fd271e" Nov 13 08:27:57.269175 kubelet[2289]: I1113 08:27:57.269140 2289 topology_manager.go:215] "Topology Admit Handler" 
podUID="a6d4adf6aedc538b7914a0c6d8d4393f" podNamespace="kube-system" podName="kube-scheduler-ci-4152.0.0-d-03c8fd271e" Nov 13 08:27:57.280377 systemd[1]: Created slice kubepods-burstable-pod61611bb57d653ac3fccafe3fd856e728.slice - libcontainer container kubepods-burstable-pod61611bb57d653ac3fccafe3fd856e728.slice. Nov 13 08:27:57.296790 systemd[1]: Created slice kubepods-burstable-podc1dcd674b00bce064adc934e1d963fc2.slice - libcontainer container kubepods-burstable-podc1dcd674b00bce064adc934e1d963fc2.slice. Nov 13 08:27:57.316060 systemd[1]: Created slice kubepods-burstable-poda6d4adf6aedc538b7914a0c6d8d4393f.slice - libcontainer container kubepods-burstable-poda6d4adf6aedc538b7914a0c6d8d4393f.slice. Nov 13 08:27:57.325425 kubelet[2289]: I1113 08:27:57.325219 2289 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c1dcd674b00bce064adc934e1d963fc2-ca-certs\") pod \"kube-controller-manager-ci-4152.0.0-d-03c8fd271e\" (UID: \"c1dcd674b00bce064adc934e1d963fc2\") " pod="kube-system/kube-controller-manager-ci-4152.0.0-d-03c8fd271e" Nov 13 08:27:57.325425 kubelet[2289]: I1113 08:27:57.325273 2289 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/61611bb57d653ac3fccafe3fd856e728-ca-certs\") pod \"kube-apiserver-ci-4152.0.0-d-03c8fd271e\" (UID: \"61611bb57d653ac3fccafe3fd856e728\") " pod="kube-system/kube-apiserver-ci-4152.0.0-d-03c8fd271e" Nov 13 08:27:57.325425 kubelet[2289]: I1113 08:27:57.325293 2289 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/61611bb57d653ac3fccafe3fd856e728-k8s-certs\") pod \"kube-apiserver-ci-4152.0.0-d-03c8fd271e\" (UID: \"61611bb57d653ac3fccafe3fd856e728\") " pod="kube-system/kube-apiserver-ci-4152.0.0-d-03c8fd271e" Nov 13 08:27:57.325425 kubelet[2289]: 
I1113 08:27:57.325310 2289 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/61611bb57d653ac3fccafe3fd856e728-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152.0.0-d-03c8fd271e\" (UID: \"61611bb57d653ac3fccafe3fd856e728\") " pod="kube-system/kube-apiserver-ci-4152.0.0-d-03c8fd271e" Nov 13 08:27:57.329795 kubelet[2289]: E1113 08:27:57.329708 2289 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://209.38.128.242:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.0.0-d-03c8fd271e?timeout=10s\": dial tcp 209.38.128.242:6443: connect: connection refused" interval="400ms" Nov 13 08:27:57.426440 kubelet[2289]: I1113 08:27:57.426293 2289 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c1dcd674b00bce064adc934e1d963fc2-kubeconfig\") pod \"kube-controller-manager-ci-4152.0.0-d-03c8fd271e\" (UID: \"c1dcd674b00bce064adc934e1d963fc2\") " pod="kube-system/kube-controller-manager-ci-4152.0.0-d-03c8fd271e" Nov 13 08:27:57.426440 kubelet[2289]: I1113 08:27:57.426450 2289 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c1dcd674b00bce064adc934e1d963fc2-flexvolume-dir\") pod \"kube-controller-manager-ci-4152.0.0-d-03c8fd271e\" (UID: \"c1dcd674b00bce064adc934e1d963fc2\") " pod="kube-system/kube-controller-manager-ci-4152.0.0-d-03c8fd271e" Nov 13 08:27:57.427198 kubelet[2289]: I1113 08:27:57.426479 2289 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c1dcd674b00bce064adc934e1d963fc2-k8s-certs\") pod \"kube-controller-manager-ci-4152.0.0-d-03c8fd271e\" (UID: \"c1dcd674b00bce064adc934e1d963fc2\") " 
pod="kube-system/kube-controller-manager-ci-4152.0.0-d-03c8fd271e" Nov 13 08:27:57.427198 kubelet[2289]: I1113 08:27:57.426511 2289 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c1dcd674b00bce064adc934e1d963fc2-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152.0.0-d-03c8fd271e\" (UID: \"c1dcd674b00bce064adc934e1d963fc2\") " pod="kube-system/kube-controller-manager-ci-4152.0.0-d-03c8fd271e" Nov 13 08:27:57.427198 kubelet[2289]: I1113 08:27:57.426542 2289 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a6d4adf6aedc538b7914a0c6d8d4393f-kubeconfig\") pod \"kube-scheduler-ci-4152.0.0-d-03c8fd271e\" (UID: \"a6d4adf6aedc538b7914a0c6d8d4393f\") " pod="kube-system/kube-scheduler-ci-4152.0.0-d-03c8fd271e" Nov 13 08:27:57.432213 kubelet[2289]: I1113 08:27:57.432165 2289 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.0.0-d-03c8fd271e" Nov 13 08:27:57.433014 kubelet[2289]: E1113 08:27:57.432905 2289 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://209.38.128.242:6443/api/v1/nodes\": dial tcp 209.38.128.242:6443: connect: connection refused" node="ci-4152.0.0-d-03c8fd271e" Nov 13 08:27:57.593561 kubelet[2289]: E1113 08:27:57.593377 2289 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:27:57.594701 containerd[1479]: time="2024-11-13T08:27:57.594254823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152.0.0-d-03c8fd271e,Uid:61611bb57d653ac3fccafe3fd856e728,Namespace:kube-system,Attempt:0,}" Nov 13 08:27:57.611848 kubelet[2289]: E1113 08:27:57.611786 2289 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 13 08:27:57.613162 containerd[1479]: time="2024-11-13T08:27:57.613080911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152.0.0-d-03c8fd271e,Uid:c1dcd674b00bce064adc934e1d963fc2,Namespace:kube-system,Attempt:0,}"
Nov 13 08:27:57.620143 kubelet[2289]: E1113 08:27:57.620044 2289 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 13 08:27:57.621170 containerd[1479]: time="2024-11-13T08:27:57.620720682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152.0.0-d-03c8fd271e,Uid:a6d4adf6aedc538b7914a0c6d8d4393f,Namespace:kube-system,Attempt:0,}"
Nov 13 08:27:57.731288 kubelet[2289]: E1113 08:27:57.731182 2289 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://209.38.128.242:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.0.0-d-03c8fd271e?timeout=10s\": dial tcp 209.38.128.242:6443: connect: connection refused" interval="800ms"
Nov 13 08:27:57.835038 kubelet[2289]: I1113 08:27:57.834980 2289 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.0.0-d-03c8fd271e"
Nov 13 08:27:57.835552 kubelet[2289]: E1113 08:27:57.835506 2289 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://209.38.128.242:6443/api/v1/nodes\": dial tcp 209.38.128.242:6443: connect: connection refused" node="ci-4152.0.0-d-03c8fd271e"
Nov 13 08:27:58.100345 kubelet[2289]: W1113 08:27:58.100121 2289 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://209.38.128.242:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 209.38.128.242:6443: connect: connection refused
Nov 13 08:27:58.100345 kubelet[2289]: E1113 08:27:58.100197 2289 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://209.38.128.242:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 209.38.128.242:6443: connect: connection refused
Nov 13 08:27:58.349520 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2967579292.mount: Deactivated successfully.
Nov 13 08:27:58.353546 kubelet[2289]: W1113 08:27:58.353144 2289 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://209.38.128.242:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 209.38.128.242:6443: connect: connection refused
Nov 13 08:27:58.353546 kubelet[2289]: E1113 08:27:58.353250 2289 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://209.38.128.242:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 209.38.128.242:6443: connect: connection refused
Nov 13 08:27:58.363339 containerd[1479]: time="2024-11-13T08:27:58.362984093Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 13 08:27:58.366001 containerd[1479]: time="2024-11-13T08:27:58.365860305Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 13 08:27:58.367836 containerd[1479]: time="2024-11-13T08:27:58.367694218Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Nov 13 08:27:58.368725 containerd[1479]: time="2024-11-13T08:27:58.368653589Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Nov 13 08:27:58.371145 containerd[1479]: time="2024-11-13T08:27:58.370876818Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 13 08:27:58.372205 containerd[1479]: time="2024-11-13T08:27:58.372088107Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Nov 13 08:27:58.374853 containerd[1479]: time="2024-11-13T08:27:58.374737191Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 13 08:27:58.376676 containerd[1479]: time="2024-11-13T08:27:58.376566166Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 13 08:27:58.377876 containerd[1479]: time="2024-11-13T08:27:58.377623027Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 764.406083ms"
Nov 13 08:27:58.380770 containerd[1479]: time="2024-11-13T08:27:58.380677483Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 786.279096ms"
Nov 13 08:27:58.381970 containerd[1479]: time="2024-11-13T08:27:58.381605768Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 760.769354ms"
Nov 13 08:27:58.533023 kubelet[2289]: E1113 08:27:58.532901 2289 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://209.38.128.242:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.0.0-d-03c8fd271e?timeout=10s\": dial tcp 209.38.128.242:6443: connect: connection refused" interval="1.6s"
Nov 13 08:27:58.572884 kubelet[2289]: W1113 08:27:58.572767 2289 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://209.38.128.242:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.0.0-d-03c8fd271e&limit=500&resourceVersion=0": dial tcp 209.38.128.242:6443: connect: connection refused
Nov 13 08:27:58.572884 kubelet[2289]: E1113 08:27:58.572849 2289 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://209.38.128.242:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.0.0-d-03c8fd271e&limit=500&resourceVersion=0": dial tcp 209.38.128.242:6443: connect: connection refused
Nov 13 08:27:58.604801 containerd[1479]: time="2024-11-13T08:27:58.600373842Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 13 08:27:58.604801 containerd[1479]: time="2024-11-13T08:27:58.603821206Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 13 08:27:58.604801 containerd[1479]: time="2024-11-13T08:27:58.603853067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 13 08:27:58.604801 containerd[1479]: time="2024-11-13T08:27:58.604043043Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 13 08:27:58.607097 kubelet[2289]: W1113 08:27:58.606944 2289 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://209.38.128.242:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 209.38.128.242:6443: connect: connection refused
Nov 13 08:27:58.607097 kubelet[2289]: E1113 08:27:58.607049 2289 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://209.38.128.242:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 209.38.128.242:6443: connect: connection refused
Nov 13 08:27:58.610179 containerd[1479]: time="2024-11-13T08:27:58.609902300Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 13 08:27:58.610179 containerd[1479]: time="2024-11-13T08:27:58.610066686Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 13 08:27:58.611169 containerd[1479]: time="2024-11-13T08:27:58.611057482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 13 08:27:58.612012 containerd[1479]: time="2024-11-13T08:27:58.611847173Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 13 08:27:58.612247 containerd[1479]: time="2024-11-13T08:27:58.611977056Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 13 08:27:58.612412 containerd[1479]: time="2024-11-13T08:27:58.612360412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 13 08:27:58.614103 containerd[1479]: time="2024-11-13T08:27:58.613871256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 13 08:27:58.614656 containerd[1479]: time="2024-11-13T08:27:58.614473268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 13 08:27:58.640231 kubelet[2289]: I1113 08:27:58.640180 2289 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.0.0-d-03c8fd271e"
Nov 13 08:27:58.645710 kubelet[2289]: E1113 08:27:58.644221 2289 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://209.38.128.242:6443/api/v1/nodes\": dial tcp 209.38.128.242:6443: connect: connection refused" node="ci-4152.0.0-d-03c8fd271e"
Nov 13 08:27:58.649331 systemd[1]: Started cri-containerd-13066f8de4dfa7cb06fee38e7395222d501fafbc32fca1d581bc82ac00fd590a.scope - libcontainer container 13066f8de4dfa7cb06fee38e7395222d501fafbc32fca1d581bc82ac00fd590a.
Nov 13 08:27:58.669643 systemd[1]: Started cri-containerd-9140280210766b1c58a5d1f50e81bb3a4516acc4bb8901288fe374d0ab23b34c.scope - libcontainer container 9140280210766b1c58a5d1f50e81bb3a4516acc4bb8901288fe374d0ab23b34c.
Nov 13 08:27:58.681472 systemd[1]: Started cri-containerd-a0d9272cea1ff57ae5010442941787383900041ef1c9a7c8450e24e3d5b93fe1.scope - libcontainer container a0d9272cea1ff57ae5010442941787383900041ef1c9a7c8450e24e3d5b93fe1.
Nov 13 08:27:58.781797 containerd[1479]: time="2024-11-13T08:27:58.781695208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152.0.0-d-03c8fd271e,Uid:c1dcd674b00bce064adc934e1d963fc2,Namespace:kube-system,Attempt:0,} returns sandbox id \"13066f8de4dfa7cb06fee38e7395222d501fafbc32fca1d581bc82ac00fd590a\""
Nov 13 08:27:58.785630 kubelet[2289]: E1113 08:27:58.785160 2289 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 13 08:27:58.791205 containerd[1479]: time="2024-11-13T08:27:58.791135720Z" level=info msg="CreateContainer within sandbox \"13066f8de4dfa7cb06fee38e7395222d501fafbc32fca1d581bc82ac00fd590a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Nov 13 08:27:58.802571 containerd[1479]: time="2024-11-13T08:27:58.802517038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152.0.0-d-03c8fd271e,Uid:61611bb57d653ac3fccafe3fd856e728,Namespace:kube-system,Attempt:0,} returns sandbox id \"9140280210766b1c58a5d1f50e81bb3a4516acc4bb8901288fe374d0ab23b34c\""
Nov 13 08:27:58.804315 kubelet[2289]: E1113 08:27:58.804256 2289 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 13 08:27:58.811304 containerd[1479]: time="2024-11-13T08:27:58.811211191Z" level=info msg="CreateContainer within sandbox \"9140280210766b1c58a5d1f50e81bb3a4516acc4bb8901288fe374d0ab23b34c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Nov 13 08:27:58.824409 containerd[1479]: time="2024-11-13T08:27:58.824176296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152.0.0-d-03c8fd271e,Uid:a6d4adf6aedc538b7914a0c6d8d4393f,Namespace:kube-system,Attempt:0,} returns sandbox id \"a0d9272cea1ff57ae5010442941787383900041ef1c9a7c8450e24e3d5b93fe1\""
Nov 13 08:27:58.827880 kubelet[2289]: E1113 08:27:58.827675 2289 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 13 08:27:58.833629 containerd[1479]: time="2024-11-13T08:27:58.832550580Z" level=info msg="CreateContainer within sandbox \"a0d9272cea1ff57ae5010442941787383900041ef1c9a7c8450e24e3d5b93fe1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Nov 13 08:27:58.844678 containerd[1479]: time="2024-11-13T08:27:58.844579821Z" level=info msg="CreateContainer within sandbox \"13066f8de4dfa7cb06fee38e7395222d501fafbc32fca1d581bc82ac00fd590a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7bd5c6153615042df2c307cab7d52d61d479b737dd61ce986f1d5dba0eba5a2f\""
Nov 13 08:27:58.846388 containerd[1479]: time="2024-11-13T08:27:58.846322214Z" level=info msg="StartContainer for \"7bd5c6153615042df2c307cab7d52d61d479b737dd61ce986f1d5dba0eba5a2f\""
Nov 13 08:27:58.866125 containerd[1479]: time="2024-11-13T08:27:58.865169262Z" level=info msg="CreateContainer within sandbox \"9140280210766b1c58a5d1f50e81bb3a4516acc4bb8901288fe374d0ab23b34c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ff24802df1da1e3996b5e3281ba1a58913ed535428a80d5bd9d7bc05ae168ee4\""
Nov 13 08:27:58.868998 containerd[1479]: time="2024-11-13T08:27:58.867882058Z" level=info msg="StartContainer for \"ff24802df1da1e3996b5e3281ba1a58913ed535428a80d5bd9d7bc05ae168ee4\""
Nov 13 08:27:58.880080 containerd[1479]: time="2024-11-13T08:27:58.879880680Z" level=info msg="CreateContainer within sandbox \"a0d9272cea1ff57ae5010442941787383900041ef1c9a7c8450e24e3d5b93fe1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4cd4866ee1d8a9b426d91d0f208f70b9dc0e21968fda3a760a4826f5cedc7b58\""
Nov 13 08:27:58.881028 containerd[1479]: time="2024-11-13T08:27:58.880896029Z" level=info msg="StartContainer for \"4cd4866ee1d8a9b426d91d0f208f70b9dc0e21968fda3a760a4826f5cedc7b58\""
Nov 13 08:27:58.902204 systemd[1]: Started cri-containerd-7bd5c6153615042df2c307cab7d52d61d479b737dd61ce986f1d5dba0eba5a2f.scope - libcontainer container 7bd5c6153615042df2c307cab7d52d61d479b737dd61ce986f1d5dba0eba5a2f.
Nov 13 08:27:58.946752 systemd[1]: Started cri-containerd-ff24802df1da1e3996b5e3281ba1a58913ed535428a80d5bd9d7bc05ae168ee4.scope - libcontainer container ff24802df1da1e3996b5e3281ba1a58913ed535428a80d5bd9d7bc05ae168ee4.
Nov 13 08:27:58.977260 systemd[1]: Started cri-containerd-4cd4866ee1d8a9b426d91d0f208f70b9dc0e21968fda3a760a4826f5cedc7b58.scope - libcontainer container 4cd4866ee1d8a9b426d91d0f208f70b9dc0e21968fda3a760a4826f5cedc7b58.
Nov 13 08:27:59.058553 containerd[1479]: time="2024-11-13T08:27:59.058429082Z" level=info msg="StartContainer for \"7bd5c6153615042df2c307cab7d52d61d479b737dd61ce986f1d5dba0eba5a2f\" returns successfully"
Nov 13 08:27:59.062026 containerd[1479]: time="2024-11-13T08:27:59.061830666Z" level=info msg="StartContainer for \"ff24802df1da1e3996b5e3281ba1a58913ed535428a80d5bd9d7bc05ae168ee4\" returns successfully"
Nov 13 08:27:59.096709 containerd[1479]: time="2024-11-13T08:27:59.096644853Z" level=info msg="StartContainer for \"4cd4866ee1d8a9b426d91d0f208f70b9dc0e21968fda3a760a4826f5cedc7b58\" returns successfully"
Nov 13 08:27:59.192680 kubelet[2289]: E1113 08:27:59.192431 2289 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 13 08:27:59.194424 kubelet[2289]: E1113 08:27:59.193646 2289 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://209.38.128.242:6443/api/v1/namespaces/default/events\": dial tcp 209.38.128.242:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4152.0.0-d-03c8fd271e.180779c6f124456c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152.0.0-d-03c8fd271e,UID:ci-4152.0.0-d-03c8fd271e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152.0.0-d-03c8fd271e,},FirstTimestamp:2024-11-13 08:27:57.103416684 +0000 UTC m=+0.797726755,LastTimestamp:2024-11-13 08:27:57.103416684 +0000 UTC m=+0.797726755,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152.0.0-d-03c8fd271e,}"
Nov 13 08:27:59.196946 kubelet[2289]: E1113 08:27:59.196742 2289 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 13 08:27:59.203015 kubelet[2289]: E1113 08:27:59.202964 2289 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 13 08:27:59.269114 kubelet[2289]: E1113 08:27:59.268897 2289 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://209.38.128.242:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 209.38.128.242:6443: connect: connection refused
Nov 13 08:28:00.206480 kubelet[2289]: E1113 08:28:00.206407 2289 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 13 08:28:00.249225 kubelet[2289]: I1113 08:28:00.248500 2289 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.0.0-d-03c8fd271e"
Nov 13 08:28:01.375770 kubelet[2289]: I1113 08:28:01.375649 2289 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152.0.0-d-03c8fd271e"
Nov 13 08:28:01.438600 kubelet[2289]: E1113 08:28:01.438489 2289 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152.0.0-d-03c8fd271e\" not found"
Nov 13 08:28:01.507184 kubelet[2289]: E1113 08:28:01.507131 2289 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s"
Nov 13 08:28:01.539617 kubelet[2289]: E1113 08:28:01.539542 2289 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152.0.0-d-03c8fd271e\" not found"
Nov 13 08:28:01.632699 update_engine[1453]: I20241113 08:28:01.632476 1453 update_attempter.cc:509] Updating boot flags...
Nov 13 08:28:01.690062 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2568)
Nov 13 08:28:01.812994 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2570)
Nov 13 08:28:01.899084 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2570)
Nov 13 08:28:02.101080 kubelet[2289]: I1113 08:28:02.101007 2289 apiserver.go:52] "Watching apiserver"
Nov 13 08:28:02.125623 kubelet[2289]: I1113 08:28:02.125529 2289 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Nov 13 08:28:04.456630 kubelet[2289]: W1113 08:28:04.455331 2289 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Nov 13 08:28:04.456630 kubelet[2289]: E1113 08:28:04.456005 2289 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 13 08:28:04.752455 systemd[1]: Reloading requested from client PID 2577 ('systemctl') (unit session-9.scope)...
Nov 13 08:28:04.752485 systemd[1]: Reloading...
Nov 13 08:28:04.910962 zram_generator::config[2622]: No configuration found.
Nov 13 08:28:05.101709 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 13 08:28:05.218256 kubelet[2289]: E1113 08:28:05.218121 2289 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 13 08:28:05.268650 systemd[1]: Reloading finished in 515 ms.
Nov 13 08:28:05.329256 kubelet[2289]: I1113 08:28:05.329203 2289 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 13 08:28:05.329956 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 13 08:28:05.346710 systemd[1]: kubelet.service: Deactivated successfully.
Nov 13 08:28:05.347231 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 13 08:28:05.347333 systemd[1]: kubelet.service: Consumed 1.395s CPU time, 112.2M memory peak, 0B memory swap peak.
Nov 13 08:28:05.356627 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 13 08:28:05.729318 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 13 08:28:05.735176 (kubelet)[2667]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 13 08:28:05.833824 kubelet[2667]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 13 08:28:05.833824 kubelet[2667]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Nov 13 08:28:05.833824 kubelet[2667]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 13 08:28:05.834408 kubelet[2667]: I1113 08:28:05.833888 2667 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 13 08:28:05.846292 kubelet[2667]: I1113 08:28:05.846214 2667 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Nov 13 08:28:05.846292 kubelet[2667]: I1113 08:28:05.846257 2667 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 13 08:28:05.846721 kubelet[2667]: I1113 08:28:05.846565 2667 server.go:927] "Client rotation is on, will bootstrap in background"
Nov 13 08:28:05.849552 kubelet[2667]: I1113 08:28:05.849028 2667 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Nov 13 08:28:05.852585 kubelet[2667]: I1113 08:28:05.852136 2667 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 13 08:28:05.878713 kubelet[2667]: I1113 08:28:05.878646 2667 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 13 08:28:05.879567 kubelet[2667]: I1113 08:28:05.879493 2667 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 13 08:28:05.880333 kubelet[2667]: I1113 08:28:05.879564 2667 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4152.0.0-d-03c8fd271e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Nov 13 08:28:05.880333 kubelet[2667]: I1113 08:28:05.879900 2667 topology_manager.go:138] "Creating topology manager with none policy"
Nov 13 08:28:05.880333 kubelet[2667]: I1113 08:28:05.879939 2667 container_manager_linux.go:301] "Creating device plugin manager"
Nov 13 08:28:05.880333 kubelet[2667]: I1113 08:28:05.880007 2667 state_mem.go:36] "Initialized new in-memory state store"
Nov 13 08:28:05.880333 kubelet[2667]: I1113 08:28:05.880183 2667 kubelet.go:400] "Attempting to sync node with API server"
Nov 13 08:28:05.880684 kubelet[2667]: I1113 08:28:05.880203 2667 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 13 08:28:05.881578 kubelet[2667]: I1113 08:28:05.881540 2667 kubelet.go:312] "Adding apiserver pod source"
Nov 13 08:28:05.881678 kubelet[2667]: I1113 08:28:05.881601 2667 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 13 08:28:05.887966 kubelet[2667]: I1113 08:28:05.886012 2667 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Nov 13 08:28:05.887966 kubelet[2667]: I1113 08:28:05.886435 2667 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 13 08:28:05.887966 kubelet[2667]: I1113 08:28:05.887362 2667 server.go:1264] "Started kubelet"
Nov 13 08:28:05.901964 kubelet[2667]: I1113 08:28:05.901383 2667 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 13 08:28:05.911831 kubelet[2667]: I1113 08:28:05.910384 2667 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Nov 13 08:28:05.929589 kubelet[2667]: I1113 08:28:05.925177 2667 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 13 08:28:05.937971 sudo[2682]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Nov 13 08:28:05.938808 kubelet[2667]: I1113 08:28:05.938555 2667 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 13 08:28:05.938808 kubelet[2667]: I1113 08:28:05.938170 2667 server.go:455] "Adding debug handlers to kubelet server"
Nov 13 08:28:05.939598 sudo[2682]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Nov 13 08:28:05.941984 kubelet[2667]: I1113 08:28:05.941501 2667 volume_manager.go:291] "Starting Kubelet Volume Manager"
Nov 13 08:28:05.943373 kubelet[2667]: I1113 08:28:05.928598 2667 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Nov 13 08:28:05.945101 kubelet[2667]: I1113 08:28:05.944736 2667 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Nov 13 08:28:05.945944 kubelet[2667]: I1113 08:28:05.945836 2667 reconciler.go:26] "Reconciler: start to sync state"
Nov 13 08:28:05.954211 kubelet[2667]: I1113 08:28:05.954141 2667 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Nov 13 08:28:05.954379 kubelet[2667]: I1113 08:28:05.954340 2667 status_manager.go:217] "Starting to sync pod status with apiserver"
Nov 13 08:28:05.954379 kubelet[2667]: I1113 08:28:05.954377 2667 kubelet.go:2337] "Starting kubelet main sync loop"
Nov 13 08:28:05.954501 kubelet[2667]: E1113 08:28:05.954462 2667 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 13 08:28:05.967124 kubelet[2667]: I1113 08:28:05.966761 2667 factory.go:221] Registration of the systemd container factory successfully
Nov 13 08:28:05.967538 kubelet[2667]: I1113 08:28:05.967500 2667 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 13 08:28:05.974666 kubelet[2667]: E1113 08:28:05.974629 2667 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 13 08:28:05.982114 kubelet[2667]: I1113 08:28:05.981964 2667 factory.go:221] Registration of the containerd container factory successfully
Nov 13 08:28:06.046826 kubelet[2667]: I1113 08:28:06.046182 2667 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.0.0-d-03c8fd271e"
Nov 13 08:28:06.055058 kubelet[2667]: E1113 08:28:06.055005 2667 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Nov 13 08:28:06.081357 kubelet[2667]: I1113 08:28:06.081265 2667 kubelet_node_status.go:112] "Node was previously registered" node="ci-4152.0.0-d-03c8fd271e"
Nov 13 08:28:06.081757 kubelet[2667]: I1113 08:28:06.081731 2667 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152.0.0-d-03c8fd271e"
Nov 13 08:28:06.133954 kubelet[2667]: I1113 08:28:06.133891 2667 cpu_manager.go:214] "Starting CPU manager" policy="none"
Nov 13 08:28:06.133954 kubelet[2667]: I1113 08:28:06.133939 2667 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Nov 13 08:28:06.133954 kubelet[2667]: I1113 08:28:06.133980 2667 state_mem.go:36] "Initialized new in-memory state store"
Nov 13 08:28:06.134261 kubelet[2667]: I1113 08:28:06.134228 2667 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Nov 13 08:28:06.134307 kubelet[2667]: I1113 08:28:06.134246 2667 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Nov 13 08:28:06.134307 kubelet[2667]: I1113 08:28:06.134276 2667 policy_none.go:49] "None policy: Start"
Nov 13 08:28:06.138936 kubelet[2667]: I1113 08:28:06.138382 2667 memory_manager.go:170] "Starting memorymanager" policy="None"
Nov 13 08:28:06.138936 kubelet[2667]: I1113 08:28:06.138467 2667 state_mem.go:35] "Initializing new in-memory state store"
Nov 13 08:28:06.138936 kubelet[2667]: I1113 08:28:06.138782 2667 state_mem.go:75] "Updated machine memory state"
Nov 13 08:28:06.151310 kubelet[2667]: I1113 08:28:06.151265 2667 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Nov 13 08:28:06.151673 kubelet[2667]: I1113 08:28:06.151610 2667 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 13 08:28:06.162951 kubelet[2667]: I1113 08:28:06.161497 2667 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 13 08:28:06.256455 kubelet[2667]: I1113 08:28:06.256092 2667 topology_manager.go:215] "Topology Admit Handler" podUID="c1dcd674b00bce064adc934e1d963fc2" podNamespace="kube-system" podName="kube-controller-manager-ci-4152.0.0-d-03c8fd271e"
Nov 13 08:28:06.256455 kubelet[2667]: I1113 08:28:06.256288 2667 topology_manager.go:215] "Topology Admit Handler" podUID="a6d4adf6aedc538b7914a0c6d8d4393f" podNamespace="kube-system" podName="kube-scheduler-ci-4152.0.0-d-03c8fd271e"
Nov 13 08:28:06.256455 kubelet[2667]: I1113 08:28:06.256393 2667 topology_manager.go:215] "Topology Admit Handler" podUID="61611bb57d653ac3fccafe3fd856e728" podNamespace="kube-system" podName="kube-apiserver-ci-4152.0.0-d-03c8fd271e"
Nov 13 08:28:06.270067 kubelet[2667]: W1113 08:28:06.269980 2667 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Nov 13 08:28:06.287075 kubelet[2667]: W1113 08:28:06.286509 2667 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Nov 13 08:28:06.287075 kubelet[2667]: W1113 08:28:06.286803 2667 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Nov 13 08:28:06.287075 kubelet[2667]: E1113 08:28:06.286870 2667 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4152.0.0-d-03c8fd271e\" already exists" pod="kube-system/kube-scheduler-ci-4152.0.0-d-03c8fd271e"
Nov 13 08:28:06.349942 kubelet[2667]: I1113 08:28:06.349866 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c1dcd674b00bce064adc934e1d963fc2-ca-certs\") pod \"kube-controller-manager-ci-4152.0.0-d-03c8fd271e\" (UID: \"c1dcd674b00bce064adc934e1d963fc2\") " pod="kube-system/kube-controller-manager-ci-4152.0.0-d-03c8fd271e"
Nov 13 08:28:06.351000 kubelet[2667]: I1113 08:28:06.350256 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c1dcd674b00bce064adc934e1d963fc2-k8s-certs\") pod \"kube-controller-manager-ci-4152.0.0-d-03c8fd271e\" (UID: \"c1dcd674b00bce064adc934e1d963fc2\") " pod="kube-system/kube-controller-manager-ci-4152.0.0-d-03c8fd271e"
Nov 13 08:28:06.351000 kubelet[2667]: I1113 08:28:06.350336 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c1dcd674b00bce064adc934e1d963fc2-kubeconfig\") pod \"kube-controller-manager-ci-4152.0.0-d-03c8fd271e\" (UID: \"c1dcd674b00bce064adc934e1d963fc2\") " pod="kube-system/kube-controller-manager-ci-4152.0.0-d-03c8fd271e"
Nov 13 08:28:06.351000 kubelet[2667]: I1113 08:28:06.350391 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c1dcd674b00bce064adc934e1d963fc2-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152.0.0-d-03c8fd271e\" (UID: \"c1dcd674b00bce064adc934e1d963fc2\") " pod="kube-system/kube-controller-manager-ci-4152.0.0-d-03c8fd271e"
Nov 13 08:28:06.351000 kubelet[2667]: I1113 08:28:06.350422 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/61611bb57d653ac3fccafe3fd856e728-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152.0.0-d-03c8fd271e\" (UID: \"61611bb57d653ac3fccafe3fd856e728\") " pod="kube-system/kube-apiserver-ci-4152.0.0-d-03c8fd271e"
Nov 13 08:28:06.351000 kubelet[2667]: I1113 08:28:06.350464 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c1dcd674b00bce064adc934e1d963fc2-flexvolume-dir\") pod \"kube-controller-manager-ci-4152.0.0-d-03c8fd271e\" (UID: \"c1dcd674b00bce064adc934e1d963fc2\") " pod="kube-system/kube-controller-manager-ci-4152.0.0-d-03c8fd271e"
Nov 13 08:28:06.351408 kubelet[2667]: I1113 08:28:06.350496 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a6d4adf6aedc538b7914a0c6d8d4393f-kubeconfig\") pod \"kube-scheduler-ci-4152.0.0-d-03c8fd271e\" (UID: \"a6d4adf6aedc538b7914a0c6d8d4393f\") " pod="kube-system/kube-scheduler-ci-4152.0.0-d-03c8fd271e"
Nov 13 08:28:06.351408 kubelet[2667]: I1113 08:28:06.350532 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/61611bb57d653ac3fccafe3fd856e728-ca-certs\") pod \"kube-apiserver-ci-4152.0.0-d-03c8fd271e\" (UID: \"61611bb57d653ac3fccafe3fd856e728\") " pod="kube-system/kube-apiserver-ci-4152.0.0-d-03c8fd271e"
Nov 13 08:28:06.351408 kubelet[2667]: I1113 08:28:06.350571 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/61611bb57d653ac3fccafe3fd856e728-k8s-certs\") pod \"kube-apiserver-ci-4152.0.0-d-03c8fd271e\" (UID: \"61611bb57d653ac3fccafe3fd856e728\") " pod="kube-system/kube-apiserver-ci-4152.0.0-d-03c8fd271e"
Nov 13 08:28:06.574027 kubelet[2667]: E1113
08:28:06.573513 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:28:06.588780 kubelet[2667]: E1113 08:28:06.587407 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:28:06.588780 kubelet[2667]: E1113 08:28:06.588314 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:28:06.852264 sudo[2682]: pam_unix(sudo:session): session closed for user root Nov 13 08:28:06.884935 kubelet[2667]: I1113 08:28:06.884531 2667 apiserver.go:52] "Watching apiserver" Nov 13 08:28:06.947146 kubelet[2667]: I1113 08:28:06.945954 2667 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Nov 13 08:28:07.039747 kubelet[2667]: E1113 08:28:07.039684 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:28:07.040997 kubelet[2667]: E1113 08:28:07.040961 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:28:07.063307 kubelet[2667]: W1113 08:28:07.063245 2667 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 13 08:28:07.066970 kubelet[2667]: E1113 08:28:07.063930 2667 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4152.0.0-d-03c8fd271e\" already exists" 
pod="kube-system/kube-apiserver-ci-4152.0.0-d-03c8fd271e" Nov 13 08:28:07.066970 kubelet[2667]: E1113 08:28:07.064506 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:28:07.117622 kubelet[2667]: I1113 08:28:07.117396 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4152.0.0-d-03c8fd271e" podStartSLOduration=3.117372625 podStartE2EDuration="3.117372625s" podCreationTimestamp="2024-11-13 08:28:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-13 08:28:07.100686599 +0000 UTC m=+1.358170300" watchObservedRunningTime="2024-11-13 08:28:07.117372625 +0000 UTC m=+1.374856299" Nov 13 08:28:07.137397 kubelet[2667]: I1113 08:28:07.136113 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4152.0.0-d-03c8fd271e" podStartSLOduration=1.136082934 podStartE2EDuration="1.136082934s" podCreationTimestamp="2024-11-13 08:28:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-13 08:28:07.117675351 +0000 UTC m=+1.375159035" watchObservedRunningTime="2024-11-13 08:28:07.136082934 +0000 UTC m=+1.393566637" Nov 13 08:28:08.042238 kubelet[2667]: E1113 08:28:08.042075 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:28:08.748242 sudo[1673]: pam_unix(sudo:session): session closed for user root Nov 13 08:28:08.751750 sshd[1672]: Connection closed by 139.178.89.65 port 46094 Nov 13 08:28:08.753036 sshd-session[1670]: pam_unix(sshd:session): session closed for user core Nov 13 08:28:08.758887 
systemd-logind[1452]: Session 9 logged out. Waiting for processes to exit. Nov 13 08:28:08.760185 systemd[1]: sshd@8-209.38.128.242:22-139.178.89.65:46094.service: Deactivated successfully. Nov 13 08:28:08.766555 systemd[1]: session-9.scope: Deactivated successfully. Nov 13 08:28:08.767182 systemd[1]: session-9.scope: Consumed 7.965s CPU time, 185.6M memory peak, 0B memory swap peak. Nov 13 08:28:08.770288 systemd-logind[1452]: Removed session 9. Nov 13 08:28:10.314865 kubelet[2667]: E1113 08:28:10.314758 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:28:10.350078 kubelet[2667]: I1113 08:28:10.349177 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4152.0.0-d-03c8fd271e" podStartSLOduration=4.349147511 podStartE2EDuration="4.349147511s" podCreationTimestamp="2024-11-13 08:28:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-13 08:28:07.137006445 +0000 UTC m=+1.394490146" watchObservedRunningTime="2024-11-13 08:28:10.349147511 +0000 UTC m=+4.606631202" Nov 13 08:28:11.049655 kubelet[2667]: E1113 08:28:11.049070 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:28:13.325896 kubelet[2667]: E1113 08:28:13.325700 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:28:14.054485 kubelet[2667]: E1113 08:28:14.054393 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:28:15.962725 kubelet[2667]: E1113 08:28:15.961335 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:28:16.063127 kubelet[2667]: E1113 08:28:16.063044 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:28:18.764659 kubelet[2667]: I1113 08:28:18.764534 2667 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 13 08:28:18.765412 containerd[1479]: time="2024-11-13T08:28:18.765329738Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 13 08:28:18.765956 kubelet[2667]: I1113 08:28:18.765901 2667 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 13 08:28:19.481176 kubelet[2667]: I1113 08:28:19.480670 2667 topology_manager.go:215] "Topology Admit Handler" podUID="5201005f-fe13-46ff-a1c1-d83e37fdae56" podNamespace="kube-system" podName="kube-proxy-s9n82" Nov 13 08:28:19.482713 kubelet[2667]: I1113 08:28:19.482664 2667 topology_manager.go:215] "Topology Admit Handler" podUID="15ba1b0d-69b7-450f-9421-04bef59857dc" podNamespace="kube-system" podName="cilium-7lzv9" Nov 13 08:28:19.497125 systemd[1]: Created slice kubepods-besteffort-pod5201005f_fe13_46ff_a1c1_d83e37fdae56.slice - libcontainer container kubepods-besteffort-pod5201005f_fe13_46ff_a1c1_d83e37fdae56.slice. Nov 13 08:28:19.525128 systemd[1]: Created slice kubepods-burstable-pod15ba1b0d_69b7_450f_9421_04bef59857dc.slice - libcontainer container kubepods-burstable-pod15ba1b0d_69b7_450f_9421_04bef59857dc.slice. 
Nov 13 08:28:19.546500 kubelet[2667]: I1113 08:28:19.546439 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/15ba1b0d-69b7-450f-9421-04bef59857dc-clustermesh-secrets\") pod \"cilium-7lzv9\" (UID: \"15ba1b0d-69b7-450f-9421-04bef59857dc\") " pod="kube-system/cilium-7lzv9"
Nov 13 08:28:19.546500 kubelet[2667]: I1113 08:28:19.546497 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/15ba1b0d-69b7-450f-9421-04bef59857dc-host-proc-sys-net\") pod \"cilium-7lzv9\" (UID: \"15ba1b0d-69b7-450f-9421-04bef59857dc\") " pod="kube-system/cilium-7lzv9"
Nov 13 08:28:19.546720 kubelet[2667]: I1113 08:28:19.546533 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5201005f-fe13-46ff-a1c1-d83e37fdae56-lib-modules\") pod \"kube-proxy-s9n82\" (UID: \"5201005f-fe13-46ff-a1c1-d83e37fdae56\") " pod="kube-system/kube-proxy-s9n82"
Nov 13 08:28:19.546720 kubelet[2667]: I1113 08:28:19.546575 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/15ba1b0d-69b7-450f-9421-04bef59857dc-cilium-cgroup\") pod \"cilium-7lzv9\" (UID: \"15ba1b0d-69b7-450f-9421-04bef59857dc\") " pod="kube-system/cilium-7lzv9"
Nov 13 08:28:19.546720 kubelet[2667]: I1113 08:28:19.546601 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/15ba1b0d-69b7-450f-9421-04bef59857dc-cni-path\") pod \"cilium-7lzv9\" (UID: \"15ba1b0d-69b7-450f-9421-04bef59857dc\") " pod="kube-system/cilium-7lzv9"
Nov 13 08:28:19.546720 kubelet[2667]: I1113 08:28:19.546630 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5201005f-fe13-46ff-a1c1-d83e37fdae56-kube-proxy\") pod \"kube-proxy-s9n82\" (UID: \"5201005f-fe13-46ff-a1c1-d83e37fdae56\") " pod="kube-system/kube-proxy-s9n82"
Nov 13 08:28:19.546720 kubelet[2667]: I1113 08:28:19.546655 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/15ba1b0d-69b7-450f-9421-04bef59857dc-bpf-maps\") pod \"cilium-7lzv9\" (UID: \"15ba1b0d-69b7-450f-9421-04bef59857dc\") " pod="kube-system/cilium-7lzv9"
Nov 13 08:28:19.546720 kubelet[2667]: I1113 08:28:19.546682 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/15ba1b0d-69b7-450f-9421-04bef59857dc-cilium-config-path\") pod \"cilium-7lzv9\" (UID: \"15ba1b0d-69b7-450f-9421-04bef59857dc\") " pod="kube-system/cilium-7lzv9"
Nov 13 08:28:19.546869 kubelet[2667]: I1113 08:28:19.546711 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/15ba1b0d-69b7-450f-9421-04bef59857dc-lib-modules\") pod \"cilium-7lzv9\" (UID: \"15ba1b0d-69b7-450f-9421-04bef59857dc\") " pod="kube-system/cilium-7lzv9"
Nov 13 08:28:19.546869 kubelet[2667]: I1113 08:28:19.546767 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/15ba1b0d-69b7-450f-9421-04bef59857dc-xtables-lock\") pod \"cilium-7lzv9\" (UID: \"15ba1b0d-69b7-450f-9421-04bef59857dc\") " pod="kube-system/cilium-7lzv9"
Nov 13 08:28:19.546869 kubelet[2667]: I1113 08:28:19.546798 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/15ba1b0d-69b7-450f-9421-04bef59857dc-host-proc-sys-kernel\") pod \"cilium-7lzv9\" (UID: \"15ba1b0d-69b7-450f-9421-04bef59857dc\") " pod="kube-system/cilium-7lzv9"
Nov 13 08:28:19.546869 kubelet[2667]: I1113 08:28:19.546828 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/15ba1b0d-69b7-450f-9421-04bef59857dc-cilium-run\") pod \"cilium-7lzv9\" (UID: \"15ba1b0d-69b7-450f-9421-04bef59857dc\") " pod="kube-system/cilium-7lzv9"
Nov 13 08:28:19.546869 kubelet[2667]: I1113 08:28:19.546856 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/15ba1b0d-69b7-450f-9421-04bef59857dc-hubble-tls\") pod \"cilium-7lzv9\" (UID: \"15ba1b0d-69b7-450f-9421-04bef59857dc\") " pod="kube-system/cilium-7lzv9"
Nov 13 08:28:19.547082 kubelet[2667]: I1113 08:28:19.546881 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/15ba1b0d-69b7-450f-9421-04bef59857dc-hostproc\") pod \"cilium-7lzv9\" (UID: \"15ba1b0d-69b7-450f-9421-04bef59857dc\") " pod="kube-system/cilium-7lzv9"
Nov 13 08:28:19.548997 kubelet[2667]: I1113 08:28:19.547132 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/15ba1b0d-69b7-450f-9421-04bef59857dc-etc-cni-netd\") pod \"cilium-7lzv9\" (UID: \"15ba1b0d-69b7-450f-9421-04bef59857dc\") " pod="kube-system/cilium-7lzv9"
Nov 13 08:28:19.548997 kubelet[2667]: I1113 08:28:19.547234 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz47g\" (UniqueName: \"kubernetes.io/projected/5201005f-fe13-46ff-a1c1-d83e37fdae56-kube-api-access-zz47g\") pod \"kube-proxy-s9n82\" (UID: \"5201005f-fe13-46ff-a1c1-d83e37fdae56\") " pod="kube-system/kube-proxy-s9n82"
Nov 13 08:28:19.548997 kubelet[2667]: I1113 08:28:19.547272 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5201005f-fe13-46ff-a1c1-d83e37fdae56-xtables-lock\") pod \"kube-proxy-s9n82\" (UID: \"5201005f-fe13-46ff-a1c1-d83e37fdae56\") " pod="kube-system/kube-proxy-s9n82"
Nov 13 08:28:19.548997 kubelet[2667]: I1113 08:28:19.547336 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zwgw\" (UniqueName: \"kubernetes.io/projected/15ba1b0d-69b7-450f-9421-04bef59857dc-kube-api-access-2zwgw\") pod \"cilium-7lzv9\" (UID: \"15ba1b0d-69b7-450f-9421-04bef59857dc\") " pod="kube-system/cilium-7lzv9"
Nov 13 08:28:19.821098 kubelet[2667]: E1113 08:28:19.819626 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 13 08:28:19.826501 containerd[1479]: time="2024-11-13T08:28:19.825203744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s9n82,Uid:5201005f-fe13-46ff-a1c1-d83e37fdae56,Namespace:kube-system,Attempt:0,}"
Nov 13 08:28:19.835273 kubelet[2667]: E1113 08:28:19.834653 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 13 08:28:19.837813 containerd[1479]: time="2024-11-13T08:28:19.837400624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7lzv9,Uid:15ba1b0d-69b7-450f-9421-04bef59857dc,Namespace:kube-system,Attempt:0,}"
Nov 13 08:28:19.919273 kubelet[2667]: I1113 08:28:19.918508 2667 topology_manager.go:215] "Topology Admit Handler" podUID="19e53ac0-5b84-459e-bba2-e0a2e93677d4" podNamespace="kube-system" podName="cilium-operator-599987898-gxqdd"
Nov 13 08:28:19.936141 systemd[1]: Created slice kubepods-besteffort-pod19e53ac0_5b84_459e_bba2_e0a2e93677d4.slice - libcontainer container kubepods-besteffort-pod19e53ac0_5b84_459e_bba2_e0a2e93677d4.slice.
Nov 13 08:28:19.952604 kubelet[2667]: I1113 08:28:19.952388 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/19e53ac0-5b84-459e-bba2-e0a2e93677d4-cilium-config-path\") pod \"cilium-operator-599987898-gxqdd\" (UID: \"19e53ac0-5b84-459e-bba2-e0a2e93677d4\") " pod="kube-system/cilium-operator-599987898-gxqdd"
Nov 13 08:28:19.959057 kubelet[2667]: I1113 08:28:19.958057 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9jfd\" (UniqueName: \"kubernetes.io/projected/19e53ac0-5b84-459e-bba2-e0a2e93677d4-kube-api-access-m9jfd\") pod \"cilium-operator-599987898-gxqdd\" (UID: \"19e53ac0-5b84-459e-bba2-e0a2e93677d4\") " pod="kube-system/cilium-operator-599987898-gxqdd"
Nov 13 08:28:19.975121 containerd[1479]: time="2024-11-13T08:28:19.974521702Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 13 08:28:19.975121 containerd[1479]: time="2024-11-13T08:28:19.974672336Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 13 08:28:19.975121 containerd[1479]: time="2024-11-13T08:28:19.974696947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 13 08:28:19.975121 containerd[1479]: time="2024-11-13T08:28:19.974841310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 13 08:28:19.992014 containerd[1479]: time="2024-11-13T08:28:19.991036189Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 13 08:28:19.992508 containerd[1479]: time="2024-11-13T08:28:19.992031030Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 13 08:28:19.992508 containerd[1479]: time="2024-11-13T08:28:19.992092483Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 13 08:28:19.992508 containerd[1479]: time="2024-11-13T08:28:19.992374876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 13 08:28:20.019801 systemd[1]: Started cri-containerd-b45a9c284b15c95cfa8e76592f452dc7eb5b6715024d7c49efd3baea8cbaf7d9.scope - libcontainer container b45a9c284b15c95cfa8e76592f452dc7eb5b6715024d7c49efd3baea8cbaf7d9.
Nov 13 08:28:20.027808 systemd[1]: Started cri-containerd-1454ad8c586660d4c58444135d005546258c6174e6486244a42b527840b7ae4f.scope - libcontainer container 1454ad8c586660d4c58444135d005546258c6174e6486244a42b527840b7ae4f.
Nov 13 08:28:20.089350 containerd[1479]: time="2024-11-13T08:28:20.089215225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s9n82,Uid:5201005f-fe13-46ff-a1c1-d83e37fdae56,Namespace:kube-system,Attempt:0,} returns sandbox id \"b45a9c284b15c95cfa8e76592f452dc7eb5b6715024d7c49efd3baea8cbaf7d9\""
Nov 13 08:28:20.091470 kubelet[2667]: E1113 08:28:20.091416 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 13 08:28:20.102674 containerd[1479]: time="2024-11-13T08:28:20.102123054Z" level=info msg="CreateContainer within sandbox \"b45a9c284b15c95cfa8e76592f452dc7eb5b6715024d7c49efd3baea8cbaf7d9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Nov 13 08:28:20.102966 containerd[1479]: time="2024-11-13T08:28:20.102851013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7lzv9,Uid:15ba1b0d-69b7-450f-9421-04bef59857dc,Namespace:kube-system,Attempt:0,} returns sandbox id \"1454ad8c586660d4c58444135d005546258c6174e6486244a42b527840b7ae4f\""
Nov 13 08:28:20.106187 kubelet[2667]: E1113 08:28:20.105845 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 13 08:28:20.109043 containerd[1479]: time="2024-11-13T08:28:20.108582094Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Nov 13 08:28:20.140216 containerd[1479]: time="2024-11-13T08:28:20.139977393Z" level=info msg="CreateContainer within sandbox \"b45a9c284b15c95cfa8e76592f452dc7eb5b6715024d7c49efd3baea8cbaf7d9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"87d81f518c3e653fbb53042cb0f476171b9491cf4826c333550eeb1605d3b94a\""
Nov 13 08:28:20.142381 containerd[1479]: time="2024-11-13T08:28:20.141300886Z" level=info msg="StartContainer for \"87d81f518c3e653fbb53042cb0f476171b9491cf4826c333550eeb1605d3b94a\""
Nov 13 08:28:20.186312 systemd[1]: Started cri-containerd-87d81f518c3e653fbb53042cb0f476171b9491cf4826c333550eeb1605d3b94a.scope - libcontainer container 87d81f518c3e653fbb53042cb0f476171b9491cf4826c333550eeb1605d3b94a.
Nov 13 08:28:20.237332 containerd[1479]: time="2024-11-13T08:28:20.237242606Z" level=info msg="StartContainer for \"87d81f518c3e653fbb53042cb0f476171b9491cf4826c333550eeb1605d3b94a\" returns successfully"
Nov 13 08:28:20.259698 kubelet[2667]: E1113 08:28:20.259573 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 13 08:28:20.262069 containerd[1479]: time="2024-11-13T08:28:20.261848603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-gxqdd,Uid:19e53ac0-5b84-459e-bba2-e0a2e93677d4,Namespace:kube-system,Attempt:0,}"
Nov 13 08:28:20.318584 containerd[1479]: time="2024-11-13T08:28:20.318260584Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 13 08:28:20.320070 containerd[1479]: time="2024-11-13T08:28:20.319957903Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 13 08:28:20.320070 containerd[1479]: time="2024-11-13T08:28:20.320011769Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 13 08:28:20.321077 containerd[1479]: time="2024-11-13T08:28:20.320950456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 13 08:28:20.359408 systemd[1]: Started cri-containerd-d883704de844e456c19638ce85d8a903ace61e9c5a7e1edf97792ad586714539.scope - libcontainer container d883704de844e456c19638ce85d8a903ace61e9c5a7e1edf97792ad586714539.
Nov 13 08:28:20.485615 containerd[1479]: time="2024-11-13T08:28:20.485550705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-gxqdd,Uid:19e53ac0-5b84-459e-bba2-e0a2e93677d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"d883704de844e456c19638ce85d8a903ace61e9c5a7e1edf97792ad586714539\""
Nov 13 08:28:20.490111 kubelet[2667]: E1113 08:28:20.488233 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 13 08:28:21.081243 kubelet[2667]: E1113 08:28:21.081202 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 13 08:28:21.099001 kubelet[2667]: I1113 08:28:21.098813 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-s9n82" podStartSLOduration=2.098792621 podStartE2EDuration="2.098792621s" podCreationTimestamp="2024-11-13 08:28:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-13 08:28:21.098299324 +0000 UTC m=+15.355783013" watchObservedRunningTime="2024-11-13 08:28:21.098792621 +0000 UTC m=+15.356276306"
Nov 13 08:28:29.664357 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount332137690.mount: Deactivated successfully.
Nov 13 08:28:33.272797 containerd[1479]: time="2024-11-13T08:28:33.272716274Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 13 08:28:33.275777 containerd[1479]: time="2024-11-13T08:28:33.275577775Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735283"
Nov 13 08:28:33.275777 containerd[1479]: time="2024-11-13T08:28:33.275710130Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 13 08:28:33.280361 containerd[1479]: time="2024-11-13T08:28:33.280280422Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 13.171641275s"
Nov 13 08:28:33.280361 containerd[1479]: time="2024-11-13T08:28:33.280351772Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Nov 13 08:28:33.284250 containerd[1479]: time="2024-11-13T08:28:33.283985548Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Nov 13 08:28:33.287053 containerd[1479]: time="2024-11-13T08:28:33.286986947Z" level=info msg="CreateContainer within sandbox \"1454ad8c586660d4c58444135d005546258c6174e6486244a42b527840b7ae4f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Nov 13 08:28:33.374897 containerd[1479]: time="2024-11-13T08:28:33.374737651Z" level=info msg="CreateContainer within sandbox \"1454ad8c586660d4c58444135d005546258c6174e6486244a42b527840b7ae4f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d80a5dfb3d14cca5024412851115b9612fac38147f8cfc942578fee389051279\""
Nov 13 08:28:33.377979 containerd[1479]: time="2024-11-13T08:28:33.376706555Z" level=info msg="StartContainer for \"d80a5dfb3d14cca5024412851115b9612fac38147f8cfc942578fee389051279\""
Nov 13 08:28:33.504383 systemd[1]: Started cri-containerd-d80a5dfb3d14cca5024412851115b9612fac38147f8cfc942578fee389051279.scope - libcontainer container d80a5dfb3d14cca5024412851115b9612fac38147f8cfc942578fee389051279.
Nov 13 08:28:33.561085 containerd[1479]: time="2024-11-13T08:28:33.560173575Z" level=info msg="StartContainer for \"d80a5dfb3d14cca5024412851115b9612fac38147f8cfc942578fee389051279\" returns successfully"
Nov 13 08:28:33.577753 systemd[1]: cri-containerd-d80a5dfb3d14cca5024412851115b9612fac38147f8cfc942578fee389051279.scope: Deactivated successfully.
Nov 13 08:28:33.823113 containerd[1479]: time="2024-11-13T08:28:33.797193860Z" level=info msg="shim disconnected" id=d80a5dfb3d14cca5024412851115b9612fac38147f8cfc942578fee389051279 namespace=k8s.io
Nov 13 08:28:33.823113 containerd[1479]: time="2024-11-13T08:28:33.822543077Z" level=warning msg="cleaning up after shim disconnected" id=d80a5dfb3d14cca5024412851115b9612fac38147f8cfc942578fee389051279 namespace=k8s.io
Nov 13 08:28:33.823113 containerd[1479]: time="2024-11-13T08:28:33.822568666Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 13 08:28:34.120229 kubelet[2667]: E1113 08:28:34.117713 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 13 08:28:34.127426 containerd[1479]: time="2024-11-13T08:28:34.126996613Z" level=info msg="CreateContainer within sandbox \"1454ad8c586660d4c58444135d005546258c6174e6486244a42b527840b7ae4f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Nov 13 08:28:34.156212 containerd[1479]: time="2024-11-13T08:28:34.156132293Z" level=info msg="CreateContainer within sandbox \"1454ad8c586660d4c58444135d005546258c6174e6486244a42b527840b7ae4f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c303de109d2a7703b7f6b24e682eb1b5316cbe05c002639a41f83d256119a1ba\""
Nov 13 08:28:34.159332 containerd[1479]: time="2024-11-13T08:28:34.157274220Z" level=info msg="StartContainer for \"c303de109d2a7703b7f6b24e682eb1b5316cbe05c002639a41f83d256119a1ba\""
Nov 13 08:28:34.218459 systemd[1]: Started cri-containerd-c303de109d2a7703b7f6b24e682eb1b5316cbe05c002639a41f83d256119a1ba.scope - libcontainer container c303de109d2a7703b7f6b24e682eb1b5316cbe05c002639a41f83d256119a1ba.
Nov 13 08:28:34.273366 containerd[1479]: time="2024-11-13T08:28:34.273279975Z" level=info msg="StartContainer for \"c303de109d2a7703b7f6b24e682eb1b5316cbe05c002639a41f83d256119a1ba\" returns successfully" Nov 13 08:28:34.294149 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 13 08:28:34.294606 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 13 08:28:34.294721 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Nov 13 08:28:34.305699 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 13 08:28:34.308544 systemd[1]: cri-containerd-c303de109d2a7703b7f6b24e682eb1b5316cbe05c002639a41f83d256119a1ba.scope: Deactivated successfully. Nov 13 08:28:34.349818 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 13 08:28:34.368523 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d80a5dfb3d14cca5024412851115b9612fac38147f8cfc942578fee389051279-rootfs.mount: Deactivated successfully. Nov 13 08:28:34.381072 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c303de109d2a7703b7f6b24e682eb1b5316cbe05c002639a41f83d256119a1ba-rootfs.mount: Deactivated successfully. 
Nov 13 08:28:34.393968 containerd[1479]: time="2024-11-13T08:28:34.393377317Z" level=info msg="shim disconnected" id=c303de109d2a7703b7f6b24e682eb1b5316cbe05c002639a41f83d256119a1ba namespace=k8s.io Nov 13 08:28:34.393968 containerd[1479]: time="2024-11-13T08:28:34.393493043Z" level=warning msg="cleaning up after shim disconnected" id=c303de109d2a7703b7f6b24e682eb1b5316cbe05c002639a41f83d256119a1ba namespace=k8s.io Nov 13 08:28:34.393968 containerd[1479]: time="2024-11-13T08:28:34.393509034Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 13 08:28:35.123238 kubelet[2667]: E1113 08:28:35.123185 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:28:35.131490 containerd[1479]: time="2024-11-13T08:28:35.130684236Z" level=info msg="CreateContainer within sandbox \"1454ad8c586660d4c58444135d005546258c6174e6486244a42b527840b7ae4f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 13 08:28:35.204428 containerd[1479]: time="2024-11-13T08:28:35.204273251Z" level=info msg="CreateContainer within sandbox \"1454ad8c586660d4c58444135d005546258c6174e6486244a42b527840b7ae4f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ad874a07aeafdfa29cd6da759fc80d73d78a74b6684983a18d77a4cd219d5de3\"" Nov 13 08:28:35.207168 containerd[1479]: time="2024-11-13T08:28:35.205126711Z" level=info msg="StartContainer for \"ad874a07aeafdfa29cd6da759fc80d73d78a74b6684983a18d77a4cd219d5de3\"" Nov 13 08:28:35.259337 systemd[1]: Started cri-containerd-ad874a07aeafdfa29cd6da759fc80d73d78a74b6684983a18d77a4cd219d5de3.scope - libcontainer container ad874a07aeafdfa29cd6da759fc80d73d78a74b6684983a18d77a4cd219d5de3. 
Nov 13 08:28:35.320967 containerd[1479]: time="2024-11-13T08:28:35.320814461Z" level=info msg="StartContainer for \"ad874a07aeafdfa29cd6da759fc80d73d78a74b6684983a18d77a4cd219d5de3\" returns successfully" Nov 13 08:28:35.325959 systemd[1]: cri-containerd-ad874a07aeafdfa29cd6da759fc80d73d78a74b6684983a18d77a4cd219d5de3.scope: Deactivated successfully. Nov 13 08:28:35.381871 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad874a07aeafdfa29cd6da759fc80d73d78a74b6684983a18d77a4cd219d5de3-rootfs.mount: Deactivated successfully. Nov 13 08:28:35.392752 containerd[1479]: time="2024-11-13T08:28:35.392635707Z" level=info msg="shim disconnected" id=ad874a07aeafdfa29cd6da759fc80d73d78a74b6684983a18d77a4cd219d5de3 namespace=k8s.io Nov 13 08:28:35.392752 containerd[1479]: time="2024-11-13T08:28:35.392718890Z" level=warning msg="cleaning up after shim disconnected" id=ad874a07aeafdfa29cd6da759fc80d73d78a74b6684983a18d77a4cd219d5de3 namespace=k8s.io Nov 13 08:28:35.392752 containerd[1479]: time="2024-11-13T08:28:35.392729504Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 13 08:28:36.131146 kubelet[2667]: E1113 08:28:36.131067 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:28:36.138875 containerd[1479]: time="2024-11-13T08:28:36.138179818Z" level=info msg="CreateContainer within sandbox \"1454ad8c586660d4c58444135d005546258c6174e6486244a42b527840b7ae4f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 13 08:28:36.168519 containerd[1479]: time="2024-11-13T08:28:36.168418702Z" level=info msg="CreateContainer within sandbox \"1454ad8c586660d4c58444135d005546258c6174e6486244a42b527840b7ae4f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"665da9008a37eec205906d4c941dfddb76328c5b1cb1c7e3ebf32ce076d6b8b3\"" Nov 13 08:28:36.170682 containerd[1479]: 
time="2024-11-13T08:28:36.170071079Z" level=info msg="StartContainer for \"665da9008a37eec205906d4c941dfddb76328c5b1cb1c7e3ebf32ce076d6b8b3\"" Nov 13 08:28:36.226425 systemd[1]: Started cri-containerd-665da9008a37eec205906d4c941dfddb76328c5b1cb1c7e3ebf32ce076d6b8b3.scope - libcontainer container 665da9008a37eec205906d4c941dfddb76328c5b1cb1c7e3ebf32ce076d6b8b3. Nov 13 08:28:36.272047 systemd[1]: cri-containerd-665da9008a37eec205906d4c941dfddb76328c5b1cb1c7e3ebf32ce076d6b8b3.scope: Deactivated successfully. Nov 13 08:28:36.276926 containerd[1479]: time="2024-11-13T08:28:36.276714738Z" level=info msg="StartContainer for \"665da9008a37eec205906d4c941dfddb76328c5b1cb1c7e3ebf32ce076d6b8b3\" returns successfully" Nov 13 08:28:36.316944 containerd[1479]: time="2024-11-13T08:28:36.316807733Z" level=info msg="shim disconnected" id=665da9008a37eec205906d4c941dfddb76328c5b1cb1c7e3ebf32ce076d6b8b3 namespace=k8s.io Nov 13 08:28:36.317224 containerd[1479]: time="2024-11-13T08:28:36.316976078Z" level=warning msg="cleaning up after shim disconnected" id=665da9008a37eec205906d4c941dfddb76328c5b1cb1c7e3ebf32ce076d6b8b3 namespace=k8s.io Nov 13 08:28:36.317224 containerd[1479]: time="2024-11-13T08:28:36.316994991Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 13 08:28:36.365151 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-665da9008a37eec205906d4c941dfddb76328c5b1cb1c7e3ebf32ce076d6b8b3-rootfs.mount: Deactivated successfully. 
Nov 13 08:28:37.136678 kubelet[2667]: E1113 08:28:37.136410 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:28:37.146964 containerd[1479]: time="2024-11-13T08:28:37.146856402Z" level=info msg="CreateContainer within sandbox \"1454ad8c586660d4c58444135d005546258c6174e6486244a42b527840b7ae4f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 13 08:28:37.180007 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3170886467.mount: Deactivated successfully. Nov 13 08:28:37.187613 containerd[1479]: time="2024-11-13T08:28:37.187456254Z" level=info msg="CreateContainer within sandbox \"1454ad8c586660d4c58444135d005546258c6174e6486244a42b527840b7ae4f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"cb0d27c35ccbb062c4c0f67e0c6482f9f19e52d3e75a7d65ee0b6fa4bf06636e\"" Nov 13 08:28:37.191857 containerd[1479]: time="2024-11-13T08:28:37.191594160Z" level=info msg="StartContainer for \"cb0d27c35ccbb062c4c0f67e0c6482f9f19e52d3e75a7d65ee0b6fa4bf06636e\"" Nov 13 08:28:37.255713 systemd[1]: Started cri-containerd-cb0d27c35ccbb062c4c0f67e0c6482f9f19e52d3e75a7d65ee0b6fa4bf06636e.scope - libcontainer container cb0d27c35ccbb062c4c0f67e0c6482f9f19e52d3e75a7d65ee0b6fa4bf06636e. 
Nov 13 08:28:37.319639 containerd[1479]: time="2024-11-13T08:28:37.319404264Z" level=info msg="StartContainer for \"cb0d27c35ccbb062c4c0f67e0c6482f9f19e52d3e75a7d65ee0b6fa4bf06636e\" returns successfully" Nov 13 08:28:37.623996 kubelet[2667]: I1113 08:28:37.621435 2667 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Nov 13 08:28:37.672806 kubelet[2667]: I1113 08:28:37.672679 2667 topology_manager.go:215] "Topology Admit Handler" podUID="c70b9d7e-f879-4511-abe6-54ea38f711c0" podNamespace="kube-system" podName="coredns-7db6d8ff4d-sqqnc" Nov 13 08:28:37.689673 kubelet[2667]: I1113 08:28:37.688634 2667 topology_manager.go:215] "Topology Admit Handler" podUID="0b548c57-6584-40c9-864e-7a91b8d1d436" podNamespace="kube-system" podName="coredns-7db6d8ff4d-qmlpn" Nov 13 08:28:37.702349 systemd[1]: Created slice kubepods-burstable-podc70b9d7e_f879_4511_abe6_54ea38f711c0.slice - libcontainer container kubepods-burstable-podc70b9d7e_f879_4511_abe6_54ea38f711c0.slice. Nov 13 08:28:37.719675 kubelet[2667]: I1113 08:28:37.718520 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpxj8\" (UniqueName: \"kubernetes.io/projected/0b548c57-6584-40c9-864e-7a91b8d1d436-kube-api-access-rpxj8\") pod \"coredns-7db6d8ff4d-qmlpn\" (UID: \"0b548c57-6584-40c9-864e-7a91b8d1d436\") " pod="kube-system/coredns-7db6d8ff4d-qmlpn" Nov 13 08:28:37.719675 kubelet[2667]: I1113 08:28:37.718597 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c70b9d7e-f879-4511-abe6-54ea38f711c0-config-volume\") pod \"coredns-7db6d8ff4d-sqqnc\" (UID: \"c70b9d7e-f879-4511-abe6-54ea38f711c0\") " pod="kube-system/coredns-7db6d8ff4d-sqqnc" Nov 13 08:28:37.719675 kubelet[2667]: I1113 08:28:37.718635 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh2vw\" 
(UniqueName: \"kubernetes.io/projected/c70b9d7e-f879-4511-abe6-54ea38f711c0-kube-api-access-mh2vw\") pod \"coredns-7db6d8ff4d-sqqnc\" (UID: \"c70b9d7e-f879-4511-abe6-54ea38f711c0\") " pod="kube-system/coredns-7db6d8ff4d-sqqnc" Nov 13 08:28:37.719675 kubelet[2667]: I1113 08:28:37.718676 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b548c57-6584-40c9-864e-7a91b8d1d436-config-volume\") pod \"coredns-7db6d8ff4d-qmlpn\" (UID: \"0b548c57-6584-40c9-864e-7a91b8d1d436\") " pod="kube-system/coredns-7db6d8ff4d-qmlpn" Nov 13 08:28:37.726544 systemd[1]: Created slice kubepods-burstable-pod0b548c57_6584_40c9_864e_7a91b8d1d436.slice - libcontainer container kubepods-burstable-pod0b548c57_6584_40c9_864e_7a91b8d1d436.slice. Nov 13 08:28:38.016899 kubelet[2667]: E1113 08:28:38.015113 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:28:38.025990 containerd[1479]: time="2024-11-13T08:28:38.025874688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-sqqnc,Uid:c70b9d7e-f879-4511-abe6-54ea38f711c0,Namespace:kube-system,Attempt:0,}" Nov 13 08:28:38.031652 kubelet[2667]: E1113 08:28:38.031568 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:28:38.032966 containerd[1479]: time="2024-11-13T08:28:38.032682088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qmlpn,Uid:0b548c57-6584-40c9-864e-7a91b8d1d436,Namespace:kube-system,Attempt:0,}" Nov 13 08:28:38.245570 kubelet[2667]: E1113 08:28:38.244825 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:28:38.273620 kubelet[2667]: I1113 08:28:38.273112 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7lzv9" podStartSLOduration=6.098521875 podStartE2EDuration="19.273091711s" podCreationTimestamp="2024-11-13 08:28:19 +0000 UTC" firstStartedPulling="2024-11-13 08:28:20.107772023 +0000 UTC m=+14.365255696" lastFinishedPulling="2024-11-13 08:28:33.282341859 +0000 UTC m=+27.539825532" observedRunningTime="2024-11-13 08:28:38.271184911 +0000 UTC m=+32.528668596" watchObservedRunningTime="2024-11-13 08:28:38.273091711 +0000 UTC m=+32.530575418" Nov 13 08:28:38.575544 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2418551096.mount: Deactivated successfully. Nov 13 08:28:39.250408 kubelet[2667]: E1113 08:28:39.250339 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:28:39.468212 containerd[1479]: time="2024-11-13T08:28:39.468076554Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 08:28:39.469691 containerd[1479]: time="2024-11-13T08:28:39.469613725Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907205" Nov 13 08:28:39.471267 containerd[1479]: time="2024-11-13T08:28:39.471192817Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 08:28:39.473899 containerd[1479]: time="2024-11-13T08:28:39.473725372Z" level=info msg="Pulled image 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 6.189679566s" Nov 13 08:28:39.473899 containerd[1479]: time="2024-11-13T08:28:39.473775530Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Nov 13 08:28:39.478003 containerd[1479]: time="2024-11-13T08:28:39.477768444Z" level=info msg="CreateContainer within sandbox \"d883704de844e456c19638ce85d8a903ace61e9c5a7e1edf97792ad586714539\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Nov 13 08:28:39.539789 containerd[1479]: time="2024-11-13T08:28:39.539137247Z" level=info msg="CreateContainer within sandbox \"d883704de844e456c19638ce85d8a903ace61e9c5a7e1edf97792ad586714539\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"391274fcd83b6b4342cb3cd537d761288a4d995a1a928afbd2547825669d012e\"" Nov 13 08:28:39.542088 containerd[1479]: time="2024-11-13T08:28:39.541060327Z" level=info msg="StartContainer for \"391274fcd83b6b4342cb3cd537d761288a4d995a1a928afbd2547825669d012e\"" Nov 13 08:28:39.584386 systemd[1]: Started cri-containerd-391274fcd83b6b4342cb3cd537d761288a4d995a1a928afbd2547825669d012e.scope - libcontainer container 391274fcd83b6b4342cb3cd537d761288a4d995a1a928afbd2547825669d012e. 
Nov 13 08:28:39.625386 containerd[1479]: time="2024-11-13T08:28:39.625029083Z" level=info msg="StartContainer for \"391274fcd83b6b4342cb3cd537d761288a4d995a1a928afbd2547825669d012e\" returns successfully" Nov 13 08:28:40.257851 kubelet[2667]: E1113 08:28:40.256511 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:28:40.257851 kubelet[2667]: E1113 08:28:40.257685 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:28:41.260626 kubelet[2667]: E1113 08:28:41.260460 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:28:41.946276 systemd-networkd[1375]: cilium_host: Link UP Nov 13 08:28:41.946881 systemd-networkd[1375]: cilium_net: Link UP Nov 13 08:28:41.946885 systemd-networkd[1375]: cilium_net: Gained carrier Nov 13 08:28:41.948345 systemd-networkd[1375]: cilium_host: Gained carrier Nov 13 08:28:41.954112 systemd-networkd[1375]: cilium_net: Gained IPv6LL Nov 13 08:28:42.189764 systemd-networkd[1375]: cilium_vxlan: Link UP Nov 13 08:28:42.189783 systemd-networkd[1375]: cilium_vxlan: Gained carrier Nov 13 08:28:42.908972 kernel: NET: Registered PF_ALG protocol family Nov 13 08:28:42.937552 systemd-networkd[1375]: cilium_host: Gained IPv6LL Nov 13 08:28:43.468457 systemd[1]: Started sshd@9-209.38.128.242:22-139.178.89.65:57562.service - OpenSSH per-connection server daemon (139.178.89.65:57562). 
Nov 13 08:28:43.612717 sshd[3642]: Accepted publickey for core from 139.178.89.65 port 57562 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM Nov 13 08:28:43.612811 sshd-session[3642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 08:28:43.623496 systemd-logind[1452]: New session 10 of user core. Nov 13 08:28:43.628250 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 13 08:28:44.025281 systemd-networkd[1375]: cilium_vxlan: Gained IPv6LL Nov 13 08:28:44.335222 systemd-networkd[1375]: lxc_health: Link UP Nov 13 08:28:44.336347 systemd-networkd[1375]: lxc_health: Gained carrier Nov 13 08:28:44.510378 sshd[3715]: Connection closed by 139.178.89.65 port 57562 Nov 13 08:28:44.511551 sshd-session[3642]: pam_unix(sshd:session): session closed for user core Nov 13 08:28:44.519583 systemd-logind[1452]: Session 10 logged out. Waiting for processes to exit. Nov 13 08:28:44.519888 systemd[1]: sshd@9-209.38.128.242:22-139.178.89.65:57562.service: Deactivated successfully. Nov 13 08:28:44.526120 systemd[1]: session-10.scope: Deactivated successfully. Nov 13 08:28:44.532037 systemd-logind[1452]: Removed session 10. 
Nov 13 08:28:44.669786 systemd-networkd[1375]: lxc06cb20330852: Link UP Nov 13 08:28:44.675683 kernel: eth0: renamed from tmp1629e Nov 13 08:28:44.686396 systemd-networkd[1375]: lxc6d1fe61e4f3c: Link UP Nov 13 08:28:44.694635 kernel: eth0: renamed from tmpf2a09 Nov 13 08:28:44.689045 systemd-networkd[1375]: lxc06cb20330852: Gained carrier Nov 13 08:28:44.705461 systemd-networkd[1375]: lxc6d1fe61e4f3c: Gained carrier Nov 13 08:28:45.497216 systemd-networkd[1375]: lxc_health: Gained IPv6LL Nov 13 08:28:45.817133 systemd-networkd[1375]: lxc06cb20330852: Gained IPv6LL Nov 13 08:28:45.837934 kubelet[2667]: E1113 08:28:45.837861 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:28:45.874255 kubelet[2667]: I1113 08:28:45.874110 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-gxqdd" podStartSLOduration=7.889709128 podStartE2EDuration="26.87406388s" podCreationTimestamp="2024-11-13 08:28:19 +0000 UTC" firstStartedPulling="2024-11-13 08:28:20.491171361 +0000 UTC m=+14.748655033" lastFinishedPulling="2024-11-13 08:28:39.475526102 +0000 UTC m=+33.733009785" observedRunningTime="2024-11-13 08:28:40.351946478 +0000 UTC m=+34.609430177" watchObservedRunningTime="2024-11-13 08:28:45.87406388 +0000 UTC m=+40.131547574" Nov 13 08:28:46.277775 kubelet[2667]: E1113 08:28:46.277697 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:28:46.329132 systemd-networkd[1375]: lxc6d1fe61e4f3c: Gained IPv6LL Nov 13 08:28:47.279481 kubelet[2667]: E1113 08:28:47.279245 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 
67.207.67.3 67.207.67.2" Nov 13 08:28:49.523768 systemd[1]: Started sshd@10-209.38.128.242:22-139.178.89.65:45574.service - OpenSSH per-connection server daemon (139.178.89.65:45574). Nov 13 08:28:49.647243 sshd[3888]: Accepted publickey for core from 139.178.89.65 port 45574 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM Nov 13 08:28:49.649992 sshd-session[3888]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 08:28:49.658042 systemd-logind[1452]: New session 11 of user core. Nov 13 08:28:49.663224 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 13 08:28:49.908950 sshd[3890]: Connection closed by 139.178.89.65 port 45574 Nov 13 08:28:49.912265 sshd-session[3888]: pam_unix(sshd:session): session closed for user core Nov 13 08:28:49.918461 systemd[1]: sshd@10-209.38.128.242:22-139.178.89.65:45574.service: Deactivated successfully. Nov 13 08:28:49.922762 systemd[1]: session-11.scope: Deactivated successfully. Nov 13 08:28:49.926949 systemd-logind[1452]: Session 11 logged out. Waiting for processes to exit. Nov 13 08:28:49.929281 systemd-logind[1452]: Removed session 11. Nov 13 08:28:50.795877 containerd[1479]: time="2024-11-13T08:28:50.794107713Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 13 08:28:50.795877 containerd[1479]: time="2024-11-13T08:28:50.794250966Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 13 08:28:50.795877 containerd[1479]: time="2024-11-13T08:28:50.794294799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 08:28:50.795877 containerd[1479]: time="2024-11-13T08:28:50.794490662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 08:28:50.809091 containerd[1479]: time="2024-11-13T08:28:50.803980074Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 13 08:28:50.809091 containerd[1479]: time="2024-11-13T08:28:50.804064395Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 13 08:28:50.809091 containerd[1479]: time="2024-11-13T08:28:50.804080320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 08:28:50.809091 containerd[1479]: time="2024-11-13T08:28:50.804222396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 08:28:50.890574 systemd[1]: Started cri-containerd-f2a0994a4e26d638badfb971b4e9e2f84b432f975a9181207d58e68a7604647b.scope - libcontainer container f2a0994a4e26d638badfb971b4e9e2f84b432f975a9181207d58e68a7604647b. Nov 13 08:28:50.902232 systemd[1]: Started cri-containerd-1629edc4b08799cc0988b83e4daae522433d9ed34fbb48397d8e24dd0edaa093.scope - libcontainer container 1629edc4b08799cc0988b83e4daae522433d9ed34fbb48397d8e24dd0edaa093. 
Nov 13 08:28:51.000443 containerd[1479]: time="2024-11-13T08:28:51.000372650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qmlpn,Uid:0b548c57-6584-40c9-864e-7a91b8d1d436,Namespace:kube-system,Attempt:0,} returns sandbox id \"f2a0994a4e26d638badfb971b4e9e2f84b432f975a9181207d58e68a7604647b\"" Nov 13 08:28:51.003377 kubelet[2667]: E1113 08:28:51.002385 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:28:51.008510 containerd[1479]: time="2024-11-13T08:28:51.008400225Z" level=info msg="CreateContainer within sandbox \"f2a0994a4e26d638badfb971b4e9e2f84b432f975a9181207d58e68a7604647b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 13 08:28:51.054829 containerd[1479]: time="2024-11-13T08:28:51.054569628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-sqqnc,Uid:c70b9d7e-f879-4511-abe6-54ea38f711c0,Namespace:kube-system,Attempt:0,} returns sandbox id \"1629edc4b08799cc0988b83e4daae522433d9ed34fbb48397d8e24dd0edaa093\"" Nov 13 08:28:51.055435 containerd[1479]: time="2024-11-13T08:28:51.055294407Z" level=info msg="CreateContainer within sandbox \"f2a0994a4e26d638badfb971b4e9e2f84b432f975a9181207d58e68a7604647b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a9532f62a975d9989963d95ae774714b3d46154a7546f4b986d74b184531a609\"" Nov 13 08:28:51.057055 containerd[1479]: time="2024-11-13T08:28:51.056522532Z" level=info msg="StartContainer for \"a9532f62a975d9989963d95ae774714b3d46154a7546f4b986d74b184531a609\"" Nov 13 08:28:51.061103 kubelet[2667]: E1113 08:28:51.060622 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:28:51.074543 containerd[1479]: 
time="2024-11-13T08:28:51.074329208Z" level=info msg="CreateContainer within sandbox \"1629edc4b08799cc0988b83e4daae522433d9ed34fbb48397d8e24dd0edaa093\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 13 08:28:51.101517 containerd[1479]: time="2024-11-13T08:28:51.100472783Z" level=info msg="CreateContainer within sandbox \"1629edc4b08799cc0988b83e4daae522433d9ed34fbb48397d8e24dd0edaa093\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3cadf0a58128832fc2766d9d636cc8709d6c57a0e6994b42175ea326b7f2d5dc\"" Nov 13 08:28:51.100650 systemd[1]: Started cri-containerd-a9532f62a975d9989963d95ae774714b3d46154a7546f4b986d74b184531a609.scope - libcontainer container a9532f62a975d9989963d95ae774714b3d46154a7546f4b986d74b184531a609. Nov 13 08:28:51.104683 containerd[1479]: time="2024-11-13T08:28:51.104630189Z" level=info msg="StartContainer for \"3cadf0a58128832fc2766d9d636cc8709d6c57a0e6994b42175ea326b7f2d5dc\"" Nov 13 08:28:51.156653 containerd[1479]: time="2024-11-13T08:28:51.156504664Z" level=info msg="StartContainer for \"a9532f62a975d9989963d95ae774714b3d46154a7546f4b986d74b184531a609\" returns successfully" Nov 13 08:28:51.166801 systemd[1]: Started cri-containerd-3cadf0a58128832fc2766d9d636cc8709d6c57a0e6994b42175ea326b7f2d5dc.scope - libcontainer container 3cadf0a58128832fc2766d9d636cc8709d6c57a0e6994b42175ea326b7f2d5dc. 
Nov 13 08:28:51.230386 containerd[1479]: time="2024-11-13T08:28:51.227755623Z" level=info msg="StartContainer for \"3cadf0a58128832fc2766d9d636cc8709d6c57a0e6994b42175ea326b7f2d5dc\" returns successfully" Nov 13 08:28:51.304283 kubelet[2667]: E1113 08:28:51.304180 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:28:51.311270 kubelet[2667]: E1113 08:28:51.311014 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:28:51.335371 kubelet[2667]: I1113 08:28:51.335284 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-qmlpn" podStartSLOduration=32.335265566 podStartE2EDuration="32.335265566s" podCreationTimestamp="2024-11-13 08:28:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-13 08:28:51.332939606 +0000 UTC m=+45.590423282" watchObservedRunningTime="2024-11-13 08:28:51.335265566 +0000 UTC m=+45.592749564" Nov 13 08:28:51.372256 kubelet[2667]: I1113 08:28:51.370864 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-sqqnc" podStartSLOduration=32.37083222 podStartE2EDuration="32.37083222s" podCreationTimestamp="2024-11-13 08:28:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-13 08:28:51.367796829 +0000 UTC m=+45.625280517" watchObservedRunningTime="2024-11-13 08:28:51.37083222 +0000 UTC m=+45.628315901" Nov 13 08:28:51.809774 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4183042344.mount: Deactivated successfully. 
Nov 13 08:28:52.314630 kubelet[2667]: E1113 08:28:52.314346 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:28:52.317570 kubelet[2667]: E1113 08:28:52.316345 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:28:53.316802 kubelet[2667]: E1113 08:28:53.316660 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:28:53.316802 kubelet[2667]: E1113 08:28:53.316660 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:28:54.933603 systemd[1]: Started sshd@11-209.38.128.242:22-139.178.89.65:45588.service - OpenSSH per-connection server daemon (139.178.89.65:45588). Nov 13 08:28:55.041327 sshd[4081]: Accepted publickey for core from 139.178.89.65 port 45588 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM Nov 13 08:28:55.043304 sshd-session[4081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 08:28:55.052290 systemd-logind[1452]: New session 12 of user core. Nov 13 08:28:55.061445 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 13 08:28:55.295785 sshd[4083]: Connection closed by 139.178.89.65 port 45588 Nov 13 08:28:55.296245 sshd-session[4081]: pam_unix(sshd:session): session closed for user core Nov 13 08:28:55.301279 systemd[1]: sshd@11-209.38.128.242:22-139.178.89.65:45588.service: Deactivated successfully. Nov 13 08:28:55.304241 systemd[1]: session-12.scope: Deactivated successfully. 
Nov 13 08:28:55.305463 systemd-logind[1452]: Session 12 logged out. Waiting for processes to exit. Nov 13 08:28:55.306888 systemd-logind[1452]: Removed session 12. Nov 13 08:29:00.316498 systemd[1]: Started sshd@12-209.38.128.242:22-139.178.89.65:40430.service - OpenSSH per-connection server daemon (139.178.89.65:40430). Nov 13 08:29:00.393152 sshd[4098]: Accepted publickey for core from 139.178.89.65 port 40430 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM Nov 13 08:29:00.395012 sshd-session[4098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 08:29:00.401897 systemd-logind[1452]: New session 13 of user core. Nov 13 08:29:00.410304 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 13 08:29:00.571956 sshd[4100]: Connection closed by 139.178.89.65 port 40430 Nov 13 08:29:00.573334 sshd-session[4098]: pam_unix(sshd:session): session closed for user core Nov 13 08:29:00.583486 systemd[1]: sshd@12-209.38.128.242:22-139.178.89.65:40430.service: Deactivated successfully. Nov 13 08:29:00.586893 systemd[1]: session-13.scope: Deactivated successfully. Nov 13 08:29:00.589623 systemd-logind[1452]: Session 13 logged out. Waiting for processes to exit. Nov 13 08:29:00.595594 systemd[1]: Started sshd@13-209.38.128.242:22-139.178.89.65:40446.service - OpenSSH per-connection server daemon (139.178.89.65:40446). Nov 13 08:29:00.599368 systemd-logind[1452]: Removed session 13. Nov 13 08:29:00.669472 sshd[4112]: Accepted publickey for core from 139.178.89.65 port 40446 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM Nov 13 08:29:00.671809 sshd-session[4112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 08:29:00.678524 systemd-logind[1452]: New session 14 of user core. Nov 13 08:29:00.688413 systemd[1]: Started session-14.scope - Session 14 of User core. 
Nov 13 08:29:00.981471 sshd[4114]: Connection closed by 139.178.89.65 port 40446
Nov 13 08:29:00.986465 sshd-session[4112]: pam_unix(sshd:session): session closed for user core
Nov 13 08:29:00.999112 systemd[1]: sshd@13-209.38.128.242:22-139.178.89.65:40446.service: Deactivated successfully.
Nov 13 08:29:01.004666 systemd[1]: session-14.scope: Deactivated successfully.
Nov 13 08:29:01.009343 systemd-logind[1452]: Session 14 logged out. Waiting for processes to exit.
Nov 13 08:29:01.024162 systemd[1]: Started sshd@14-209.38.128.242:22-139.178.89.65:40460.service - OpenSSH per-connection server daemon (139.178.89.65:40460).
Nov 13 08:29:01.028988 systemd-logind[1452]: Removed session 14.
Nov 13 08:29:01.118615 sshd[4123]: Accepted publickey for core from 139.178.89.65 port 40460 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM
Nov 13 08:29:01.122078 sshd-session[4123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 13 08:29:01.131627 systemd-logind[1452]: New session 15 of user core.
Nov 13 08:29:01.136476 systemd[1]: Started session-15.scope - Session 15 of User core.
Nov 13 08:29:01.317477 sshd[4125]: Connection closed by 139.178.89.65 port 40460
Nov 13 08:29:01.319743 sshd-session[4123]: pam_unix(sshd:session): session closed for user core
Nov 13 08:29:01.323848 systemd[1]: sshd@14-209.38.128.242:22-139.178.89.65:40460.service: Deactivated successfully.
Nov 13 08:29:01.328658 systemd[1]: session-15.scope: Deactivated successfully.
Nov 13 08:29:01.331553 systemd-logind[1452]: Session 15 logged out. Waiting for processes to exit.
Nov 13 08:29:01.333368 systemd-logind[1452]: Removed session 15.
Nov 13 08:29:06.347472 systemd[1]: Started sshd@15-209.38.128.242:22-139.178.89.65:40470.service - OpenSSH per-connection server daemon (139.178.89.65:40470).
Nov 13 08:29:06.417160 sshd[4139]: Accepted publickey for core from 139.178.89.65 port 40470 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM
Nov 13 08:29:06.420599 sshd-session[4139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 13 08:29:06.428403 systemd-logind[1452]: New session 16 of user core.
Nov 13 08:29:06.438404 systemd[1]: Started session-16.scope - Session 16 of User core.
Nov 13 08:29:06.625835 sshd[4141]: Connection closed by 139.178.89.65 port 40470
Nov 13 08:29:06.628330 sshd-session[4139]: pam_unix(sshd:session): session closed for user core
Nov 13 08:29:06.637204 systemd[1]: sshd@15-209.38.128.242:22-139.178.89.65:40470.service: Deactivated successfully.
Nov 13 08:29:06.641177 systemd[1]: session-16.scope: Deactivated successfully.
Nov 13 08:29:06.642669 systemd-logind[1452]: Session 16 logged out. Waiting for processes to exit.
Nov 13 08:29:06.644777 systemd-logind[1452]: Removed session 16.
Nov 13 08:29:11.652390 systemd[1]: Started sshd@16-209.38.128.242:22-139.178.89.65:36754.service - OpenSSH per-connection server daemon (139.178.89.65:36754).
Nov 13 08:29:11.723036 sshd[4152]: Accepted publickey for core from 139.178.89.65 port 36754 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM
Nov 13 08:29:11.725428 sshd-session[4152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 13 08:29:11.732664 systemd-logind[1452]: New session 17 of user core.
Nov 13 08:29:11.743335 systemd[1]: Started session-17.scope - Session 17 of User core.
Nov 13 08:29:11.954052 sshd[4154]: Connection closed by 139.178.89.65 port 36754
Nov 13 08:29:11.957973 sshd-session[4152]: pam_unix(sshd:session): session closed for user core
Nov 13 08:29:11.976549 systemd[1]: Started sshd@17-209.38.128.242:22-139.178.89.65:36764.service - OpenSSH per-connection server daemon (139.178.89.65:36764).
Nov 13 08:29:11.977454 systemd[1]: sshd@16-209.38.128.242:22-139.178.89.65:36754.service: Deactivated successfully.
Nov 13 08:29:11.988447 systemd[1]: session-17.scope: Deactivated successfully.
Nov 13 08:29:11.993577 systemd-logind[1452]: Session 17 logged out. Waiting for processes to exit.
Nov 13 08:29:11.997799 systemd-logind[1452]: Removed session 17.
Nov 13 08:29:12.070978 sshd[4163]: Accepted publickey for core from 139.178.89.65 port 36764 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM
Nov 13 08:29:12.074531 sshd-session[4163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 13 08:29:12.083672 systemd-logind[1452]: New session 18 of user core.
Nov 13 08:29:12.090666 systemd[1]: Started session-18.scope - Session 18 of User core.
Nov 13 08:29:12.439938 sshd[4167]: Connection closed by 139.178.89.65 port 36764
Nov 13 08:29:12.441470 sshd-session[4163]: pam_unix(sshd:session): session closed for user core
Nov 13 08:29:12.453295 systemd[1]: sshd@17-209.38.128.242:22-139.178.89.65:36764.service: Deactivated successfully.
Nov 13 08:29:12.457866 systemd[1]: session-18.scope: Deactivated successfully.
Nov 13 08:29:12.459633 systemd-logind[1452]: Session 18 logged out. Waiting for processes to exit.
Nov 13 08:29:12.467509 systemd[1]: Started sshd@18-209.38.128.242:22-139.178.89.65:36780.service - OpenSSH per-connection server daemon (139.178.89.65:36780).
Nov 13 08:29:12.471474 systemd-logind[1452]: Removed session 18.
Nov 13 08:29:12.547563 sshd[4177]: Accepted publickey for core from 139.178.89.65 port 36780 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM
Nov 13 08:29:12.549873 sshd-session[4177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 13 08:29:12.558886 systemd-logind[1452]: New session 19 of user core.
Nov 13 08:29:12.567251 systemd[1]: Started session-19.scope - Session 19 of User core.
Nov 13 08:29:14.692622 sshd[4179]: Connection closed by 139.178.89.65 port 36780
Nov 13 08:29:14.693336 sshd-session[4177]: pam_unix(sshd:session): session closed for user core
Nov 13 08:29:14.716414 systemd[1]: sshd@18-209.38.128.242:22-139.178.89.65:36780.service: Deactivated successfully.
Nov 13 08:29:14.718895 systemd[1]: session-19.scope: Deactivated successfully.
Nov 13 08:29:14.722738 systemd-logind[1452]: Session 19 logged out. Waiting for processes to exit.
Nov 13 08:29:14.736416 systemd[1]: Started sshd@19-209.38.128.242:22-139.178.89.65:36784.service - OpenSSH per-connection server daemon (139.178.89.65:36784).
Nov 13 08:29:14.737518 systemd-logind[1452]: Removed session 19.
Nov 13 08:29:14.845202 sshd[4195]: Accepted publickey for core from 139.178.89.65 port 36784 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM
Nov 13 08:29:14.847482 sshd-session[4195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 13 08:29:14.858543 systemd-logind[1452]: New session 20 of user core.
Nov 13 08:29:14.863316 systemd[1]: Started session-20.scope - Session 20 of User core.
Nov 13 08:29:15.502231 sshd[4205]: Connection closed by 139.178.89.65 port 36784
Nov 13 08:29:15.504195 sshd-session[4195]: pam_unix(sshd:session): session closed for user core
Nov 13 08:29:15.516144 systemd[1]: sshd@19-209.38.128.242:22-139.178.89.65:36784.service: Deactivated successfully.
Nov 13 08:29:15.522097 systemd[1]: session-20.scope: Deactivated successfully.
Nov 13 08:29:15.524354 systemd-logind[1452]: Session 20 logged out. Waiting for processes to exit.
Nov 13 08:29:15.534684 systemd[1]: Started sshd@20-209.38.128.242:22-139.178.89.65:36790.service - OpenSSH per-connection server daemon (139.178.89.65:36790).
Nov 13 08:29:15.537212 systemd-logind[1452]: Removed session 20.
Nov 13 08:29:15.601667 sshd[4213]: Accepted publickey for core from 139.178.89.65 port 36790 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM
Nov 13 08:29:15.604995 sshd-session[4213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 13 08:29:15.614053 systemd-logind[1452]: New session 21 of user core.
Nov 13 08:29:15.620324 systemd[1]: Started session-21.scope - Session 21 of User core.
Nov 13 08:29:15.793283 sshd[4215]: Connection closed by 139.178.89.65 port 36790
Nov 13 08:29:15.794496 sshd-session[4213]: pam_unix(sshd:session): session closed for user core
Nov 13 08:29:15.798736 systemd[1]: sshd@20-209.38.128.242:22-139.178.89.65:36790.service: Deactivated successfully.
Nov 13 08:29:15.803846 systemd[1]: session-21.scope: Deactivated successfully.
Nov 13 08:29:15.811071 systemd-logind[1452]: Session 21 logged out. Waiting for processes to exit.
Nov 13 08:29:15.815619 systemd-logind[1452]: Removed session 21.
Nov 13 08:29:20.826716 systemd[1]: Started sshd@21-209.38.128.242:22-139.178.89.65:45058.service - OpenSSH per-connection server daemon (139.178.89.65:45058).
Nov 13 08:29:20.892974 sshd[4231]: Accepted publickey for core from 139.178.89.65 port 45058 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM
Nov 13 08:29:20.895145 sshd-session[4231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 13 08:29:20.903401 systemd-logind[1452]: New session 22 of user core.
Nov 13 08:29:20.914473 systemd[1]: Started session-22.scope - Session 22 of User core.
Nov 13 08:29:21.073413 sshd[4233]: Connection closed by 139.178.89.65 port 45058
Nov 13 08:29:21.073241 sshd-session[4231]: pam_unix(sshd:session): session closed for user core
Nov 13 08:29:21.078838 systemd-logind[1452]: Session 22 logged out. Waiting for processes to exit.
Nov 13 08:29:21.079356 systemd[1]: sshd@21-209.38.128.242:22-139.178.89.65:45058.service: Deactivated successfully.
Nov 13 08:29:21.083956 systemd[1]: session-22.scope: Deactivated successfully.
Nov 13 08:29:21.087874 systemd-logind[1452]: Removed session 22.
Nov 13 08:29:22.956244 kubelet[2667]: E1113 08:29:22.956168 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 13 08:29:26.092364 systemd[1]: Started sshd@22-209.38.128.242:22-139.178.89.65:45070.service - OpenSSH per-connection server daemon (139.178.89.65:45070).
Nov 13 08:29:26.179610 sshd[4244]: Accepted publickey for core from 139.178.89.65 port 45070 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM
Nov 13 08:29:26.182367 sshd-session[4244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 13 08:29:26.192348 systemd-logind[1452]: New session 23 of user core.
Nov 13 08:29:26.195164 systemd[1]: Started session-23.scope - Session 23 of User core.
Nov 13 08:29:26.382248 sshd[4246]: Connection closed by 139.178.89.65 port 45070
Nov 13 08:29:26.385848 sshd-session[4244]: pam_unix(sshd:session): session closed for user core
Nov 13 08:29:26.392925 systemd[1]: sshd@22-209.38.128.242:22-139.178.89.65:45070.service: Deactivated successfully.
Nov 13 08:29:26.398080 systemd[1]: session-23.scope: Deactivated successfully.
Nov 13 08:29:26.399507 systemd-logind[1452]: Session 23 logged out. Waiting for processes to exit.
Nov 13 08:29:26.401513 systemd-logind[1452]: Removed session 23.
Nov 13 08:29:26.955837 kubelet[2667]: E1113 08:29:26.955711 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 13 08:29:31.412896 systemd[1]: Started sshd@23-209.38.128.242:22-139.178.89.65:58024.service - OpenSSH per-connection server daemon (139.178.89.65:58024).
Nov 13 08:29:31.533142 sshd[4257]: Accepted publickey for core from 139.178.89.65 port 58024 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM
Nov 13 08:29:31.536244 sshd-session[4257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 13 08:29:31.547486 systemd-logind[1452]: New session 24 of user core.
Nov 13 08:29:31.554408 systemd[1]: Started session-24.scope - Session 24 of User core.
Nov 13 08:29:31.785468 sshd[4259]: Connection closed by 139.178.89.65 port 58024
Nov 13 08:29:31.788126 sshd-session[4257]: pam_unix(sshd:session): session closed for user core
Nov 13 08:29:31.799978 systemd[1]: sshd@23-209.38.128.242:22-139.178.89.65:58024.service: Deactivated successfully.
Nov 13 08:29:31.804163 systemd[1]: session-24.scope: Deactivated successfully.
Nov 13 08:29:31.809969 systemd-logind[1452]: Session 24 logged out. Waiting for processes to exit.
Nov 13 08:29:31.818722 systemd[1]: Started sshd@24-209.38.128.242:22-139.178.89.65:58026.service - OpenSSH per-connection server daemon (139.178.89.65:58026).
Nov 13 08:29:31.821509 systemd-logind[1452]: Removed session 24.
Nov 13 08:29:31.944278 sshd[4270]: Accepted publickey for core from 139.178.89.65 port 58026 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM
Nov 13 08:29:31.946857 sshd-session[4270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 13 08:29:31.954235 systemd-logind[1452]: New session 25 of user core.
Nov 13 08:29:31.961599 systemd[1]: Started session-25.scope - Session 25 of User core.
Nov 13 08:29:34.015905 containerd[1479]: time="2024-11-13T08:29:34.014986171Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 13 08:29:34.043924 containerd[1479]: time="2024-11-13T08:29:34.043792259Z" level=info msg="StopContainer for \"cb0d27c35ccbb062c4c0f67e0c6482f9f19e52d3e75a7d65ee0b6fa4bf06636e\" with timeout 2 (s)"
Nov 13 08:29:34.045807 containerd[1479]: time="2024-11-13T08:29:34.045661245Z" level=info msg="StopContainer for \"391274fcd83b6b4342cb3cd537d761288a4d995a1a928afbd2547825669d012e\" with timeout 30 (s)"
Nov 13 08:29:34.048561 containerd[1479]: time="2024-11-13T08:29:34.048506896Z" level=info msg="Stop container \"cb0d27c35ccbb062c4c0f67e0c6482f9f19e52d3e75a7d65ee0b6fa4bf06636e\" with signal terminated"
Nov 13 08:29:34.049719 containerd[1479]: time="2024-11-13T08:29:34.048893242Z" level=info msg="Stop container \"391274fcd83b6b4342cb3cd537d761288a4d995a1a928afbd2547825669d012e\" with signal terminated"
Nov 13 08:29:34.063751 systemd[1]: cri-containerd-391274fcd83b6b4342cb3cd537d761288a4d995a1a928afbd2547825669d012e.scope: Deactivated successfully.
Nov 13 08:29:34.074498 systemd-networkd[1375]: lxc_health: Link DOWN
Nov 13 08:29:34.074513 systemd-networkd[1375]: lxc_health: Lost carrier
Nov 13 08:29:34.109706 systemd[1]: cri-containerd-cb0d27c35ccbb062c4c0f67e0c6482f9f19e52d3e75a7d65ee0b6fa4bf06636e.scope: Deactivated successfully.
Nov 13 08:29:34.110290 systemd[1]: cri-containerd-cb0d27c35ccbb062c4c0f67e0c6482f9f19e52d3e75a7d65ee0b6fa4bf06636e.scope: Consumed 10.467s CPU time.
Nov 13 08:29:34.154535 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-391274fcd83b6b4342cb3cd537d761288a4d995a1a928afbd2547825669d012e-rootfs.mount: Deactivated successfully.
Nov 13 08:29:34.172822 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb0d27c35ccbb062c4c0f67e0c6482f9f19e52d3e75a7d65ee0b6fa4bf06636e-rootfs.mount: Deactivated successfully.
Nov 13 08:29:34.180337 containerd[1479]: time="2024-11-13T08:29:34.180001895Z" level=info msg="shim disconnected" id=cb0d27c35ccbb062c4c0f67e0c6482f9f19e52d3e75a7d65ee0b6fa4bf06636e namespace=k8s.io
Nov 13 08:29:34.180337 containerd[1479]: time="2024-11-13T08:29:34.180102124Z" level=warning msg="cleaning up after shim disconnected" id=cb0d27c35ccbb062c4c0f67e0c6482f9f19e52d3e75a7d65ee0b6fa4bf06636e namespace=k8s.io
Nov 13 08:29:34.180337 containerd[1479]: time="2024-11-13T08:29:34.180116791Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 13 08:29:34.181736 containerd[1479]: time="2024-11-13T08:29:34.181525770Z" level=info msg="shim disconnected" id=391274fcd83b6b4342cb3cd537d761288a4d995a1a928afbd2547825669d012e namespace=k8s.io
Nov 13 08:29:34.181736 containerd[1479]: time="2024-11-13T08:29:34.181629286Z" level=warning msg="cleaning up after shim disconnected" id=391274fcd83b6b4342cb3cd537d761288a4d995a1a928afbd2547825669d012e namespace=k8s.io
Nov 13 08:29:34.181736 containerd[1479]: time="2024-11-13T08:29:34.181659229Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 13 08:29:34.211104 containerd[1479]: time="2024-11-13T08:29:34.211024401Z" level=warning msg="cleanup warnings time=\"2024-11-13T08:29:34Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Nov 13 08:29:34.214838 containerd[1479]: time="2024-11-13T08:29:34.214512750Z" level=info msg="StopContainer for \"391274fcd83b6b4342cb3cd537d761288a4d995a1a928afbd2547825669d012e\" returns successfully"
Nov 13 08:29:34.215889 containerd[1479]: time="2024-11-13T08:29:34.215724824Z" level=info msg="StopPodSandbox for \"d883704de844e456c19638ce85d8a903ace61e9c5a7e1edf97792ad586714539\""
Nov 13 08:29:34.216268 containerd[1479]: time="2024-11-13T08:29:34.215792155Z" level=info msg="Container to stop \"391274fcd83b6b4342cb3cd537d761288a4d995a1a928afbd2547825669d012e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 13 08:29:34.216268 containerd[1479]: time="2024-11-13T08:29:34.216038508Z" level=info msg="StopContainer for \"cb0d27c35ccbb062c4c0f67e0c6482f9f19e52d3e75a7d65ee0b6fa4bf06636e\" returns successfully"
Nov 13 08:29:34.219886 containerd[1479]: time="2024-11-13T08:29:34.216675931Z" level=info msg="StopPodSandbox for \"1454ad8c586660d4c58444135d005546258c6174e6486244a42b527840b7ae4f\""
Nov 13 08:29:34.219886 containerd[1479]: time="2024-11-13T08:29:34.216727729Z" level=info msg="Container to stop \"d80a5dfb3d14cca5024412851115b9612fac38147f8cfc942578fee389051279\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 13 08:29:34.219886 containerd[1479]: time="2024-11-13T08:29:34.216774416Z" level=info msg="Container to stop \"c303de109d2a7703b7f6b24e682eb1b5316cbe05c002639a41f83d256119a1ba\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 13 08:29:34.219886 containerd[1479]: time="2024-11-13T08:29:34.216790251Z" level=info msg="Container to stop \"ad874a07aeafdfa29cd6da759fc80d73d78a74b6684983a18d77a4cd219d5de3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 13 08:29:34.219886 containerd[1479]: time="2024-11-13T08:29:34.216819783Z" level=info msg="Container to stop \"665da9008a37eec205906d4c941dfddb76328c5b1cb1c7e3ebf32ce076d6b8b3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 13 08:29:34.219886 containerd[1479]: time="2024-11-13T08:29:34.216834204Z" level=info msg="Container to stop \"cb0d27c35ccbb062c4c0f67e0c6482f9f19e52d3e75a7d65ee0b6fa4bf06636e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 13 08:29:34.219326 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d883704de844e456c19638ce85d8a903ace61e9c5a7e1edf97792ad586714539-shm.mount: Deactivated successfully.
Nov 13 08:29:34.225880 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1454ad8c586660d4c58444135d005546258c6174e6486244a42b527840b7ae4f-shm.mount: Deactivated successfully.
Nov 13 08:29:34.247199 systemd[1]: cri-containerd-1454ad8c586660d4c58444135d005546258c6174e6486244a42b527840b7ae4f.scope: Deactivated successfully.
Nov 13 08:29:34.260007 systemd[1]: cri-containerd-d883704de844e456c19638ce85d8a903ace61e9c5a7e1edf97792ad586714539.scope: Deactivated successfully.
Nov 13 08:29:34.307534 containerd[1479]: time="2024-11-13T08:29:34.307326257Z" level=info msg="shim disconnected" id=1454ad8c586660d4c58444135d005546258c6174e6486244a42b527840b7ae4f namespace=k8s.io
Nov 13 08:29:34.308444 containerd[1479]: time="2024-11-13T08:29:34.308088732Z" level=warning msg="cleaning up after shim disconnected" id=1454ad8c586660d4c58444135d005546258c6174e6486244a42b527840b7ae4f namespace=k8s.io
Nov 13 08:29:34.308444 containerd[1479]: time="2024-11-13T08:29:34.308137319Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 13 08:29:34.317614 containerd[1479]: time="2024-11-13T08:29:34.317497265Z" level=info msg="shim disconnected" id=d883704de844e456c19638ce85d8a903ace61e9c5a7e1edf97792ad586714539 namespace=k8s.io
Nov 13 08:29:34.317614 containerd[1479]: time="2024-11-13T08:29:34.317603525Z" level=warning msg="cleaning up after shim disconnected" id=d883704de844e456c19638ce85d8a903ace61e9c5a7e1edf97792ad586714539 namespace=k8s.io
Nov 13 08:29:34.317614 containerd[1479]: time="2024-11-13T08:29:34.317621800Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 13 08:29:34.344961 containerd[1479]: time="2024-11-13T08:29:34.344849165Z" level=info msg="TearDown network for sandbox \"1454ad8c586660d4c58444135d005546258c6174e6486244a42b527840b7ae4f\" successfully"
Nov 13 08:29:34.344961 containerd[1479]: time="2024-11-13T08:29:34.344893963Z" level=info msg="StopPodSandbox for \"1454ad8c586660d4c58444135d005546258c6174e6486244a42b527840b7ae4f\" returns successfully"
Nov 13 08:29:34.357317 containerd[1479]: time="2024-11-13T08:29:34.357251621Z" level=info msg="TearDown network for sandbox \"d883704de844e456c19638ce85d8a903ace61e9c5a7e1edf97792ad586714539\" successfully"
Nov 13 08:29:34.358288 containerd[1479]: time="2024-11-13T08:29:34.357636889Z" level=info msg="StopPodSandbox for \"d883704de844e456c19638ce85d8a903ace61e9c5a7e1edf97792ad586714539\" returns successfully"
Nov 13 08:29:34.430603 kubelet[2667]: I1113 08:29:34.429026 2667 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/15ba1b0d-69b7-450f-9421-04bef59857dc-cilium-run\") pod \"15ba1b0d-69b7-450f-9421-04bef59857dc\" (UID: \"15ba1b0d-69b7-450f-9421-04bef59857dc\") "
Nov 13 08:29:34.430603 kubelet[2667]: I1113 08:29:34.429118 2667 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/15ba1b0d-69b7-450f-9421-04bef59857dc-hostproc\") pod \"15ba1b0d-69b7-450f-9421-04bef59857dc\" (UID: \"15ba1b0d-69b7-450f-9421-04bef59857dc\") "
Nov 13 08:29:34.430603 kubelet[2667]: I1113 08:29:34.429153 2667 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/15ba1b0d-69b7-450f-9421-04bef59857dc-bpf-maps\") pod \"15ba1b0d-69b7-450f-9421-04bef59857dc\" (UID: \"15ba1b0d-69b7-450f-9421-04bef59857dc\") "
Nov 13 08:29:34.430603 kubelet[2667]: I1113 08:29:34.429196 2667 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/15ba1b0d-69b7-450f-9421-04bef59857dc-cilium-config-path\") pod \"15ba1b0d-69b7-450f-9421-04bef59857dc\" (UID: \"15ba1b0d-69b7-450f-9421-04bef59857dc\") "
Nov 13 08:29:34.430603 kubelet[2667]: I1113 08:29:34.429291 2667 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/15ba1b0d-69b7-450f-9421-04bef59857dc-etc-cni-netd\") pod \"15ba1b0d-69b7-450f-9421-04bef59857dc\" (UID: \"15ba1b0d-69b7-450f-9421-04bef59857dc\") "
Nov 13 08:29:34.430603 kubelet[2667]: I1113 08:29:34.429324 2667 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/15ba1b0d-69b7-450f-9421-04bef59857dc-cilium-cgroup\") pod \"15ba1b0d-69b7-450f-9421-04bef59857dc\" (UID: \"15ba1b0d-69b7-450f-9421-04bef59857dc\") "
Nov 13 08:29:34.431523 kubelet[2667]: I1113 08:29:34.429363 2667 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/15ba1b0d-69b7-450f-9421-04bef59857dc-clustermesh-secrets\") pod \"15ba1b0d-69b7-450f-9421-04bef59857dc\" (UID: \"15ba1b0d-69b7-450f-9421-04bef59857dc\") "
Nov 13 08:29:34.431523 kubelet[2667]: I1113 08:29:34.429390 2667 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/15ba1b0d-69b7-450f-9421-04bef59857dc-host-proc-sys-kernel\") pod \"15ba1b0d-69b7-450f-9421-04bef59857dc\" (UID: \"15ba1b0d-69b7-450f-9421-04bef59857dc\") "
Nov 13 08:29:34.431523 kubelet[2667]: I1113 08:29:34.429419 2667 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/15ba1b0d-69b7-450f-9421-04bef59857dc-host-proc-sys-net\") pod \"15ba1b0d-69b7-450f-9421-04bef59857dc\" (UID: \"15ba1b0d-69b7-450f-9421-04bef59857dc\") "
Nov 13 08:29:34.431523 kubelet[2667]: I1113 08:29:34.429571 2667 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/15ba1b0d-69b7-450f-9421-04bef59857dc-lib-modules\") pod \"15ba1b0d-69b7-450f-9421-04bef59857dc\" (UID: \"15ba1b0d-69b7-450f-9421-04bef59857dc\") "
Nov 13 08:29:34.431523 kubelet[2667]: I1113 08:29:34.429607 2667 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/15ba1b0d-69b7-450f-9421-04bef59857dc-xtables-lock\") pod \"15ba1b0d-69b7-450f-9421-04bef59857dc\" (UID: \"15ba1b0d-69b7-450f-9421-04bef59857dc\") "
Nov 13 08:29:34.431523 kubelet[2667]: I1113 08:29:34.429643 2667 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zwgw\" (UniqueName: \"kubernetes.io/projected/15ba1b0d-69b7-450f-9421-04bef59857dc-kube-api-access-2zwgw\") pod \"15ba1b0d-69b7-450f-9421-04bef59857dc\" (UID: \"15ba1b0d-69b7-450f-9421-04bef59857dc\") "
Nov 13 08:29:34.431895 kubelet[2667]: I1113 08:29:34.429672 2667 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/15ba1b0d-69b7-450f-9421-04bef59857dc-cni-path\") pod \"15ba1b0d-69b7-450f-9421-04bef59857dc\" (UID: \"15ba1b0d-69b7-450f-9421-04bef59857dc\") "
Nov 13 08:29:34.431895 kubelet[2667]: I1113 08:29:34.429703 2667 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/15ba1b0d-69b7-450f-9421-04bef59857dc-hubble-tls\") pod \"15ba1b0d-69b7-450f-9421-04bef59857dc\" (UID: \"15ba1b0d-69b7-450f-9421-04bef59857dc\") "
Nov 13 08:29:34.436095 kubelet[2667]: I1113 08:29:34.435987 2667 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15ba1b0d-69b7-450f-9421-04bef59857dc-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "15ba1b0d-69b7-450f-9421-04bef59857dc" (UID: "15ba1b0d-69b7-450f-9421-04bef59857dc"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 13 08:29:34.436095 kubelet[2667]: I1113 08:29:34.436106 2667 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15ba1b0d-69b7-450f-9421-04bef59857dc-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "15ba1b0d-69b7-450f-9421-04bef59857dc" (UID: "15ba1b0d-69b7-450f-9421-04bef59857dc"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 13 08:29:34.437004 kubelet[2667]: I1113 08:29:34.436140 2667 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15ba1b0d-69b7-450f-9421-04bef59857dc-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "15ba1b0d-69b7-450f-9421-04bef59857dc" (UID: "15ba1b0d-69b7-450f-9421-04bef59857dc"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 13 08:29:34.437004 kubelet[2667]: I1113 08:29:34.436165 2667 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15ba1b0d-69b7-450f-9421-04bef59857dc-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "15ba1b0d-69b7-450f-9421-04bef59857dc" (UID: "15ba1b0d-69b7-450f-9421-04bef59857dc"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 13 08:29:34.437004 kubelet[2667]: I1113 08:29:34.435574 2667 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15ba1b0d-69b7-450f-9421-04bef59857dc-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "15ba1b0d-69b7-450f-9421-04bef59857dc" (UID: "15ba1b0d-69b7-450f-9421-04bef59857dc"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 13 08:29:34.437004 kubelet[2667]: I1113 08:29:34.436695 2667 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15ba1b0d-69b7-450f-9421-04bef59857dc-hostproc" (OuterVolumeSpecName: "hostproc") pod "15ba1b0d-69b7-450f-9421-04bef59857dc" (UID: "15ba1b0d-69b7-450f-9421-04bef59857dc"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 13 08:29:34.437004 kubelet[2667]: I1113 08:29:34.436740 2667 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15ba1b0d-69b7-450f-9421-04bef59857dc-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "15ba1b0d-69b7-450f-9421-04bef59857dc" (UID: "15ba1b0d-69b7-450f-9421-04bef59857dc"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 13 08:29:34.437933 kubelet[2667]: I1113 08:29:34.437628 2667 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15ba1b0d-69b7-450f-9421-04bef59857dc-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "15ba1b0d-69b7-450f-9421-04bef59857dc" (UID: "15ba1b0d-69b7-450f-9421-04bef59857dc"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 13 08:29:34.438760 kubelet[2667]: I1113 08:29:34.437813 2667 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15ba1b0d-69b7-450f-9421-04bef59857dc-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "15ba1b0d-69b7-450f-9421-04bef59857dc" (UID: "15ba1b0d-69b7-450f-9421-04bef59857dc"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 13 08:29:34.438760 kubelet[2667]: I1113 08:29:34.438693 2667 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15ba1b0d-69b7-450f-9421-04bef59857dc-cni-path" (OuterVolumeSpecName: "cni-path") pod "15ba1b0d-69b7-450f-9421-04bef59857dc" (UID: "15ba1b0d-69b7-450f-9421-04bef59857dc"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 13 08:29:34.446114 kubelet[2667]: I1113 08:29:34.446035 2667 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15ba1b0d-69b7-450f-9421-04bef59857dc-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "15ba1b0d-69b7-450f-9421-04bef59857dc" (UID: "15ba1b0d-69b7-450f-9421-04bef59857dc"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 13 08:29:34.446953 kubelet[2667]: I1113 08:29:34.446847 2667 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15ba1b0d-69b7-450f-9421-04bef59857dc-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "15ba1b0d-69b7-450f-9421-04bef59857dc" (UID: "15ba1b0d-69b7-450f-9421-04bef59857dc"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 13 08:29:34.451828 kubelet[2667]: I1113 08:29:34.451682 2667 scope.go:117] "RemoveContainer" containerID="391274fcd83b6b4342cb3cd537d761288a4d995a1a928afbd2547825669d012e"
Nov 13 08:29:34.455787 kubelet[2667]: I1113 08:29:34.455551 2667 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15ba1b0d-69b7-450f-9421-04bef59857dc-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "15ba1b0d-69b7-450f-9421-04bef59857dc" (UID: "15ba1b0d-69b7-450f-9421-04bef59857dc"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 13 08:29:34.460467 kubelet[2667]: I1113 08:29:34.460118 2667 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15ba1b0d-69b7-450f-9421-04bef59857dc-kube-api-access-2zwgw" (OuterVolumeSpecName: "kube-api-access-2zwgw") pod "15ba1b0d-69b7-450f-9421-04bef59857dc" (UID: "15ba1b0d-69b7-450f-9421-04bef59857dc"). InnerVolumeSpecName "kube-api-access-2zwgw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 13 08:29:34.469175 containerd[1479]: time="2024-11-13T08:29:34.469060597Z" level=info msg="RemoveContainer for \"391274fcd83b6b4342cb3cd537d761288a4d995a1a928afbd2547825669d012e\""
Nov 13 08:29:34.487035 containerd[1479]: time="2024-11-13T08:29:34.486839935Z" level=info msg="RemoveContainer for \"391274fcd83b6b4342cb3cd537d761288a4d995a1a928afbd2547825669d012e\" returns successfully"
Nov 13 08:29:34.487711 kubelet[2667]: I1113 08:29:34.487471 2667 scope.go:117] "RemoveContainer" containerID="391274fcd83b6b4342cb3cd537d761288a4d995a1a928afbd2547825669d012e"
Nov 13 08:29:34.488232 containerd[1479]: time="2024-11-13T08:29:34.488107375Z" level=error msg="ContainerStatus for \"391274fcd83b6b4342cb3cd537d761288a4d995a1a928afbd2547825669d012e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"391274fcd83b6b4342cb3cd537d761288a4d995a1a928afbd2547825669d012e\": not found"
Nov 13 08:29:34.489140 kubelet[2667]: E1113 08:29:34.488655 2667 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"391274fcd83b6b4342cb3cd537d761288a4d995a1a928afbd2547825669d012e\": not found" containerID="391274fcd83b6b4342cb3cd537d761288a4d995a1a928afbd2547825669d012e"
Nov 13 08:29:34.497541 kubelet[2667]: I1113 08:29:34.488720 2667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"391274fcd83b6b4342cb3cd537d761288a4d995a1a928afbd2547825669d012e"} err="failed to get container status \"391274fcd83b6b4342cb3cd537d761288a4d995a1a928afbd2547825669d012e\": rpc error: code = NotFound desc = an error occurred when try to find container \"391274fcd83b6b4342cb3cd537d761288a4d995a1a928afbd2547825669d012e\": not found"
Nov 13 08:29:34.498630 kubelet[2667]: I1113 08:29:34.498050 2667 scope.go:117] "RemoveContainer" containerID="cb0d27c35ccbb062c4c0f67e0c6482f9f19e52d3e75a7d65ee0b6fa4bf06636e"
Nov 13 08:29:34.502258 systemd[1]: Removed slice kubepods-burstable-pod15ba1b0d_69b7_450f_9421_04bef59857dc.slice - libcontainer container kubepods-burstable-pod15ba1b0d_69b7_450f_9421_04bef59857dc.slice.
Nov 13 08:29:34.503211 systemd[1]: kubepods-burstable-pod15ba1b0d_69b7_450f_9421_04bef59857dc.slice: Consumed 10.603s CPU time.
Nov 13 08:29:34.505593 containerd[1479]: time="2024-11-13T08:29:34.505476280Z" level=info msg="RemoveContainer for \"cb0d27c35ccbb062c4c0f67e0c6482f9f19e52d3e75a7d65ee0b6fa4bf06636e\""
Nov 13 08:29:34.511364 containerd[1479]: time="2024-11-13T08:29:34.511277313Z" level=info msg="RemoveContainer for \"cb0d27c35ccbb062c4c0f67e0c6482f9f19e52d3e75a7d65ee0b6fa4bf06636e\" returns successfully"
Nov 13 08:29:34.512084 kubelet[2667]: I1113 08:29:34.512041 2667 scope.go:117] "RemoveContainer" containerID="665da9008a37eec205906d4c941dfddb76328c5b1cb1c7e3ebf32ce076d6b8b3"
Nov 13 08:29:34.516453 containerd[1479]: time="2024-11-13T08:29:34.515246555Z" level=info msg="RemoveContainer for \"665da9008a37eec205906d4c941dfddb76328c5b1cb1c7e3ebf32ce076d6b8b3\""
Nov 13 08:29:34.530353 containerd[1479]: time="2024-11-13T08:29:34.530168711Z" level=info msg="RemoveContainer for \"665da9008a37eec205906d4c941dfddb76328c5b1cb1c7e3ebf32ce076d6b8b3\" returns successfully"
Nov 13 08:29:34.530584 kubelet[2667]: I1113 08:29:34.530465 2667 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume
\"kube-api-access-m9jfd\" (UniqueName: \"kubernetes.io/projected/19e53ac0-5b84-459e-bba2-e0a2e93677d4-kube-api-access-m9jfd\") pod \"19e53ac0-5b84-459e-bba2-e0a2e93677d4\" (UID: \"19e53ac0-5b84-459e-bba2-e0a2e93677d4\") " Nov 13 08:29:34.530584 kubelet[2667]: I1113 08:29:34.530530 2667 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/19e53ac0-5b84-459e-bba2-e0a2e93677d4-cilium-config-path\") pod \"19e53ac0-5b84-459e-bba2-e0a2e93677d4\" (UID: \"19e53ac0-5b84-459e-bba2-e0a2e93677d4\") " Nov 13 08:29:34.530713 kubelet[2667]: I1113 08:29:34.530591 2667 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/15ba1b0d-69b7-450f-9421-04bef59857dc-clustermesh-secrets\") on node \"ci-4152.0.0-d-03c8fd271e\" DevicePath \"\"" Nov 13 08:29:34.530713 kubelet[2667]: I1113 08:29:34.530612 2667 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/15ba1b0d-69b7-450f-9421-04bef59857dc-host-proc-sys-kernel\") on node \"ci-4152.0.0-d-03c8fd271e\" DevicePath \"\"" Nov 13 08:29:34.530713 kubelet[2667]: I1113 08:29:34.530632 2667 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/15ba1b0d-69b7-450f-9421-04bef59857dc-host-proc-sys-net\") on node \"ci-4152.0.0-d-03c8fd271e\" DevicePath \"\"" Nov 13 08:29:34.533213 kubelet[2667]: I1113 08:29:34.532413 2667 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/15ba1b0d-69b7-450f-9421-04bef59857dc-lib-modules\") on node \"ci-4152.0.0-d-03c8fd271e\" DevicePath \"\"" Nov 13 08:29:34.533213 kubelet[2667]: I1113 08:29:34.532582 2667 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/15ba1b0d-69b7-450f-9421-04bef59857dc-xtables-lock\") on node 
\"ci-4152.0.0-d-03c8fd271e\" DevicePath \"\"" Nov 13 08:29:34.533213 kubelet[2667]: I1113 08:29:34.532606 2667 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-2zwgw\" (UniqueName: \"kubernetes.io/projected/15ba1b0d-69b7-450f-9421-04bef59857dc-kube-api-access-2zwgw\") on node \"ci-4152.0.0-d-03c8fd271e\" DevicePath \"\"" Nov 13 08:29:34.533213 kubelet[2667]: I1113 08:29:34.532623 2667 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/15ba1b0d-69b7-450f-9421-04bef59857dc-cni-path\") on node \"ci-4152.0.0-d-03c8fd271e\" DevicePath \"\"" Nov 13 08:29:34.533213 kubelet[2667]: I1113 08:29:34.532639 2667 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/15ba1b0d-69b7-450f-9421-04bef59857dc-hubble-tls\") on node \"ci-4152.0.0-d-03c8fd271e\" DevicePath \"\"" Nov 13 08:29:34.533213 kubelet[2667]: I1113 08:29:34.532661 2667 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/15ba1b0d-69b7-450f-9421-04bef59857dc-cilium-run\") on node \"ci-4152.0.0-d-03c8fd271e\" DevicePath \"\"" Nov 13 08:29:34.533213 kubelet[2667]: I1113 08:29:34.532679 2667 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/15ba1b0d-69b7-450f-9421-04bef59857dc-hostproc\") on node \"ci-4152.0.0-d-03c8fd271e\" DevicePath \"\"" Nov 13 08:29:34.533213 kubelet[2667]: I1113 08:29:34.532694 2667 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/15ba1b0d-69b7-450f-9421-04bef59857dc-bpf-maps\") on node \"ci-4152.0.0-d-03c8fd271e\" DevicePath \"\"" Nov 13 08:29:34.533633 kubelet[2667]: I1113 08:29:34.532729 2667 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/15ba1b0d-69b7-450f-9421-04bef59857dc-cilium-config-path\") on node 
\"ci-4152.0.0-d-03c8fd271e\" DevicePath \"\"" Nov 13 08:29:34.533633 kubelet[2667]: I1113 08:29:34.532744 2667 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/15ba1b0d-69b7-450f-9421-04bef59857dc-etc-cni-netd\") on node \"ci-4152.0.0-d-03c8fd271e\" DevicePath \"\"" Nov 13 08:29:34.533633 kubelet[2667]: I1113 08:29:34.532764 2667 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/15ba1b0d-69b7-450f-9421-04bef59857dc-cilium-cgroup\") on node \"ci-4152.0.0-d-03c8fd271e\" DevicePath \"\"" Nov 13 08:29:34.540963 kubelet[2667]: I1113 08:29:34.539824 2667 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19e53ac0-5b84-459e-bba2-e0a2e93677d4-kube-api-access-m9jfd" (OuterVolumeSpecName: "kube-api-access-m9jfd") pod "19e53ac0-5b84-459e-bba2-e0a2e93677d4" (UID: "19e53ac0-5b84-459e-bba2-e0a2e93677d4"). InnerVolumeSpecName "kube-api-access-m9jfd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 13 08:29:34.540963 kubelet[2667]: I1113 08:29:34.540479 2667 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19e53ac0-5b84-459e-bba2-e0a2e93677d4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "19e53ac0-5b84-459e-bba2-e0a2e93677d4" (UID: "19e53ac0-5b84-459e-bba2-e0a2e93677d4"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 13 08:29:34.540963 kubelet[2667]: I1113 08:29:34.540723 2667 scope.go:117] "RemoveContainer" containerID="ad874a07aeafdfa29cd6da759fc80d73d78a74b6684983a18d77a4cd219d5de3" Nov 13 08:29:34.543801 containerd[1479]: time="2024-11-13T08:29:34.543744756Z" level=info msg="RemoveContainer for \"ad874a07aeafdfa29cd6da759fc80d73d78a74b6684983a18d77a4cd219d5de3\"" Nov 13 08:29:34.552797 containerd[1479]: time="2024-11-13T08:29:34.552725073Z" level=info msg="RemoveContainer for \"ad874a07aeafdfa29cd6da759fc80d73d78a74b6684983a18d77a4cd219d5de3\" returns successfully" Nov 13 08:29:34.553561 kubelet[2667]: I1113 08:29:34.553466 2667 scope.go:117] "RemoveContainer" containerID="c303de109d2a7703b7f6b24e682eb1b5316cbe05c002639a41f83d256119a1ba" Nov 13 08:29:34.556998 containerd[1479]: time="2024-11-13T08:29:34.556488791Z" level=info msg="RemoveContainer for \"c303de109d2a7703b7f6b24e682eb1b5316cbe05c002639a41f83d256119a1ba\"" Nov 13 08:29:34.566156 containerd[1479]: time="2024-11-13T08:29:34.565895318Z" level=info msg="RemoveContainer for \"c303de109d2a7703b7f6b24e682eb1b5316cbe05c002639a41f83d256119a1ba\" returns successfully" Nov 13 08:29:34.567643 kubelet[2667]: I1113 08:29:34.567551 2667 scope.go:117] "RemoveContainer" containerID="d80a5dfb3d14cca5024412851115b9612fac38147f8cfc942578fee389051279" Nov 13 08:29:34.569970 containerd[1479]: time="2024-11-13T08:29:34.569800207Z" level=info msg="RemoveContainer for \"d80a5dfb3d14cca5024412851115b9612fac38147f8cfc942578fee389051279\"" Nov 13 08:29:34.574957 containerd[1479]: time="2024-11-13T08:29:34.574815145Z" level=info msg="RemoveContainer for \"d80a5dfb3d14cca5024412851115b9612fac38147f8cfc942578fee389051279\" returns successfully" Nov 13 08:29:34.575251 kubelet[2667]: I1113 08:29:34.575230 2667 scope.go:117] "RemoveContainer" containerID="cb0d27c35ccbb062c4c0f67e0c6482f9f19e52d3e75a7d65ee0b6fa4bf06636e" Nov 13 08:29:34.576047 containerd[1479]: 
time="2024-11-13T08:29:34.575981441Z" level=error msg="ContainerStatus for \"cb0d27c35ccbb062c4c0f67e0c6482f9f19e52d3e75a7d65ee0b6fa4bf06636e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cb0d27c35ccbb062c4c0f67e0c6482f9f19e52d3e75a7d65ee0b6fa4bf06636e\": not found" Nov 13 08:29:34.576496 kubelet[2667]: E1113 08:29:34.576437 2667 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cb0d27c35ccbb062c4c0f67e0c6482f9f19e52d3e75a7d65ee0b6fa4bf06636e\": not found" containerID="cb0d27c35ccbb062c4c0f67e0c6482f9f19e52d3e75a7d65ee0b6fa4bf06636e" Nov 13 08:29:34.576593 kubelet[2667]: I1113 08:29:34.576483 2667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cb0d27c35ccbb062c4c0f67e0c6482f9f19e52d3e75a7d65ee0b6fa4bf06636e"} err="failed to get container status \"cb0d27c35ccbb062c4c0f67e0c6482f9f19e52d3e75a7d65ee0b6fa4bf06636e\": rpc error: code = NotFound desc = an error occurred when try to find container \"cb0d27c35ccbb062c4c0f67e0c6482f9f19e52d3e75a7d65ee0b6fa4bf06636e\": not found" Nov 13 08:29:34.576593 kubelet[2667]: I1113 08:29:34.576545 2667 scope.go:117] "RemoveContainer" containerID="665da9008a37eec205906d4c941dfddb76328c5b1cb1c7e3ebf32ce076d6b8b3" Nov 13 08:29:34.576893 containerd[1479]: time="2024-11-13T08:29:34.576811927Z" level=error msg="ContainerStatus for \"665da9008a37eec205906d4c941dfddb76328c5b1cb1c7e3ebf32ce076d6b8b3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"665da9008a37eec205906d4c941dfddb76328c5b1cb1c7e3ebf32ce076d6b8b3\": not found" Nov 13 08:29:34.577363 kubelet[2667]: E1113 08:29:34.577049 2667 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"665da9008a37eec205906d4c941dfddb76328c5b1cb1c7e3ebf32ce076d6b8b3\": not found" containerID="665da9008a37eec205906d4c941dfddb76328c5b1cb1c7e3ebf32ce076d6b8b3" Nov 13 08:29:34.577363 kubelet[2667]: I1113 08:29:34.577083 2667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"665da9008a37eec205906d4c941dfddb76328c5b1cb1c7e3ebf32ce076d6b8b3"} err="failed to get container status \"665da9008a37eec205906d4c941dfddb76328c5b1cb1c7e3ebf32ce076d6b8b3\": rpc error: code = NotFound desc = an error occurred when try to find container \"665da9008a37eec205906d4c941dfddb76328c5b1cb1c7e3ebf32ce076d6b8b3\": not found" Nov 13 08:29:34.577363 kubelet[2667]: I1113 08:29:34.577111 2667 scope.go:117] "RemoveContainer" containerID="ad874a07aeafdfa29cd6da759fc80d73d78a74b6684983a18d77a4cd219d5de3" Nov 13 08:29:34.578001 containerd[1479]: time="2024-11-13T08:29:34.577653670Z" level=error msg="ContainerStatus for \"ad874a07aeafdfa29cd6da759fc80d73d78a74b6684983a18d77a4cd219d5de3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ad874a07aeafdfa29cd6da759fc80d73d78a74b6684983a18d77a4cd219d5de3\": not found" Nov 13 08:29:34.578106 kubelet[2667]: E1113 08:29:34.578076 2667 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ad874a07aeafdfa29cd6da759fc80d73d78a74b6684983a18d77a4cd219d5de3\": not found" containerID="ad874a07aeafdfa29cd6da759fc80d73d78a74b6684983a18d77a4cd219d5de3" Nov 13 08:29:34.578172 kubelet[2667]: I1113 08:29:34.578114 2667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ad874a07aeafdfa29cd6da759fc80d73d78a74b6684983a18d77a4cd219d5de3"} err="failed to get container status \"ad874a07aeafdfa29cd6da759fc80d73d78a74b6684983a18d77a4cd219d5de3\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"ad874a07aeafdfa29cd6da759fc80d73d78a74b6684983a18d77a4cd219d5de3\": not found" Nov 13 08:29:34.578172 kubelet[2667]: I1113 08:29:34.578150 2667 scope.go:117] "RemoveContainer" containerID="c303de109d2a7703b7f6b24e682eb1b5316cbe05c002639a41f83d256119a1ba" Nov 13 08:29:34.578850 containerd[1479]: time="2024-11-13T08:29:34.578574849Z" level=error msg="ContainerStatus for \"c303de109d2a7703b7f6b24e682eb1b5316cbe05c002639a41f83d256119a1ba\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c303de109d2a7703b7f6b24e682eb1b5316cbe05c002639a41f83d256119a1ba\": not found" Nov 13 08:29:34.579449 kubelet[2667]: E1113 08:29:34.579062 2667 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c303de109d2a7703b7f6b24e682eb1b5316cbe05c002639a41f83d256119a1ba\": not found" containerID="c303de109d2a7703b7f6b24e682eb1b5316cbe05c002639a41f83d256119a1ba" Nov 13 08:29:34.579449 kubelet[2667]: I1113 08:29:34.579111 2667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c303de109d2a7703b7f6b24e682eb1b5316cbe05c002639a41f83d256119a1ba"} err="failed to get container status \"c303de109d2a7703b7f6b24e682eb1b5316cbe05c002639a41f83d256119a1ba\": rpc error: code = NotFound desc = an error occurred when try to find container \"c303de109d2a7703b7f6b24e682eb1b5316cbe05c002639a41f83d256119a1ba\": not found" Nov 13 08:29:34.579449 kubelet[2667]: I1113 08:29:34.579142 2667 scope.go:117] "RemoveContainer" containerID="d80a5dfb3d14cca5024412851115b9612fac38147f8cfc942578fee389051279" Nov 13 08:29:34.580026 containerd[1479]: time="2024-11-13T08:29:34.579885905Z" level=error msg="ContainerStatus for \"d80a5dfb3d14cca5024412851115b9612fac38147f8cfc942578fee389051279\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"d80a5dfb3d14cca5024412851115b9612fac38147f8cfc942578fee389051279\": not found" Nov 13 08:29:34.580192 kubelet[2667]: E1113 08:29:34.580153 2667 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d80a5dfb3d14cca5024412851115b9612fac38147f8cfc942578fee389051279\": not found" containerID="d80a5dfb3d14cca5024412851115b9612fac38147f8cfc942578fee389051279" Nov 13 08:29:34.580257 kubelet[2667]: I1113 08:29:34.580199 2667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d80a5dfb3d14cca5024412851115b9612fac38147f8cfc942578fee389051279"} err="failed to get container status \"d80a5dfb3d14cca5024412851115b9612fac38147f8cfc942578fee389051279\": rpc error: code = NotFound desc = an error occurred when try to find container \"d80a5dfb3d14cca5024412851115b9612fac38147f8cfc942578fee389051279\": not found" Nov 13 08:29:34.633796 kubelet[2667]: I1113 08:29:34.633682 2667 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-m9jfd\" (UniqueName: \"kubernetes.io/projected/19e53ac0-5b84-459e-bba2-e0a2e93677d4-kube-api-access-m9jfd\") on node \"ci-4152.0.0-d-03c8fd271e\" DevicePath \"\"" Nov 13 08:29:34.633796 kubelet[2667]: I1113 08:29:34.633741 2667 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/19e53ac0-5b84-459e-bba2-e0a2e93677d4-cilium-config-path\") on node \"ci-4152.0.0-d-03c8fd271e\" DevicePath \"\"" Nov 13 08:29:34.758891 systemd[1]: Removed slice kubepods-besteffort-pod19e53ac0_5b84_459e_bba2_e0a2e93677d4.slice - libcontainer container kubepods-besteffort-pod19e53ac0_5b84_459e_bba2_e0a2e93677d4.slice. Nov 13 08:29:34.987437 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d883704de844e456c19638ce85d8a903ace61e9c5a7e1edf97792ad586714539-rootfs.mount: Deactivated successfully. 
Nov 13 08:29:34.987619 systemd[1]: var-lib-kubelet-pods-19e53ac0\x2d5b84\x2d459e\x2dbba2\x2de0a2e93677d4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dm9jfd.mount: Deactivated successfully. Nov 13 08:29:34.987714 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1454ad8c586660d4c58444135d005546258c6174e6486244a42b527840b7ae4f-rootfs.mount: Deactivated successfully. Nov 13 08:29:34.987810 systemd[1]: var-lib-kubelet-pods-15ba1b0d\x2d69b7\x2d450f\x2d9421\x2d04bef59857dc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2zwgw.mount: Deactivated successfully. Nov 13 08:29:34.987899 systemd[1]: var-lib-kubelet-pods-15ba1b0d\x2d69b7\x2d450f\x2d9421\x2d04bef59857dc-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Nov 13 08:29:34.988557 systemd[1]: var-lib-kubelet-pods-15ba1b0d\x2d69b7\x2d450f\x2d9421\x2d04bef59857dc-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Nov 13 08:29:35.835551 sshd[4272]: Connection closed by 139.178.89.65 port 58026 Nov 13 08:29:35.837348 sshd-session[4270]: pam_unix(sshd:session): session closed for user core Nov 13 08:29:35.848943 systemd[1]: sshd@24-209.38.128.242:22-139.178.89.65:58026.service: Deactivated successfully. Nov 13 08:29:35.854409 systemd[1]: session-25.scope: Deactivated successfully. Nov 13 08:29:35.855061 systemd[1]: session-25.scope: Consumed 1.136s CPU time. Nov 13 08:29:35.856315 systemd-logind[1452]: Session 25 logged out. Waiting for processes to exit. Nov 13 08:29:35.866746 systemd[1]: Started sshd@25-209.38.128.242:22-139.178.89.65:58042.service - OpenSSH per-connection server daemon (139.178.89.65:58042). Nov 13 08:29:35.872569 systemd-logind[1452]: Removed session 25. 
Nov 13 08:29:35.947461 sshd[4429]: Accepted publickey for core from 139.178.89.65 port 58042 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM Nov 13 08:29:35.950126 sshd-session[4429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 08:29:35.963097 systemd-logind[1452]: New session 26 of user core. Nov 13 08:29:35.967549 kubelet[2667]: I1113 08:29:35.967482 2667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15ba1b0d-69b7-450f-9421-04bef59857dc" path="/var/lib/kubelet/pods/15ba1b0d-69b7-450f-9421-04bef59857dc/volumes" Nov 13 08:29:35.971344 kubelet[2667]: I1113 08:29:35.968754 2667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19e53ac0-5b84-459e-bba2-e0a2e93677d4" path="/var/lib/kubelet/pods/19e53ac0-5b84-459e-bba2-e0a2e93677d4/volumes" Nov 13 08:29:35.971309 systemd[1]: Started session-26.scope - Session 26 of User core. Nov 13 08:29:36.257117 kubelet[2667]: E1113 08:29:36.256980 2667 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 13 08:29:37.022480 sshd[4431]: Connection closed by 139.178.89.65 port 58042 Nov 13 08:29:37.023469 sshd-session[4429]: pam_unix(sshd:session): session closed for user core Nov 13 08:29:37.044538 systemd[1]: sshd@25-209.38.128.242:22-139.178.89.65:58042.service: Deactivated successfully. Nov 13 08:29:37.053653 systemd[1]: session-26.scope: Deactivated successfully. Nov 13 08:29:37.065295 systemd-logind[1452]: Session 26 logged out. Waiting for processes to exit. Nov 13 08:29:37.075543 systemd[1]: Started sshd@26-209.38.128.242:22-139.178.89.65:41862.service - OpenSSH per-connection server daemon (139.178.89.65:41862). Nov 13 08:29:37.080569 systemd-logind[1452]: Removed session 26. 
Nov 13 08:29:37.127004 kubelet[2667]: I1113 08:29:37.125850 2667 topology_manager.go:215] "Topology Admit Handler" podUID="3a4e38d7-5d26-4631-864d-5225e1e95ea6" podNamespace="kube-system" podName="cilium-gqm7b" Nov 13 08:29:37.129039 kubelet[2667]: E1113 08:29:37.128635 2667 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="19e53ac0-5b84-459e-bba2-e0a2e93677d4" containerName="cilium-operator" Nov 13 08:29:37.129039 kubelet[2667]: E1113 08:29:37.128708 2667 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="15ba1b0d-69b7-450f-9421-04bef59857dc" containerName="mount-bpf-fs" Nov 13 08:29:37.129039 kubelet[2667]: E1113 08:29:37.128723 2667 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="15ba1b0d-69b7-450f-9421-04bef59857dc" containerName="clean-cilium-state" Nov 13 08:29:37.129039 kubelet[2667]: E1113 08:29:37.128739 2667 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="15ba1b0d-69b7-450f-9421-04bef59857dc" containerName="cilium-agent" Nov 13 08:29:37.129039 kubelet[2667]: E1113 08:29:37.128750 2667 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="15ba1b0d-69b7-450f-9421-04bef59857dc" containerName="mount-cgroup" Nov 13 08:29:37.129039 kubelet[2667]: E1113 08:29:37.128759 2667 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="15ba1b0d-69b7-450f-9421-04bef59857dc" containerName="apply-sysctl-overwrites" Nov 13 08:29:37.129039 kubelet[2667]: I1113 08:29:37.128832 2667 memory_manager.go:354] "RemoveStaleState removing state" podUID="15ba1b0d-69b7-450f-9421-04bef59857dc" containerName="cilium-agent" Nov 13 08:29:37.129039 kubelet[2667]: I1113 08:29:37.128843 2667 memory_manager.go:354] "RemoveStaleState removing state" podUID="19e53ac0-5b84-459e-bba2-e0a2e93677d4" containerName="cilium-operator" Nov 13 08:29:37.188348 systemd[1]: Created slice kubepods-burstable-pod3a4e38d7_5d26_4631_864d_5225e1e95ea6.slice - libcontainer container 
kubepods-burstable-pod3a4e38d7_5d26_4631_864d_5225e1e95ea6.slice. Nov 13 08:29:37.225797 sshd[4441]: Accepted publickey for core from 139.178.89.65 port 41862 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM Nov 13 08:29:37.230875 sshd-session[4441]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 08:29:37.247140 systemd-logind[1452]: New session 27 of user core. Nov 13 08:29:37.253411 systemd[1]: Started session-27.scope - Session 27 of User core. Nov 13 08:29:37.268193 kubelet[2667]: I1113 08:29:37.265639 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3a4e38d7-5d26-4631-864d-5225e1e95ea6-cilium-ipsec-secrets\") pod \"cilium-gqm7b\" (UID: \"3a4e38d7-5d26-4631-864d-5225e1e95ea6\") " pod="kube-system/cilium-gqm7b" Nov 13 08:29:37.268193 kubelet[2667]: I1113 08:29:37.265719 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3a4e38d7-5d26-4631-864d-5225e1e95ea6-cilium-run\") pod \"cilium-gqm7b\" (UID: \"3a4e38d7-5d26-4631-864d-5225e1e95ea6\") " pod="kube-system/cilium-gqm7b" Nov 13 08:29:37.268193 kubelet[2667]: I1113 08:29:37.265761 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3a4e38d7-5d26-4631-864d-5225e1e95ea6-hostproc\") pod \"cilium-gqm7b\" (UID: \"3a4e38d7-5d26-4631-864d-5225e1e95ea6\") " pod="kube-system/cilium-gqm7b" Nov 13 08:29:37.268193 kubelet[2667]: I1113 08:29:37.265788 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3a4e38d7-5d26-4631-864d-5225e1e95ea6-cni-path\") pod \"cilium-gqm7b\" (UID: \"3a4e38d7-5d26-4631-864d-5225e1e95ea6\") " pod="kube-system/cilium-gqm7b" Nov 13 
08:29:37.268193 kubelet[2667]: I1113 08:29:37.265818 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3a4e38d7-5d26-4631-864d-5225e1e95ea6-bpf-maps\") pod \"cilium-gqm7b\" (UID: \"3a4e38d7-5d26-4631-864d-5225e1e95ea6\") " pod="kube-system/cilium-gqm7b" Nov 13 08:29:37.268193 kubelet[2667]: I1113 08:29:37.265847 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3a4e38d7-5d26-4631-864d-5225e1e95ea6-cilium-cgroup\") pod \"cilium-gqm7b\" (UID: \"3a4e38d7-5d26-4631-864d-5225e1e95ea6\") " pod="kube-system/cilium-gqm7b" Nov 13 08:29:37.268704 kubelet[2667]: I1113 08:29:37.265876 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3a4e38d7-5d26-4631-864d-5225e1e95ea6-etc-cni-netd\") pod \"cilium-gqm7b\" (UID: \"3a4e38d7-5d26-4631-864d-5225e1e95ea6\") " pod="kube-system/cilium-gqm7b" Nov 13 08:29:37.268704 kubelet[2667]: I1113 08:29:37.266129 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3a4e38d7-5d26-4631-864d-5225e1e95ea6-lib-modules\") pod \"cilium-gqm7b\" (UID: \"3a4e38d7-5d26-4631-864d-5225e1e95ea6\") " pod="kube-system/cilium-gqm7b" Nov 13 08:29:37.268704 kubelet[2667]: I1113 08:29:37.266177 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3a4e38d7-5d26-4631-864d-5225e1e95ea6-xtables-lock\") pod \"cilium-gqm7b\" (UID: \"3a4e38d7-5d26-4631-864d-5225e1e95ea6\") " pod="kube-system/cilium-gqm7b" Nov 13 08:29:37.268704 kubelet[2667]: I1113 08:29:37.266328 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3a4e38d7-5d26-4631-864d-5225e1e95ea6-host-proc-sys-kernel\") pod \"cilium-gqm7b\" (UID: \"3a4e38d7-5d26-4631-864d-5225e1e95ea6\") " pod="kube-system/cilium-gqm7b"
Nov 13 08:29:37.268704 kubelet[2667]: I1113 08:29:37.266407 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hz2lf\" (UniqueName: \"kubernetes.io/projected/3a4e38d7-5d26-4631-864d-5225e1e95ea6-kube-api-access-hz2lf\") pod \"cilium-gqm7b\" (UID: \"3a4e38d7-5d26-4631-864d-5225e1e95ea6\") " pod="kube-system/cilium-gqm7b"
Nov 13 08:29:37.268967 kubelet[2667]: I1113 08:29:37.266462 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3a4e38d7-5d26-4631-864d-5225e1e95ea6-clustermesh-secrets\") pod \"cilium-gqm7b\" (UID: \"3a4e38d7-5d26-4631-864d-5225e1e95ea6\") " pod="kube-system/cilium-gqm7b"
Nov 13 08:29:37.268967 kubelet[2667]: I1113 08:29:37.266509 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3a4e38d7-5d26-4631-864d-5225e1e95ea6-host-proc-sys-net\") pod \"cilium-gqm7b\" (UID: \"3a4e38d7-5d26-4631-864d-5225e1e95ea6\") " pod="kube-system/cilium-gqm7b"
Nov 13 08:29:37.268967 kubelet[2667]: I1113 08:29:37.266546 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3a4e38d7-5d26-4631-864d-5225e1e95ea6-hubble-tls\") pod \"cilium-gqm7b\" (UID: \"3a4e38d7-5d26-4631-864d-5225e1e95ea6\") " pod="kube-system/cilium-gqm7b"
Nov 13 08:29:37.268967 kubelet[2667]: I1113 08:29:37.266599 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3a4e38d7-5d26-4631-864d-5225e1e95ea6-cilium-config-path\") pod \"cilium-gqm7b\" (UID: \"3a4e38d7-5d26-4631-864d-5225e1e95ea6\") " pod="kube-system/cilium-gqm7b"
Nov 13 08:29:37.325132 sshd[4443]: Connection closed by 139.178.89.65 port 41862
Nov 13 08:29:37.324803 sshd-session[4441]: pam_unix(sshd:session): session closed for user core
Nov 13 08:29:37.347383 systemd[1]: sshd@26-209.38.128.242:22-139.178.89.65:41862.service: Deactivated successfully.
Nov 13 08:29:37.351677 systemd[1]: session-27.scope: Deactivated successfully.
Nov 13 08:29:37.353857 systemd-logind[1452]: Session 27 logged out. Waiting for processes to exit.
Nov 13 08:29:37.362769 systemd[1]: Started sshd@27-209.38.128.242:22-139.178.89.65:41872.service - OpenSSH per-connection server daemon (139.178.89.65:41872).
Nov 13 08:29:37.367892 systemd-logind[1452]: Removed session 27.
Nov 13 08:29:37.491802 sshd[4449]: Accepted publickey for core from 139.178.89.65 port 41872 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM
Nov 13 08:29:37.493746 sshd-session[4449]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 13 08:29:37.498202 kubelet[2667]: E1113 08:29:37.497710 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 13 08:29:37.501449 containerd[1479]: time="2024-11-13T08:29:37.501251214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gqm7b,Uid:3a4e38d7-5d26-4631-864d-5225e1e95ea6,Namespace:kube-system,Attempt:0,}"
Nov 13 08:29:37.504807 systemd-logind[1452]: New session 28 of user core.
Nov 13 08:29:37.512314 systemd[1]: Started session-28.scope - Session 28 of User core.
Nov 13 08:29:37.567250 containerd[1479]: time="2024-11-13T08:29:37.566763384Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 13 08:29:37.567250 containerd[1479]: time="2024-11-13T08:29:37.566881381Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 13 08:29:37.567250 containerd[1479]: time="2024-11-13T08:29:37.566907899Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 13 08:29:37.568301 containerd[1479]: time="2024-11-13T08:29:37.567846740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 13 08:29:37.621718 systemd[1]: Started cri-containerd-8387bccbac6ea40b5f07c1b5816677b0ee0c515e600ee3a44fe79930b5038455.scope - libcontainer container 8387bccbac6ea40b5f07c1b5816677b0ee0c515e600ee3a44fe79930b5038455.
Nov 13 08:29:37.690791 containerd[1479]: time="2024-11-13T08:29:37.689930129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gqm7b,Uid:3a4e38d7-5d26-4631-864d-5225e1e95ea6,Namespace:kube-system,Attempt:0,} returns sandbox id \"8387bccbac6ea40b5f07c1b5816677b0ee0c515e600ee3a44fe79930b5038455\""
Nov 13 08:29:37.692351 kubelet[2667]: E1113 08:29:37.691862 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 13 08:29:37.705685 containerd[1479]: time="2024-11-13T08:29:37.703809550Z" level=info msg="CreateContainer within sandbox \"8387bccbac6ea40b5f07c1b5816677b0ee0c515e600ee3a44fe79930b5038455\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Nov 13 08:29:37.743267 containerd[1479]: time="2024-11-13T08:29:37.743168301Z" level=info msg="CreateContainer within sandbox \"8387bccbac6ea40b5f07c1b5816677b0ee0c515e600ee3a44fe79930b5038455\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e3829eef4d8c4f1e50a34a06cec786a8c49ecaf6ef526f8a3bcfca5cacbc8581\""
Nov 13 08:29:37.749015 containerd[1479]: time="2024-11-13T08:29:37.745067338Z" level=info msg="StartContainer for \"e3829eef4d8c4f1e50a34a06cec786a8c49ecaf6ef526f8a3bcfca5cacbc8581\""
Nov 13 08:29:37.812447 systemd[1]: Started cri-containerd-e3829eef4d8c4f1e50a34a06cec786a8c49ecaf6ef526f8a3bcfca5cacbc8581.scope - libcontainer container e3829eef4d8c4f1e50a34a06cec786a8c49ecaf6ef526f8a3bcfca5cacbc8581.
Nov 13 08:29:37.895448 containerd[1479]: time="2024-11-13T08:29:37.895363043Z" level=info msg="StartContainer for \"e3829eef4d8c4f1e50a34a06cec786a8c49ecaf6ef526f8a3bcfca5cacbc8581\" returns successfully"
Nov 13 08:29:37.914555 kubelet[2667]: I1113 08:29:37.914470 2667 setters.go:580] "Node became not ready" node="ci-4152.0.0-d-03c8fd271e" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-11-13T08:29:37Z","lastTransitionTime":"2024-11-13T08:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Nov 13 08:29:37.922656 systemd[1]: cri-containerd-e3829eef4d8c4f1e50a34a06cec786a8c49ecaf6ef526f8a3bcfca5cacbc8581.scope: Deactivated successfully.
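The repeated `dns.go:153] "Nameserver limits exceeded"` errors above come from kubelet trimming a pod's effective resolv.conf down to the three-nameserver limit that glibc honors; the log shows the applied line was cut to "67.207.67.2 67.207.67.3 67.207.67.2". A minimal Python sketch of that truncation, under the assumption that the limit is 3 (the function name and the four-entry input list are illustrative, not kubelet's actual code):

```python
# Illustrative sketch of the nameserver truncation kubelet's dns.go warns
# about. Assumption: the limit of 3 matches glibc's resolv.conf MAXNS and
# kubelet's MaxDNSNameservers; trim_nameservers is a hypothetical helper.
MAX_NAMESERVERS = 3

def trim_nameservers(nameservers):
    """Return the list that would actually be applied, plus a flag saying
    whether the limit was exceeded (which is when kubelet logs the error)."""
    exceeded = len(nameservers) > MAX_NAMESERVERS
    return nameservers[:MAX_NAMESERVERS], exceeded

# The node-level list here is a guess at what produced the applied line
# "67.207.67.2 67.207.67.3 67.207.67.2" seen in the log.
applied, exceeded = trim_nameservers(
    ["67.207.67.2", "67.207.67.3", "67.207.67.2", "67.207.67.3"])
```

Anything beyond the first three entries is silently dropped, which is why the same warning recurs on every DNS config sync for this node.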
Nov 13 08:29:38.007668 containerd[1479]: time="2024-11-13T08:29:38.007555788Z" level=info msg="shim disconnected" id=e3829eef4d8c4f1e50a34a06cec786a8c49ecaf6ef526f8a3bcfca5cacbc8581 namespace=k8s.io
Nov 13 08:29:38.007668 containerd[1479]: time="2024-11-13T08:29:38.007645851Z" level=warning msg="cleaning up after shim disconnected" id=e3829eef4d8c4f1e50a34a06cec786a8c49ecaf6ef526f8a3bcfca5cacbc8581 namespace=k8s.io
Nov 13 08:29:38.007668 containerd[1479]: time="2024-11-13T08:29:38.007660698Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 13 08:29:38.482574 kubelet[2667]: E1113 08:29:38.482360 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 13 08:29:38.490139 containerd[1479]: time="2024-11-13T08:29:38.489578813Z" level=info msg="CreateContainer within sandbox \"8387bccbac6ea40b5f07c1b5816677b0ee0c515e600ee3a44fe79930b5038455\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Nov 13 08:29:38.523821 containerd[1479]: time="2024-11-13T08:29:38.522604895Z" level=info msg="CreateContainer within sandbox \"8387bccbac6ea40b5f07c1b5816677b0ee0c515e600ee3a44fe79930b5038455\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9f9487aa3786b8ce2f848d550405880b8fdcabdb0b5a82f0ab43c2c6940712e4\""
Nov 13 08:29:38.526879 containerd[1479]: time="2024-11-13T08:29:38.525141976Z" level=info msg="StartContainer for \"9f9487aa3786b8ce2f848d550405880b8fdcabdb0b5a82f0ab43c2c6940712e4\""
Nov 13 08:29:38.583361 systemd[1]: Started cri-containerd-9f9487aa3786b8ce2f848d550405880b8fdcabdb0b5a82f0ab43c2c6940712e4.scope - libcontainer container 9f9487aa3786b8ce2f848d550405880b8fdcabdb0b5a82f0ab43c2c6940712e4.
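Each of Cilium's short-lived init containers above follows the same containerd pattern: `StartContainer … returns successfully`, the scope deactivates, then a `shim disconnected` record with the 64-hex-digit container id. A small Python sketch for pulling those ids out of journal lines like these (the regex is tuned to this log's layout, not a general journald parser):

```python
import re

# Extract container ids from containerd "shim disconnected" journal lines.
# containerd container ids are 64 lowercase hex digits.
SHIM_RE = re.compile(r'msg="shim disconnected" id=([0-9a-f]{64})')

def shim_ids(lines):
    """Return the set of container ids whose shims disconnected."""
    ids = set()
    for line in lines:
        m = SHIM_RE.search(line)
        if m:
            ids.add(m.group(1))
    return ids

# A sample line copied from the log above (escapes removed).
sample = ('time="2024-11-13T08:29:38.007555788Z" level=info '
          'msg="shim disconnected" '
          'id=e3829eef4d8c4f1e50a34a06cec786a8c49ecaf6ef526f8a3bcfca5cacbc8581 '
          'namespace=k8s.io')
```

For init containers this disconnect is expected: the container runs to completion, so the shim exit is part of normal cleanup, not a crash.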
Nov 13 08:29:38.634886 containerd[1479]: time="2024-11-13T08:29:38.634293003Z" level=info msg="StartContainer for \"9f9487aa3786b8ce2f848d550405880b8fdcabdb0b5a82f0ab43c2c6940712e4\" returns successfully"
Nov 13 08:29:38.647409 systemd[1]: cri-containerd-9f9487aa3786b8ce2f848d550405880b8fdcabdb0b5a82f0ab43c2c6940712e4.scope: Deactivated successfully.
Nov 13 08:29:38.696573 containerd[1479]: time="2024-11-13T08:29:38.696462159Z" level=info msg="shim disconnected" id=9f9487aa3786b8ce2f848d550405880b8fdcabdb0b5a82f0ab43c2c6940712e4 namespace=k8s.io
Nov 13 08:29:38.696963 containerd[1479]: time="2024-11-13T08:29:38.696704337Z" level=warning msg="cleaning up after shim disconnected" id=9f9487aa3786b8ce2f848d550405880b8fdcabdb0b5a82f0ab43c2c6940712e4 namespace=k8s.io
Nov 13 08:29:38.696963 containerd[1479]: time="2024-11-13T08:29:38.696721863Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 13 08:29:39.390153 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9f9487aa3786b8ce2f848d550405880b8fdcabdb0b5a82f0ab43c2c6940712e4-rootfs.mount: Deactivated successfully.
Nov 13 08:29:39.491421 kubelet[2667]: E1113 08:29:39.489226 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 13 08:29:39.496244 containerd[1479]: time="2024-11-13T08:29:39.496148562Z" level=info msg="CreateContainer within sandbox \"8387bccbac6ea40b5f07c1b5816677b0ee0c515e600ee3a44fe79930b5038455\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Nov 13 08:29:39.541204 containerd[1479]: time="2024-11-13T08:29:39.541122929Z" level=info msg="CreateContainer within sandbox \"8387bccbac6ea40b5f07c1b5816677b0ee0c515e600ee3a44fe79930b5038455\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5b04231255e456c97cf486c80753f5e2fbee176dcd37c49f787c382fb1048478\""
Nov 13 08:29:39.545086 containerd[1479]: time="2024-11-13T08:29:39.542999615Z" level=info msg="StartContainer for \"5b04231255e456c97cf486c80753f5e2fbee176dcd37c49f787c382fb1048478\""
Nov 13 08:29:39.606385 systemd[1]: Started cri-containerd-5b04231255e456c97cf486c80753f5e2fbee176dcd37c49f787c382fb1048478.scope - libcontainer container 5b04231255e456c97cf486c80753f5e2fbee176dcd37c49f787c382fb1048478.
Nov 13 08:29:39.667225 containerd[1479]: time="2024-11-13T08:29:39.667038123Z" level=info msg="StartContainer for \"5b04231255e456c97cf486c80753f5e2fbee176dcd37c49f787c382fb1048478\" returns successfully"
Nov 13 08:29:39.674142 systemd[1]: cri-containerd-5b04231255e456c97cf486c80753f5e2fbee176dcd37c49f787c382fb1048478.scope: Deactivated successfully.
Nov 13 08:29:39.714721 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5b04231255e456c97cf486c80753f5e2fbee176dcd37c49f787c382fb1048478-rootfs.mount: Deactivated successfully.
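The `mount-bpf-fs` init container that runs here ensures the BPF filesystem is mounted at /sys/fs/bpf, which the Cilium agent needs for pinning eBPF maps. A hedged sketch of the check involved, written against a /proc/mounts-style string so it is testable without a live node (the function is illustrative, not Cilium's actual implementation):

```python
# Sketch of what a mount-bpf-fs style check verifies: that a filesystem of
# type "bpf" is mounted at /sys/fs/bpf. On a real host you would pass
# open("/proc/mounts").read(); here we parse a sample string instead.
def bpffs_mounted(mounts_text, mountpoint="/sys/fs/bpf"):
    for line in mounts_text.splitlines():
        fields = line.split()
        # /proc/mounts fields: device mountpoint fstype options dump pass
        if len(fields) >= 3 and fields[1] == mountpoint and fields[2] == "bpf":
            return True
    return False

sample = "bpf /sys/fs/bpf bpf rw,nosuid,nodev,noexec,relatime,mode=700 0 0\n"
```

If the mount is absent, the init container performs the equivalent of `mount -t bpf bpf /sys/fs/bpf` before the agent starts.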
Nov 13 08:29:39.720218 containerd[1479]: time="2024-11-13T08:29:39.720118137Z" level=info msg="shim disconnected" id=5b04231255e456c97cf486c80753f5e2fbee176dcd37c49f787c382fb1048478 namespace=k8s.io
Nov 13 08:29:39.720218 containerd[1479]: time="2024-11-13T08:29:39.720196328Z" level=warning msg="cleaning up after shim disconnected" id=5b04231255e456c97cf486c80753f5e2fbee176dcd37c49f787c382fb1048478 namespace=k8s.io
Nov 13 08:29:39.720218 containerd[1479]: time="2024-11-13T08:29:39.720209715Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 13 08:29:40.496884 kubelet[2667]: E1113 08:29:40.496836 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 13 08:29:40.500580 containerd[1479]: time="2024-11-13T08:29:40.500483357Z" level=info msg="CreateContainer within sandbox \"8387bccbac6ea40b5f07c1b5816677b0ee0c515e600ee3a44fe79930b5038455\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Nov 13 08:29:40.557956 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3667552917.mount: Deactivated successfully.
Nov 13 08:29:40.583030 containerd[1479]: time="2024-11-13T08:29:40.581869616Z" level=info msg="CreateContainer within sandbox \"8387bccbac6ea40b5f07c1b5816677b0ee0c515e600ee3a44fe79930b5038455\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"00a2a4a614b1a7edf5b50fd89c37ea4d482b5783513234631e872d9740e88139\""
Nov 13 08:29:40.585087 containerd[1479]: time="2024-11-13T08:29:40.585026009Z" level=info msg="StartContainer for \"00a2a4a614b1a7edf5b50fd89c37ea4d482b5783513234631e872d9740e88139\""
Nov 13 08:29:40.648472 systemd[1]: Started cri-containerd-00a2a4a614b1a7edf5b50fd89c37ea4d482b5783513234631e872d9740e88139.scope - libcontainer container 00a2a4a614b1a7edf5b50fd89c37ea4d482b5783513234631e872d9740e88139.
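The `var-lib-containerd-tmpmounts-containerd\x2dmount3667552917.mount` unit name above uses systemd's path-escaping scheme: "/" becomes "-" and a literal "-" becomes "\x2d". A sketch that decodes such a mount unit name back into its filesystem path, covering only the two escaping rules these log lines actually use (systemd-escape handles more):

```python
import re

# Decode a systemd-escaped mount unit name back into a path.
# Assumption: only the "-" -> "/" and "\xNN" rules are needed here;
# this is not a full reimplementation of systemd-escape.
def unescape_unit_path(unit):
    name = unit.removesuffix(".mount")
    path = "/" + name.replace("-", "/")  # unit names drop the leading "/"
    # "\x2d" sequences decode back to the literal bytes they encode.
    return re.sub(r"\\x([0-9a-f]{2})",
                  lambda m: chr(int(m.group(1), 16)), path)

decoded = unescape_unit_path(
    "var-lib-containerd-tmpmounts-containerd\\x2dmount3667552917.mount")
```

Applied to the unit in the log, this yields containerd's temporary mount directory under /var/lib/containerd/tmpmounts.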
Nov 13 08:29:40.706134 systemd[1]: cri-containerd-00a2a4a614b1a7edf5b50fd89c37ea4d482b5783513234631e872d9740e88139.scope: Deactivated successfully.
Nov 13 08:29:40.716735 containerd[1479]: time="2024-11-13T08:29:40.716597781Z" level=info msg="StartContainer for \"00a2a4a614b1a7edf5b50fd89c37ea4d482b5783513234631e872d9740e88139\" returns successfully"
Nov 13 08:29:40.762807 containerd[1479]: time="2024-11-13T08:29:40.762497052Z" level=info msg="shim disconnected" id=00a2a4a614b1a7edf5b50fd89c37ea4d482b5783513234631e872d9740e88139 namespace=k8s.io
Nov 13 08:29:40.762807 containerd[1479]: time="2024-11-13T08:29:40.762608305Z" level=warning msg="cleaning up after shim disconnected" id=00a2a4a614b1a7edf5b50fd89c37ea4d482b5783513234631e872d9740e88139 namespace=k8s.io
Nov 13 08:29:40.762807 containerd[1479]: time="2024-11-13T08:29:40.762622710Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 13 08:29:40.956512 kubelet[2667]: E1113 08:29:40.956446 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 13 08:29:41.259673 kubelet[2667]: E1113 08:29:41.259590 2667 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Nov 13 08:29:41.507299 kubelet[2667]: E1113 08:29:41.505593 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 13 08:29:41.517789 containerd[1479]: time="2024-11-13T08:29:41.517283082Z" level=info msg="CreateContainer within sandbox \"8387bccbac6ea40b5f07c1b5816677b0ee0c515e600ee3a44fe79930b5038455\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Nov 13 08:29:41.544728 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-00a2a4a614b1a7edf5b50fd89c37ea4d482b5783513234631e872d9740e88139-rootfs.mount: Deactivated successfully.
Nov 13 08:29:41.550178 containerd[1479]: time="2024-11-13T08:29:41.550028043Z" level=info msg="CreateContainer within sandbox \"8387bccbac6ea40b5f07c1b5816677b0ee0c515e600ee3a44fe79930b5038455\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"30eddf61cef406ea3b1e06382692ee432711bf2a9fe673f0e8f29e9826e63689\""
Nov 13 08:29:41.554755 containerd[1479]: time="2024-11-13T08:29:41.553249041Z" level=info msg="StartContainer for \"30eddf61cef406ea3b1e06382692ee432711bf2a9fe673f0e8f29e9826e63689\""
Nov 13 08:29:41.634432 systemd[1]: Started cri-containerd-30eddf61cef406ea3b1e06382692ee432711bf2a9fe673f0e8f29e9826e63689.scope - libcontainer container 30eddf61cef406ea3b1e06382692ee432711bf2a9fe673f0e8f29e9826e63689.
Nov 13 08:29:41.704713 containerd[1479]: time="2024-11-13T08:29:41.704340788Z" level=info msg="StartContainer for \"30eddf61cef406ea3b1e06382692ee432711bf2a9fe673f0e8f29e9826e63689\" returns successfully"
Nov 13 08:29:41.960827 kubelet[2667]: E1113 08:29:41.960511 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 13 08:29:42.392067 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Nov 13 08:29:42.516129 kubelet[2667]: E1113 08:29:42.514649 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 13 08:29:42.562152 kubelet[2667]: I1113 08:29:42.562029 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gqm7b" podStartSLOduration=5.561220313 podStartE2EDuration="5.561220313s" podCreationTimestamp="2024-11-13 08:29:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-13 08:29:42.561058756 +0000 UTC m=+96.818542689" watchObservedRunningTime="2024-11-13 08:29:42.561220313 +0000 UTC m=+96.818704065"
Nov 13 08:29:43.520546 kubelet[2667]: E1113 08:29:43.520444 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 13 08:29:44.318977 systemd[1]: run-containerd-runc-k8s.io-30eddf61cef406ea3b1e06382692ee432711bf2a9fe673f0e8f29e9826e63689-runc.obXrhu.mount: Deactivated successfully.
Nov 13 08:29:46.875158 systemd-networkd[1375]: lxc_health: Link UP
Nov 13 08:29:46.907998 systemd-networkd[1375]: lxc_health: Gained carrier
Nov 13 08:29:47.505484 kubelet[2667]: E1113 08:29:47.505126 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 13 08:29:47.537368 kubelet[2667]: E1113 08:29:47.537068 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 13 08:29:48.793233 systemd-networkd[1375]: lxc_health: Gained IPv6LL
Nov 13 08:29:49.100779 systemd[1]: run-containerd-runc-k8s.io-30eddf61cef406ea3b1e06382692ee432711bf2a9fe673f0e8f29e9826e63689-runc.o9AY0o.mount: Deactivated successfully.
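The `pod_startup_latency_tracker` record above reports podStartSLOduration=5.561220313 for cilium-gqm7b. That number is the gap between the pod's creation timestamp (08:29:37, whole-second precision) and the observed running time (08:29:42.561220313). A short Python check of that arithmetic, with nanosecond timestamps trimmed to the microseconds `datetime` can carry:

```python
from datetime import datetime

# Parse kubelet's "2024-11-13 08:29:42.561220313 +0000 UTC" timestamp style.
# Fractional seconds are truncated to 6 digits because datetime stores
# microseconds, not nanoseconds; the sub-microsecond loss is irrelevant here.
def parse_k8s_time(s):
    date, clock, offset, _ = s.split()  # drops the trailing "UTC" label
    fmt = "%Y-%m-%d %H:%M:%S"
    if "." in clock:
        whole, frac = clock.split(".")
        clock = whole + "." + frac[:6]
        fmt += ".%f"
    fmt += " %z"
    return datetime.strptime(f"{date} {clock} {offset}", fmt)

created = parse_k8s_time("2024-11-13 08:29:37 +0000 UTC")
running = parse_k8s_time("2024-11-13 08:29:42.561220313 +0000 UTC")
slo = (running - created).total_seconds()  # matches podStartSLOduration
```

The zero-valued firstStartedPulling/lastFinishedPulling fields ("0001-01-01 00:00:00") indicate no image pull was observed, so the SLO duration here is pure start latency.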
Nov 13 08:29:51.534787 kubelet[2667]: E1113 08:29:51.534522 2667 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:43924->127.0.0.1:45973: write tcp 127.0.0.1:43924->127.0.0.1:45973: write: broken pipe
Nov 13 08:29:53.801550 sshd[4455]: Connection closed by 139.178.89.65 port 41872
Nov 13 08:29:53.803462 sshd-session[4449]: pam_unix(sshd:session): session closed for user core
Nov 13 08:29:53.822681 systemd-logind[1452]: Session 28 logged out. Waiting for processes to exit.
Nov 13 08:29:53.827388 systemd[1]: sshd@27-209.38.128.242:22-139.178.89.65:41872.service: Deactivated successfully.
Nov 13 08:29:53.833193 systemd[1]: session-28.scope: Deactivated successfully.
Nov 13 08:29:53.837815 systemd-logind[1452]: Removed session 28.
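The `sshd@27-209.38.128.242:22-139.178.89.65:41872.service` units that open and close throughout this log are socket-activated per-connection sshd instances; the instance name encodes a counter plus the local and remote address:port pairs. A sketch that splits such a unit name into its parts (IPv4-only regex, illustrative rather than a general parser; IPv6 instances would need a different pattern):

```python
import re

# Split a per-connection sshd unit name of the form
#   sshd@<n>-<local-addr>:<local-port>-<remote-addr>:<remote-port>.service
# into its components. Assumption: IPv4 addresses only.
UNIT_RE = re.compile(
    r"sshd@(?P<n>\d+)-(?P<laddr>[\d.]+):(?P<lport>\d+)-"
    r"(?P<raddr>[\d.]+):(?P<rport>\d+)\.service")

def parse_sshd_unit(unit):
    m = UNIT_RE.fullmatch(unit)
    return m.groupdict() if m else None

parts = parse_sshd_unit("sshd@27-209.38.128.242:22-139.178.89.65:41872.service")
```

This makes it easy to correlate a unit's lifecycle messages with the matching `sshd[...]`/`sshd-session[...]` lines, e.g. confirming that session 28 above belonged to the connection from 139.178.89.65:41872.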